Couple basic questions

Hi all, just wanted to ask a couple of basic questions as I’m trying to wrap my head around this.
I’m trying to build a simple energy monitor for my sump pumps. I have a Particle Photon and a 333 mV, 100 A CT. I followed the documentation and set up something really simple with a 10k voltage divider and a 100 µF capacitor I had lying around (I actually have two, which I’ll put in series to halve the capacitance). The Photon provides 3.3 V.

I understand the basic idea of adding the DC bias to the reading. What I’m struggling with is that even when the CT isn’t on a live circuit, I’m still reading quite a bit of current. I followed some basic code (roughly the sketch below) and am using the current-only function, since all I have is current.
What could I be missing that causes the program to report current when there is none?
The second question is about the calibration value. How is that calculated? I read something about it being the current that would produce a 1 V output from the CT, but my CT maxes out at 333 mV, so something doesn’t add up.
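For reference, here’s roughly what I’m running - the pin, the sample count, and the calibration of 300 are just what I pulled together from the examples:

```cpp
#include "EmonLib.h"              // emonLib (Arduino-style port for the Photon)

EnergyMonitor emon1;

void setup()
{
  Serial.begin(9600);
  emon1.current(A0, 300.0);       // analog input pin, calibration - 300 was my guess (my second question)
}

void loop()
{
  double Irms = emon1.calcIrms(1480);  // RMS current over 1480 samples
  Serial.println(Irms);                // prints a nonzero value even with nothing on the CT
  delay(1000);
}
```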

Any kind of input would be greatly appreciated here, thank you.

You need not have bothered with putting two capacitors in series - the value, as long as it is “enough”, is not critical. With 100 µF, you just need to wait a little longer for the mid-point voltage to stabilise at 1.65 V or so before you use the readings. You must keep sampling while that happens, because there’s a digital filter in emonLib - I take it you are using emonLib - that removes that offset: the voltage must physically stabilise, and then the digital filter must catch up with it.
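In code, one simple way to handle that - just a rough sketch, the numbers are a guess - is to read and discard the first few results after power-up:

```cpp
// Rough sketch: discard the first few readings while the bias mid-point
// settles and emonLib's digital filter catches up. Ten throwaway readings
// of 1480 samples each is only a guess - adjust to suit.
for (int i = 0; i < 10; i++) {
  emon1.calcIrms(1480);               // read and throw away
}
double Irms = emon1.calcIrms(1480);   // first reading you actually use
```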

The calibration is a multiplier, and what you read is absolutely correct - the calibration factor is the current that would (it doesn’t have to actually flow) produce a 1 V output at the input to your ADC. So by some very simple maths, it is 300: 100 A produces 0.333 V, so you would need 300 A to produce 1 V - if it could. Does that make sense now?
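Or, written straight into your sketch (A0 standing in for whichever analog pin you’re actually using):

```cpp
// Calibration = the current that would give 1 V at the ADC input.
// For a 100 A : 0.333 V c.t. that is 100 / 0.333, i.e. roughly 300.
const double ICAL = 100.0 / 0.333;
emon1.current(A0, ICAL);
```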

You also need to check that emonLib is getting the correct value (4096) for ADC_COUNTS, because that is part of the calibration.
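In the copies of EmonLib.h I’ve seen, the ADC resolution is set along these lines - do check your own copy, because a 10-bit default (1024 counts) would put your readings out by a factor of four:

```cpp
// From EmonLib.h (recent versions - check your copy):
#if defined(__arm__)
#define ADC_BITS    12              // ARM targets such as the Photon: 12-bit ADC
#else
#define ADC_BITS    10              // AVR Arduinos: 10-bit ADC
#endif

#define ADC_COUNTS  (1<<ADC_BITS)   // 4096 for 12 bits, 1024 for 10 bits
```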

Now therein lies the answer to your problem…

You’re using less than a third of the available input range, so any sniff of interference or noise is effectively multiplied by three, compared to a 1 V output c.t.

I’d recommend trying different power supplies - that can make a big difference. We’re reasonably certain that the major contribution to unwanted output is noise on the power supply you’re using, but a “breadboard” layout can also make a significant contribution.

If you think about it, the maximum input of your c.t. is 100 A, and it produces 333 mV rms. If you’re trying to measure 1 A (120 W to you), that’s a 3.3 mV rms signal going into your ADC. You have a 12-bit ADC, so that 3.3 mV rms signal spans a peak-to-peak range of only 11 or 12 counts. That gives you some idea of the sensitivity of your input.
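For what it’s worth, here’s the back-of-envelope arithmetic behind those 11 or 12 counts, assuming a 3.3 V reference on the 12-bit ADC:

```cpp
// Back-of-envelope check, assuming a 3.3 V reference on the 12-bit ADC.
const double vPerCount = 3.3 / 4096.0;        // about 0.81 mV per ADC count
const double vrmsAt1A  = 0.333 / 100.0;       // 1 A on a 100 A : 333 mV c.t. = 3.33 mV rms
const double vPkPk     = vrmsAt1A * 2.8284;   // times 2*sqrt(2), about 9.4 mV peak-to-peak
const double counts    = vPkPk / vPerCount;   // about 11.7 counts peak-to-peak
```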

Ok, I’m with you on everything, thank you. I had actually used 300 for my calibration, so I’m glad to see I understood that correctly.
I was definitely considering a smaller CT for this, maybe 15-30 A. Now I’ll add a 1 V output to the requirements.
thank you!
