DIY Current Monitor on Raspberry Pi. My power calculation isn't accurate. Help please?

YVW,S!

Actually, the 20 kHz sample is still fairly well distorted, i.e. read that as “bad.”
The 100 and 200 kHz samples look much better.
Try it at say, 50 or 75 kHz.


You’ve got a little flat-topping (on the +ve peak of the first cycle)
and a little bit of a “divot” (just before the third +ve peak)
There’s a little flattening on the -ve peaks too.
But overall, it looks better than the sample at 20 kHz!

I’m stuck on calculating the RMS voltage (in theory). I’m using this circuit (source) with a 3.3 V DC board voltage and a 1.65 V reference voltage for my AC input.

[image: signal-conditioning circuit schematic]

My ADC is 10 bit so it returns values between 0 and 1023. Here is a sample of the analog data I’m getting from the circuit as read by my ADC:

I am having trouble finding the RMS voltage from the sample data series. I can turn the raw data into actual voltages being fed from the circuit no problem, but… then what?

With the current transformer, it was easy - I knew the input/output ratio and could apply the ratio to my calculations. I’ve calculated the input/output ratio of my AC transformer by measuring the input and output voltage. The ratio is 11.5:1.

I don’t know if it’s because I’ve been at this for many hours today, but I just can’t seem to figure it out on paper. I think there are two ratios at play - one for the AC transformer and one from the input signals that are conditioned from the circuit. Maybe the problem is I haven’t determined what the peak voltage output of the AC transformer is?

Edit: I should add that my intention in using the AC transformer is to measure voltage at the start of each “polling cycle” where I will calculate the power usage using the CT sensor data for that cycle. Instead of assuming a constant 120V in my application, I just want to probe the AC transformer and get a “live” voltage reading that I can then use for the next second or two while I conduct the calculations (as opposed to calculating the instantaneous voltage for each data point).

I’d read the data sheet very carefully before you spend too much effort on that - look particularly at the block diagram on page 1. That shows a multiplexer on the front end of a single ADC, so read what I wrote above again.

If it’s any help, emonLib on an Atmel 328P manages about 44 samples per cycle of your US mains. If you have regulations regarding the maximum amount of harmonics you’re allowed to inject into the supply, that’s a good guide to the sort of maximum frequency you need to resolve. If the law says there must be negligible current at (say) the 15th harmonic, there’s little point in trying to measure it in the first place.

You’re right, there are. The mains voltage is divided down twice, once by the transformer (in the ratio you’ve measured at 11.5:1) and then further by the resistive divider chain R1 & R2 in the ratio 10 k / (10 k + 100 k), or 11:1. So the overall ratio is 11.5 × 11 = 126.5:1.

That’s only important to ensure you’re not exceeding the input range of the ADC. As long as you use consistent units throughout, it makes no difference to the ratios. I have a ‘rule of thumb’ - the input voltage needed at the ADC is 1.1 V rms, for a 3.3 V input range. That takes into account component tolerances and allows a bit of headroom. If you design the voltage divider for that but no more at your maximum supply voltage (resistors and transformer combined, and it works for c.t. and burden combined too) then you won’t be far wrong.
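As a quick sanity check of that rule of thumb against your divider chain (only a sketch - the 132 V worst-case mains figure below is an assumption, 120 V nominal plus 10%):

```python
import math

# Rough headroom check for the divider described above.
V_MAINS_MAX = 120 * 1.10                  # assumed worst-case mains, V rms
TRANSFORMER_RATIO = 11.5                  # measured transformer ratio
DIVIDER_RATIO = (100e3 + 10e3) / 10e3     # R1 = 100k, R2 = 10k -> 11:1
V_REF = 1.65                              # mid-rail bias, V
V_SUPPLY = 3.3                            # ADC input range, V

v_adc_rms = V_MAINS_MAX / (TRANSFORMER_RATIO * DIVIDER_RATIO)
v_adc_peak = V_REF + v_adc_rms * math.sqrt(2)
print(f"{v_adc_rms:.2f} V rms at the ADC pin")      # ~1.04 V rms, just under 1.1 V
print(f"{v_adc_peak:.2f} V peak vs {V_SUPPLY} V")   # ~3.13 V, inside the 3.3 V range
```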

You won’t get accurate power readings for loads like induction motors, or computer power supplies or the like, doing that. You’ll be calculating apparent power, not real power which is the one you pay for. Apparent power is equal to real power only with a purely resistive (heating) load. For everything else, it’s always greater. For something like a microwave oven on standby, it will probably be many times - 10 or so - greater.

I’m thinking that if the chip can read up to 200k SPS, and I’m only sampling the CT sensor at about half of that, there is some leftover read bandwidth in there. I would anticipate some latency when changing the channel to your point… my threading idea would essentially have to set the channel upon each sample, so I wonder what that would do to the effective read speed. I’m getting ahead of myself with that though :slight_smile:

Looking here at the math theory section on the guide… to calculate real power, you need instant current and instant voltage. The instant voltage is essentially the net change in input voltage * the overall ratio? For instance, the initial raw ADC value in the voltage data I gathered above was about 900. So, the net change in DC voltage when ADC input = 900 is ((900 − 512) / 1024) * 3.3 ≈ +1.25 V over the 1.65 V reference voltage. And the instantaneous AC voltage at that point of the wave is 1.25 * 126.5?
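Or, putting my reasoning in code (just a sketch, using the 126.5:1 overall ratio from above):

```python
ADC_COUNTS = 1024        # 10-bit ADC
V_SUPPLY = 3.3           # board / ADC reference voltage
V_MID = V_SUPPLY / 2     # 1.65 V bias point
OVERALL_RATIO = 126.5    # transformer (11.5:1) x divider (11:1)

def instantaneous_mains_voltage(raw):
    """Convert one raw ADC reading back to mains volts at that instant."""
    v_adc = raw * V_SUPPLY / ADC_COUNTS     # volts at the ADC pin
    v_signal = v_adc - V_MID                # remove the 1.65 V offset
    return v_signal * OVERALL_RATIO         # scale back up to mains volts

print(instantaneous_mains_voltage(900))     # ~ +158 V instantaneous
```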

I see, thanks. Looks like I need to modify my program to take one CT sensor sample, change ADC input, take one voltage sample, change back to the CT sensor input, and repeat. It will be interesting to see the impact to sampling speed when I change channels before every sample. I also thought about using another ADC on the Pi’s second SPI bridge to read a single channel from each one simultaneously (via threading), but I’m not sure which one would be better. Unfortunately I don’t have any hands-on time to work on the project today so I’ll have to revisit it tomorrow.
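Roughly what I have in mind for the collection loop (only a sketch - `read_adc(channel)` is a stand-in for whatever my real ADC driver call turns out to be):

```python
import time

def read_adc(channel):
    """Stand-in for the real SPI read of one ADC channel (hypothetical)."""
    raise NotImplementedError

def collect_pairs(duration_s=1.0, ct_channel=0, voltage_channel=1):
    """Alternate between the CT channel and the voltage channel so every
    current sample has a voltage sample taken right next to it in time."""
    currents, voltages = [], []
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        currents.append(read_adc(ct_channel))
        voltages.append(read_adc(voltage_channel))
    return currents, voltages
```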

Thank you for the feedback! You rock :+1:

What I’ve done in emonLibCM is, for each of the inputs, as each sample is read I subtract a nominal offset (which is hopefully the input’s quiescent value, but we know it never is, exactly), then accumulate the resulting value and the resulting value squared, and for each pair of voltage and current samples, V × I.
Then, at the end of the averaging period, calculate the averages and subtract a final correction for the offset, taking advantage of the relationship for the components of a complex wave:
rms(signal + offset) = √( rms(signal)² + offset² ).
For the real power, it is only necessary to subtract the product of voltage offset and current offset (the ‘offset power’) from the average power (the average of the instantaneous powers).
The nominal offset is subtracted first only to keep the numbers smaller. There’s no other reason to do so. Doing all the ‘per sample’ maths as integers speeds up the processing significantly.

At this point, the squaring and offset removal processes have removed the need to know the midpoint voltage, and you simply apply the scaling factor for the number of volts input per count out of the ADC. So if your ADC is 4096 counts per 3.3 volts at the ADC input, and your voltage divider is 126.5 V at the mains input per volt at the ADC input, then overall it’s 4096/3.3 counts per 126.5 V at the mains input.
It’s exactly the same process for current.
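In rough Python terms, one averaging period might look like this (a sketch of the idea only - the current calibration constant here is illustrative, since it depends on your c.t. and burden):

```python
import math

ADC_COUNTS = 1024                    # your 10-bit ADC
V_CAL = 126.5 * 3.3 / ADC_COUNTS     # mains volts per ADC count
I_CAL = 100.0 * 3.3 / ADC_COUNTS     # illustrative: amps per ADC count for your c.t. + burden
V_OFFSET_NOMINAL = 512               # nominal mid-rail, in counts
I_OFFSET_NOMINAL = 512

def average_period(v_samples, i_samples):
    """Accumulate per-sample sums, then correct for the residual offset at the end."""
    sum_v = sum_i = sum_v2 = sum_i2 = sum_p = 0
    for v_raw, i_raw in zip(v_samples, i_samples):
        v = v_raw - V_OFFSET_NOMINAL        # nominal offset out first (keeps the numbers small)
        i = i_raw - I_OFFSET_NOMINAL
        sum_v += v
        sum_i += i
        sum_v2 += v * v
        sum_i2 += i * i
        sum_p += v * i
    n = len(v_samples)
    v_offset = sum_v / n                    # residual (actual minus nominal) offset
    i_offset = sum_i / n
    # rms(signal + offset)^2 = rms(signal)^2 + offset^2, so subtract offset^2
    v_rms = math.sqrt(sum_v2 / n - v_offset ** 2) * V_CAL
    i_rms = math.sqrt(sum_i2 / n - i_offset ** 2) * I_CAL
    real_power = (sum_p / n - v_offset * i_offset) * V_CAL * I_CAL
    return v_rms, i_rms, real_power
```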

If you want real power accurately, another demon rears its ugly head - phase errors. Your c.t. and your v.t. have a phase error - the output waveform is out of step with the input wave, by anything up to a few degrees. Usually, the output appears to be ahead of the input (it leads). You can get an idea of how much from the test reports I did for the ‘Learn’ section. What that means is, as you’re sampling quite fast, you’ll need to delay one set of samples by the difference between the phase errors of the two transformers. Using our standard ones as an example, the v.t. shows a lead of around 5°, and the c.t. shows a lead of about 4°, so you’d need to delay using the v.t’s readings by 1°, or 46µs, and at 200k SPS (100k sample pairs per second), that’s 4.6 samples. And that needs to be adjustable, because the phase error is different for each type of transformer, and it varies according to what’s being measured.
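A fractional delay like that can be applied by interpolating between successive readings, much as emonLib’s phase calibration does. A Python sketch of the idea:

```python
def delay_fractional(samples, delay):
    """Delay a sample stream by a fractional number of samples using linear
    interpolation, e.g. delay = 4.6 to hold the voltage stream back by 4.6
    samples. Output element j then lines up with index j + int(delay) + 1
    of the other, undelayed, stream."""
    whole = int(delay)        # e.g. 4
    frac = delay - whole      # e.g. 0.6
    out = []
    for k in range(whole + 1, len(samples)):
        newer, older = samples[k - whole], samples[k - whole - 1]
        # interpolate 'frac' of the way back from the newer sample towards the older one
        out.append(newer + frac * (older - newer))
    return out
```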

There’s no real advantage in doing that, as you now know. I don’t think you’re that desperate for a high sample rate.


Thanks for the breakdown. This is what I’m currently doing to find the RMS current since I’m taking all the current samples at once. When I refactor my collection function to take one sample of each channel (current sensor and voltage sensor), I’ll follow suit. The nominal offset/quiescent value is simply the reference voltage in the signal conditioning circuit (~ 1.65xxxV), right? Or approximately 512 (one half of 1024) from the raw ADC channel?

I think I follow this - but I’ll have to get back to you later today when I am back in front of my project to see if I’ve ironed this part out.

This is absolutely fascinating to think of. I’m not an electrical engineer, but I do like thinking programmatically, so here’s how I would attempt to correct the phase drift between inputs. Is this correct?

  • Analyze the samples and look for the first peak (or trough) in both the current and voltage samples.

  • Assuming the samples are in an ordered list, grab the index position of the initial peak for the CT and V samples. Find the difference between the two indexes, and this is equal to the number of samples that the two sets are out of phase by (let this equal F)

  • Align the two sets by removing F number of samples from the set that peaks after the first, so that the peaks begin at the same index position number in their respective lists. Also remove F number of samples from the end of the set that was previously untouched, so that the total number of samples in each set is the same.

  • Do a couple spot checks throughout the sample data to ensure the peaks (or troughs) are still aligned. (Is this even necessary? Can/does the AC frequency from a large-scale commercial energy provider ever slightly drift away from 60Hz? Or is my initial peak alignment method accurate enough for correcting the <1 second worth of sample data?)

Here’s a marked up chart of what I mean using my sample data. This sample data is not actually the voltage and current samples - it’s just the data I had readily accessible. It is actually two different current samples taken at different sampling speeds (so the wavelengths of these samples aren’t even the same and they’ll never align). But, for the sake of theory and demonstration, suppose these two waves were sampled at the same rate and this was an actual variance between my CT and voltage sensor readings.


In summary, I would shift the blue wave sample data left by 21 positions so that its peak aligns with the peak of the purple wave. I would then remove 21 samples from the end of the purple data set.
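Here’s a rough sketch of that alignment in code (assuming both lists were sampled at the same rate over the same window):

```python
def align_by_first_peak(ct, volts):
    """Line up the first peaks of the two sample lists, then trim both to the
    same length (a naive first-local-maximum search; noise will fool it)."""
    def first_peak(samples):
        for k in range(1, len(samples) - 1):
            if samples[k - 1] < samples[k] > samples[k + 1]:
                return k
        return 0

    f = first_peak(ct) - first_peak(volts)   # samples the two sets are out of step by
    if f > 0:                                # CT peaks later: drop its first f samples
        ct = ct[f:]
        volts = volts[:len(ct)]              # trim the same number off the other set's end
    elif f < 0:                              # voltage peaks later
        volts = volts[-f:]
        ct = ct[:len(volts)]
    return ct, volts
```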

Correct.

You cannot solve that in a program - or you can, but only when you know that current and voltage are exactly in phase. The whole point of measuring real power is that the type of load shifts the relative phases, and that’s what you measure (or rather, you measure the effect of them shifting).
The easy and practical way to do it is to fix yourself up with a big purely resistive load - water heater / electric kettle / fan-less electric heater etc. - and tweak the one set of samples backwards and forwards until you get a maximum power - or better, real power ÷ apparent power (V × I) = 1 - or as near as you can get it.
You can’t rely on the positions of the peaks or the zero crossing points, because distortions in the transformers move them about, and it’s the 60 Hz component only, stripped of all its harmonics, that you’re bothered about.
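The tweaking can be as crude as sweeping whole-sample shifts and keeping the one that brings real ÷ apparent power closest to 1 - a sketch, assuming both lists already have their offsets removed:

```python
def find_phase_correction(v, i, max_shift=10):
    """With a purely resistive load on the c.t., try shifting the two streams
    against each other by -max_shift..+max_shift whole samples and keep the
    shift that pushes real power / apparent power closest to 1.
    Both lists are assumed to be offset-corrected (centred on zero) already."""
    def pf(v, i):
        n = min(len(v), len(i))
        real = sum(a * b for a, b in zip(v[:n], i[:n])) / n
        vrms = (sum(a * a for a in v[:n]) / n) ** 0.5
        irms = (sum(b * b for b in i[:n]) / n) ** 0.5
        return real / (vrms * irms)

    # pf can never exceed 1, so maximising it drives it as close to 1 as possible
    return max(range(-max_shift, max_shift + 1),
               key=lambda s: pf(v[s:], i) if s >= 0 else pf(v, i[-s:]))
```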


It can, and does, but the actual amount is quite small. Viz:

As you can see, it dips and rises all day every day, usually anywhere from 59.95 to 60.05 about 98% of the time. If it ever drops below 59.9, or rises above 60.1, a serious system event has occurred, e.g. a major loss of generation or load somewhere in the US.


Well crap - I suppose there’s one downside to using a microprocessor as opposed to a micro-controller. I can’t guarantee that my code will be executed at exactly the same time, every time. In everyday use this is never really a problem, but at these small time intervals, it might be.

The good news is I just pulled some data from both channels in the same loop, as in: read channel 0, read channel 1, repeat. I collected 2000 pairs of readings (current and voltage) in 0.99 seconds with an ADC clock speed of 150 kHz. Here is the raw data for the first two oscillations. I’ve adjusted the scale on the chart for the voltage reading so it fits on top of the current curve and the phase error is easier to see. It looks like the voltage reading peaks only 2 or 3 samples before the current reading peaks.

Would that phase difference have a measurable impact from an accuracy perspective? I’m just trying to gauge exactly how accurate this already is. Am I already well above the accuracy threshold of a commercially available meter, like the Kill-A-Watt?

While writing this post, I played around a bit and tried to correct for the phase error in software. The only thing I could come up with was to increase the clock speed of the ADC to 200k so that there is less time between reading voltage and current. I also added a 100 microsecond delay after reading both channels to try to offset the increase in read speed so that I don’t get too much data. This helped bring the current sensor peak closer to the voltage sensor peak by 1-2 samples, but they’re still not perfectly in phase. Here is an interactive plot for my attempt at getting the phases aligned closer.

Do you think I’ll be fine from an accuracy perspective with the above sample data (in the linked chart)?

While on the subject of phases, my project, when complete, will have 4 CT sensors total:

  • 2 on my main panel for each 120V leg (which I think are 180° out of phase with each other)

  • 1 on my 100A subpanel (not sure which phase this is coming from - it has its own main breaker fed from the utility’s panel on my house).

  • 1 on my solar feed from the DC inverter.

I am anticipating an issue when trying to calculate instant power using the voltage measurements from leg 1 of the panel, but a CT reading from the main input for leg 2 of the panel. My first thought to work around this is to essentially do what an absolute value operation would do on a negative value (hope that makes sense to you). My second thought would be to add a second voltage sensor for the other leg.

This is very interesting. Thank you for sharing! That is a very small average variation. I guess I wouldn’t need to worry about variations in phase changes throughout my few thousand data samples…

Measuring off your picture, I reckon that’s about 10° phase shift. At unity power factor, that would come out at 0.985, which isn’t too bad. Unfortunately, with a p.f. of 0.8, it would read 0.89 or 0.68 (depending on which way it is), and worse, if you had an almost pure reactive load with a p.f. of 0.1, it could look as if the load was generating!
So if you have “well-behaved” loads with a respectable power factor, it’s a minor error. The worse the power factor, the bigger the error.
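The arithmetic behind those figures, if you want to experiment with it (a quick sketch):

```python
import math

def indicated_pf(true_pf, phase_error_deg):
    """Power factor the rig would report if it has a fixed phase error."""
    true_angle = math.degrees(math.acos(true_pf))
    return math.cos(math.radians(true_angle + phase_error_deg))

for pf in (1.0, 0.8, 0.1):
    print(pf, round(indicated_pf(pf, 10), 3), round(indicated_pf(pf, -10), 3))
# 1.0 -> 0.985 either way
# 0.8 -> 0.684 or 0.892, depending on the sign of the error
# 0.1 -> -0.074 or 0.271: the nearly pure reactive load can look like generation
```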

But the time between readings is only part of the error; as I explained (I think), it’s also down to variable errors in the transformers.

I’m not going to look, please upload it here.

We’ve tried to explain that on the Use in N.America page.

Generally, voltage imbalance between the two legs hasn’t been a problem. And if the c.t. is the wrong way round, you can either reverse it on its cable, or put a minus sign in the maths.

Or swap the CT lead connections.

Indeed, because that’s a wire-ended c.t. (Not a 3.5 mm plug.)

Here is a snippet of the 200kHz samples, scaled so that the waves appear to have the same amplitude:

I just finished refactoring my calculations using my new data collection method and here are the values I came up with for the data set with 2000 samples taken at 200kHz linked above, using an electric heater (with a fan) as the load:

Real Power: 636.604 W
RMS Voltage: 120.134 V
RMS Current: 5.324 A
Apparent Power: 639.546 W
Power Factor: 0.995

p.f. = 0.995 is pretty good for a resistive load, yeah? Now that I have the basics down, I can get started on optimization for looking at the data samples for non-resistive loads.

This is what I was concerned about - that’s an easy fix! The CT sensors I purchased do indeed have the 3.5mm plug and I’m currently just using a female jack with the 3-pin breakout to connect to my breadboard. I may snip the jacks off and use cat5e to transport the signal from the 4 CT sensors that will be in my main panel back to the Pi.

I have some more reading to do on the Calibration Theory section before I continue. Thank you for the assistance so far!

Indeed.

What you need is in post no. 18 - in one sentence: “So if your ADC is 4096 counts per 3.3 volts at the ADC input, and your voltage divider is 126.5 V at the mains input per volt at the ADC input, then overall it’s 4096/3.3 counts per 126.5 V at the mains input.” Simple :grin:

:thinking: If that sentence is about calibration, I have no idea where that needs to be applied, why it needs to be applied, and what it corrects (and how it corrects whatever it is). It sounds similar to the scaling factor, which I’ve already applied to my data. Also, I’m using a 10-bit ADC so I don’t think the 4096 figure is applicable. You probably feel like you’re repeating yourself, but I think I’m missing a piece (or several) of the puzzle.

I’m not sure what to expect when I move my CT sensor over to a non-resistive load, or better yet, the house mains, so I’m trying to broaden my understanding of how other types of loads interact with the waveforms. I just don’t want to be taken by surprise (and frustration) when I connect the other sensors to my breadboard and start compiling the data, only for it to be wrong and not having the slightest clue about where to begin looking for the problem.

Those numbers are (usually) derived from the ideal values that come from manufacturer’s data. In your case, part of one is accurate because you measured it (the transformer ratio). All the rest are subject to manufacturing tolerances. Your reference voltage probably won’t be exactly 3.3 V and the divider resistors won’t be exactly 10 kΩ and 100 kΩ. Your current transformer won’t be exactly 100 A : 50 mA, and the burden won’t be exactly 20 Ω. Calibration simply means tweaking the numbers to take account of all those factors so that you get the same answer as some reliable standard.
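In practice that can be as simple as comparing your reading against a trusted meter and scaling the constant accordingly (a sketch with made-up figures):

```python
# Made-up figures: suppose your monitor reports 118.2 V rms while a trusted
# multimeter on the same supply reads 120.1 V rms at the same moment.
v_cal_ideal = 126.5 * 3.3 / 1024        # 'ideal' mains volts per ADC count
v_cal = v_cal_ideal * 120.1 / 118.2     # scale so your reading agrees with the meter
print(v_cal)                            # use this calibrated constant from now on
```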

You’re right - it’s 1024 in that case.

You’ve almost got that the wrong way round - it’s more like how the waveforms interact with the loads. That can be quite complicated. For example, if you’ve got an old and cheap battery charger, it’s going to have inside the case a transformer and a rectifier. The rectifier will only pass current when the voltage out of the transformer is higher than the battery voltage, so you’ll get a narrow pulse of current around the peak of the voltage wave. If you have a refrigerator with an induction motor in it, then a property of an inductor (a coil of wire, and there’s a lot of that in an electric motor) is the current builds up slowly when you apply a voltage. So the current wave lags behind the voltage wave. Things like microwave oven timers require very little current, and those often use a capacitor to lower the mains voltage to power the electronics. The capacitor works the opposite way to the inductor - when you apply a voltage, the current rushes in fast (that’s why there’s often a sharp crack when you plug it in - a laptop power supply is similar, having large capacitors inside) so then the current waveform leads the voltage.

And that brings me full circle to why you need to set the correction, for timing and all the other factors that shift one wave relative to the other, with a purely resistive load. When you have that right, any other shift results from the properties of the load.

Thank you for clarifying! Would an alternate approach to calibrating static values be to take a sample of the board voltage, and a sample from the voltage divider voltage, prior to the start of each polling cycle? In other words, if I have the actual values that are the direct result of using components with unique tolerances, do I care what those tolerances are in the end? I guess with the current transformers it isn’t possible to do this, but for the board and ref. voltage, it should be?

… by exactly 90 degrees - I just finished a 30 minute lecture on inductors and waveforms :slightly_smiling_face:

So, I suppose the ultimate question is, will my calculations be able to account for these shifts in the wave (and accurately measure power consumption), or do I need to handle different shifting scenarios in code?

In a sample that looks like this…:
[image: sampled voltage and current waveforms]

…I would assume the sign on the power will keep the calculations straightened out, and that I shouldn’t have to worry about the difference in phase between the two waves.

Lastly, I’ve been letting my DIY power monitor run and I’m observing the following output. My CT sensor is on a conductor that is not energized/live. (The real power is actually returning a value between -1 and 1, so I am currently printing 0 if -1 < real power < 1 is true):

Real Power: 0 W | RMS Voltage: 123.713V | RMS Current: 0.3A | Apparent Power: 37.09 W | Power Factor: -0.014
Real Power: 0 W | RMS Voltage: 123.859V | RMS Current: 0.299A | Apparent Power: 36.984 W | Power Factor: -0.022
Real Power: 0 W | RMS Voltage: 123.782V | RMS Current: 0.297A | Apparent Power: 36.767 W | Power Factor: 0.002
Real Power: 0 W | RMS Voltage: 123.751V | RMS Current: 0.301A | Apparent Power: 37.262 W | Power Factor: -0.007

I don’t know why the apparent power and RMS current are so high when there is absolutely nothing on the cable. Do you have any ideas?

That’s exactly what was done in emonLib for the emonTx V2, which could be powered by batteries and didn’t have an on-board regulator. But… You still had to calibrate because the on-board reference of the Atmel 328P is 1.1 V ± 0.1 V.

Sorry, but you can’t get out of jail that easily. :laughing:

No, you don’t. All you care about is that the resulting voltage and current readings are correct.

I can’t be explaining this very well. There are two groups of things that cause the voltage and current waveforms, which come out of your processor as numbers, to be shifted in relation to one another. The first is errors in the measurement technique and properties of the measuring circuitry. Those you aim to correct or compensate for.

The second group arise from properties of the load. You must not try to compensate for those, because if you do, the measurement will be wrong.

I have. It’s most likely electrical noise being picked up from somewhere. Those numbers are absolutely characteristic of that. What is the construction of your analogue circuitry before the signal gets to the ADC? Because of the voltages involved (0.3 A represents about 3 mV at the ADC input), you need to be very careful with the layout, grounding, and decoupling of the supply. There’s lots of information available from all the manufacturers of ADCs and of processors that incorporate ADCs.