I have created this topic for any comments or support queries about EmonLibCM - Version 2.01 and subsequent revisions.
I was having a look at how the new more efficient phase error adjustment stuff works and got stuck on this line of code in EmonLibCM_Init()
double phase_shift = ((double)phaseCal_CT[i] + (double)ADC_Sequence[i] * ADCDuration * cycles_per_second/MICROSPERSEC) * two_pi / 360.0 ;
It looks like the intent is to take the phase error introduced by the sensors (4.2° say), add in the phase error introduced by the ADC sampling lag, and then convert the lot to radians for the subsequent trig maths.
Are those two terms in the same units? The first one looks to be in degrees while the second appears to be in “cycles”. Is there a missing “x 360” on the second term?
I’ll check. It looks as if you’re right (inevitably). I’ll see what I think the best solution is, unless you’ve already thought about it?
Yes, I’m afraid there’s a “360” missing. It means that there is effectively no compensation for the position in the scan cycle.
double phase_shift = ((double)phaseCal_CT[i] / 360.0 + (double)ADC_Sequence[i] * ADCDuration * cycles_per_second/MICROSPERSEC) * two_pi;
Now the big question is, how did everybody miss it?
Yes, that had me wondering whether it was compensated for somewhere else.
At 50Hz, a 104 usec lag is 1.872°, so that’s a pretty big phase error for the last one in the scan cycle - 9.36°?
It also got me wondering if the default 4.2° is truly just the sensor error or might have some ADC lag error inadvertently built into it too? Could it be the sensor error is 2.3°? Was the 4.2° measured independently, or was it arrived at by calibration while assuming the ADC lag compensation was working as intended?
Don’t ask me now - all this was done a very long time ago, about 2 years back - then left in limbo when I didn’t have an emonPi set up to run long-term tests and I thought Trystan was testing when in fact he wasn’t. I started long-term testing in July, with Bill Thomson testing more or less in parallel on a 60 Hz system.
I’m going to have to go back and recheck all those numbers, but I suspect they came from the tests I did separately on the transformers.
At best an only slightly relevant datapoint since I was using an Aussie/Jaycar VT on the stm32 experiments, but I found there I needed 269 usecs (~4.8°) to compensate for the net VT/CT induced phase error at 50Hz. I knew that was all down to the sensors because I had synchronised dual ADCs, one for V and one for I.
Recent samples of the c.t. go from about 2.5° to about 4.5° depending on the current, and the UK v.t. from around 3° to around 4.5° depending on the voltage. So it’s not easy to find a value that is a good compromise. In my testing against the supplier’s meter optical pulses, I found I could set the calibration so that the accumulated kWh see-sawed against the pulse count (I forget which way round it was now): one would lead at low powers and lag at high powers. I couldn’t isolate the cause - I suspect it was this, but it could have been the c.t’s amplitude error as well (I’m talking about a standing load of 100 - 200 W compared to 1 - 2 kW).
Yes, that see-saw effect is one reason I remain unconvinced by metrics like “long term agreement with revenue meter to within 1%” especially for per-breaker monitors. That all the individual channels add up to within 1% of the revenue meter over a month doesn’t really put any upper limit on the errors in individual instantaneous power readings from each channel, and surely the whole reason for per-breaker monitoring is so you can determine how much energy circuit x uses.
And in three phase installations, per circuit monitoring with “derived voltage references” (i.e. only one VT) is particularly vulnerable. In that situation I think the “long term agreement with revenue meter to within 1%”, while no doubt true, is close to meaningless for anything but the grand totals. But if that’s the only measurement you can rely on, you could have saved yourself a lot of CTs and just measured the main feed.
Quite so. But you’re up against accuracy vs perceived accuracy. And as most users don’t have access to high quality test gear, comparison with the supplier’s meter is their only yardstick. Then, if your total consumption is accurate to 1%, that implies that while minor loads might be significantly worse, at least you know to within probably 10%, maybe 5%, what those loads actually are - and until some sort of monitoring is present, that’s been a total unknown. I’m not pretending it’s good, but if the user is happy with it and it’s better than nothing, then it’s OK for their purpose.
I’ve been asking for a 3-phase emonTx (i.e. 3 v.t’s) for a long time, but despite the continuing interest - notably from mainland Europe as well as your part of the world, there’s been no reaction. Hopefully, the STM will put that to rights; we shall see.
Yes, fair comments. I think in some quarters there’s a tendency to just mention the 1% total without any qualification, so the end user is perhaps thinking they’re getting that on each of their channels.
With “derived” 3 phase monitoring, I think even the larger loads can be significantly out. Most loads here are single phase regardless of whether the house is single phase or three phase, and in a three phase house it’s common for the sparkie to distribute the larger loads across different phases. Add to that the observation that it’s common to see 10% variation in phase voltages and it becomes a big fuzzy soup of measurements. Tweak enough knobs often enough and you might happen across a combo that results in a < 1% error in the grand total, but I’d have little confidence in the per-circuit measurements (large or small).
It also varies a lot from distribution transformer to distribution transformer, so one install’s ability to achieve 1% on the total is no indication that another one in a different location will, or even that the original one will continue to when some change is made in a neighbouring property (like installing PV).
Energex recently installed monitoring on the LV side of my local distro transformer (which I think services about 20-30 houses). They monitor both voltage and current on each phase. They’re due to sweep through and replace all the ageing 80A service fuses on top of the telegraph poles, and I heard that while they’re at it, they’ll potentially move houses onto different phases to keep things better balanced based on their acquired data … they know which houses have PV, how big their inverters are and which phase they’re currently on.
I tested both my IDEAL 9V AC-AC transformer (US) and four SCT-013-000 current transformers for phase offsets. This is on the US side, so typical is ~120 Vac on a given standard outlet.
My single IDEAL transformer had a phase offset of 3.41 deg.
The four CTs had additional offsets of +0.54 deg, +0.67 deg, +1.01 deg, and -4.65 deg. Yes, that last one was a bit of an odd-ball.
The ADC lag between V and I can be important, but of course depends on the details. I compensate for it within my own continuous monitoring firmware with real phasor rotations. (Emontx3-continuous - New continuous monitoring firmware)
[edited 2018-12-06 to put correct AC-AC transformer phase offset]
That’s to be expected - the 120 Ω burden gives rise to a huge phase error because it’s heading towards being overloaded. All c.t’s are more accurate the closer their burden is to a short circuit - but measuring the burden voltage becomes harder.
True, thanks… but I forgot to mention that I replaced the 120 Ω surface mount resistor with a 22 Ω through-hole resistor. The fourth CT had the same lag behavior no matter which input channel was used.
In that case, it’s probably from a different batch. Measuring c.t’s independently of the emonTX, I’ve observed a significant variation in phase error over the years. It’s obviously a parameter that’s not kept under strict control in the way that it is in the more costly ‘revenue grade’ devices.
I’ve found another error in that part of the code, so I’ll be providing an update when I’m satisfied with the correction.
7 posts were split to a new topic: Derived voltage reference (three-phase) and utility meter accuracy comparison
Well Glyn, if you’re going to move my comments, how about the above comments that prompted them? I think it’s pretty clear now that this isn’t about the CM.