I got to wondering how to quantify that in some way. I think everyone agrees it’s really only relevant when the load is changing rapidly, but when the load is changing rapidly the 5-second interval average power is also changing wildly from reading to reading. Comparing against a reference meter isn’t much help, because you’ve no way of knowing whether the reference meter and the meter under test are synced to the same 5-second interval. You really need the two meters looking at the exact same 250 line cycles (5 seconds at 50 Hz) and then compare their results.
There’s no way I can tell my energy-IC-based monitor to only look at every 25th cycle, it’s just not in its DNA, but the emontxshield+stm32 demo code offered some possibilities. I modified the f/w slightly to base everything on ZX-detected half-cycles and configured it for a 500 half-cycle (5 second) integration period. Then I pointed two channels at the induction hob to see how they clocked it:
Ch1 (W), Ch2 (W), Diff
566.81, 567.46, 0.11%
550.16, 551.06, 0.16%
192.38, 192.96, 0.30%
388.26, 388.87, 0.16%
561.32, 562.03, 0.13%
378.93, 379.65, 0.19%
All pretty good so far and you can see the wild variations from one 5-second interval to the next as the hob does its thing. 0.3% was the biggest discrepancy I saw.
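For anyone curious what the half-cycle integration looks like, here’s a rough sketch of the shape of it (my own simplification, not the actual emontxshield+stm32 demo code): accumulate the V×I products between ZX-detected zero crossings, and report the average once 500 half-cycles have elapsed.

```python
# Sketch (an assumption about the structure, not the real firmware):
# accumulate real power per ZX-detected half-cycle, report every
# 500 half-cycles (5 s at 50 Hz).
HALF_CYCLES_PER_REPORT = 500

class HalfCycleAccumulator:
    def __init__(self):
        self.energy_sum = 0.0    # running sum of v*i sample products
        self.sample_count = 0
        self.half_cycles = 0

    def add_sample(self, v, i):
        """Called for every ADC sample pair within the current half-cycle."""
        self.energy_sum += v * i
        self.sample_count += 1

    def zero_crossing(self):
        """Called at each detected zero crossing. Returns the mean real
        power once a full integration period has elapsed, else None."""
        self.half_cycles += 1
        if self.half_cycles < HALF_CYCLES_PER_REPORT:
            return None
        mean_power = self.energy_sum / self.sample_count
        self.energy_sum, self.sample_count, self.half_cycles = 0.0, 0, 0
        return mean_power
```

Knobbling a channel to every 25th cycle then amounts to only feeding it samples during 2 of every 50 half-cycles, while still counting all 500 toward the report.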
Next, I knobbled Ch2 to only look at every 25th cycle - so just 10 cycles over the 5-second integration period. That gave:
Ch1 (W), Ch2 (W), Diff
190.64, 195.24, 2.41%
416.40, 473.19, 13.64%
552.16, 568.24, 2.91%
347.01, 288.47, -16.87%
217.29, 287.87, 32.48%
553.96, 565.81, 2.14%
So in one 5-second interval the reported average power was out by 32% (I actually saw 40% at one stage, but didn’t have the scope configured to capture it, so I went with this run). Next I toggled a spare IO pin whenever the error exceeded 30% and used that to trigger the scope, which was configured to capture the 5 seconds of current prior to the trigger (the trigger is just off the right edge). That produced this:
In broad-brush terms it’s 1.1 secs of high power, followed by 3.9 secs of low power.
The final step was to extract that data from the scope and highlight every 25th cycle:
Even broad-brush you can see what’s happened. Ch2 saw 3 of its 10 sampled cycles land in the high-power burst (a high fraction of 3/10), while Ch1 saw the correct fraction of 1.1/5, and those ratios come out pretty close to the 32% error reported above.
Now if you let it run long enough, even that under-sampling eventually converges on the correct result, but it takes a while. Even over the 30 seconds of those six consecutive 5-second reports above, Ch2 was over-reading by 4.45%. At those timescales there’s a real risk that chef will have bumped the hob up or down a notch, and then you’ll be busy chasing a new target without ever converging on the old.
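The eventual convergence is easy to demonstrate with a toy model (an assumed load shape and assumed wattages, not my measured data): simulate many 5-second intervals of a 1.1 s high / 3.9 s low burst whose phase drifts randomly relative to the one-cycle-in-25 sampling, and average the reports.

```python
import random

# Toy model: how fast does one-cycle-in-25 sampling converge on the
# true average of a 1.1 s high / 3.9 s low burst? (Assumed numbers.)
random.seed(1)
p_high, p_low = 900.0, 25.0                      # illustrative watts
true_power = (1.1 * p_high + 3.9 * p_low) / 5.0  # duty-weighted average

def undersampled_interval(phase):
    """Mean power from the 10 sampled cycles (one every 0.5 s) in one
    5 s interval, given the burst's phase offset within the interval."""
    powers = [p_high if (phase + k * 0.5) % 5.0 < 1.1 else p_low
              for k in range(10)]
    return sum(powers) / len(powers)

for n in (1, 10, 100, 1000):
    mean = sum(undersampled_interval(random.uniform(0, 5.0))
               for _ in range(n)) / n
    print(f"{n:4d} intervals: error {(mean - true_power) / true_power:+.2%}")
```

On any single interval the sampled duty can only come out 2/10 or 3/10 against a true 2.2/10, so individual reports are badly wrong, but the long-run average drifts toward the true duty-weighted power, provided the load pattern holds still for long enough.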
This also demonstrates why metrics like “long-term agreement with the revenue meter to within 1%” shouldn’t give you much confidence in the accuracy of individual power readings. In this case my Ch2 was out by as much as 32% (actually 40%), and yet I could still meet the long-term 1% agreement with the revenue meter, provided chef was agreeable.