OpenEnergyMonitor Community

Derived voltage reference (three-phase) and utility meter accuracy comparison

I think this might be a case of throwing the baby out with the bath water. While there may be an academic argument to discredit the “Derived Reference” method of monitoring three-phase installations, I have seen no data to support it. To the contrary, IoTaWatt, where I believe the moniker was first used, enjoys tremendous success with this method of approaching polyphase measurement.

Sites employing IoTaWatt for three-phase power measurement, and there are many, enjoy low cost, ease of installation, and reasonable accuracy at both the mains and branch circuits. One prominent example of branch-circuit integrity is the large number of single- and three-phase solar installations where IoTaWatt measurements agree closely with the inverter’s own counters.

I appreciate the argument that 1% integrity at the mains does not equate to 1% accuracy at the branch circuits, but by the same token, 1% accuracy at the mains does not indicate otherwise. One is not indicative of the other, but a consistent 1% at the mains, across a broad base of installations, does suggest that a device has integrity beyond the mains. In fact, continuous IoTaWatt testing includes not only comparison to the mains meter (<1%) but also measurement of several individual branch circuits that carry loads at a variety of power factors and are measured with a variety of CTs. Again, results are in the <1% range - daily, weekly, monthly.

I suggest that you look for inadequacies in the hardware or firmware of this device before declaring the method at fault. These Arduino-based monitors have only relatively slow 10-bit ADCs. You might want to try sampling at 75 kHz, as IoTaWatt does, with 12-bit ADCs.
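To make the bit-depth point concrete, here is a small sketch (illustrative only, not IoTaWatt firmware; the sample count is arbitrary) that quantizes one cycle of a full-scale sine to a given ADC resolution and compares the resulting RMS against the ideal value. The worst-case RMS deviation is bounded by half a code width, so it shrinks by a factor of four going from 10 to 12 bits.

```python
import math

# Illustrative sketch: effect of ADC resolution on the RMS of a sine wave.
# Sample count and full-scale amplitude are arbitrary choices, not hardware specs.
def rms_with_adc(bits, samples_per_cycle=128, amplitude=1.0):
    """RMS of one cycle of a full-scale sine after quantizing to `bits`."""
    levels = 2 ** bits
    step = 2.0 * amplitude / levels           # width of one ADC code
    total = 0.0
    for n in range(samples_per_cycle):
        v = amplitude * math.sin(2 * math.pi * n / samples_per_cycle)
        q = round(v / step) * step            # quantize to nearest code
        total += q * q
    return math.sqrt(total / samples_per_cycle)

ideal = 1.0 / math.sqrt(2)                    # exact RMS of a unit sine
err10 = abs(rms_with_adc(10) - ideal) / ideal # relative error, 10-bit ADC
err12 = abs(rms_with_adc(12) - ideal) / ideal # relative error, 12-bit ADC
# Each error is bounded by (step / 2) / ideal: roughly 0.14% at 10 bits
# and 0.035% at 12 bits.
```

Of course, quantization is only one error source; CT linearity, phase error, and reference accuracy usually dominate in practice.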

Obtaining 1% accuracy at any power level is not that difficult with suitable equipment. The differentiator is the firmware offering ease of installation, use, upgrade, and ability to deliver the data reliably.

Actually, my point is h/w agnostic. It doesn’t matter how fast your ADCs are, or what their resolution is: if you’re not measuring all three voltages in a three-phase install, then you have no idea how accurate your measurements are. The results will vary from install to install and, even within an install, will vary depending on what your neighbours are doing.

Exactly, and that point needs to be understood by all concerned. And it’s one that I have made all along. The reliability of the estimate (because that is all it is) of the other two voltages depends to a large extent (but not completely, of course) on the supply system and the total load placed upon it.
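The mechanism behind that estimate can be sketched in a few lines. This is a simplification (function names are hypothetical, and it ignores phase-angle error in the derived reference, which additionally distorts the power-factor term): a derived-reference monitor multiplies the real measured current by an assumed voltage, so any difference between the assumed and actual voltage on the unmonitored phase shows up directly as power error.

```python
import math

# Hypothetical sketch of derived-reference error (not IoTaWatt code):
# only phase A voltage is measured; the other phases are assumed to have
# the same magnitude, shifted by 120 degrees.
def derived_power(v_a_rms, i_rms, pf_angle_deg, true_v_rms):
    """Real power a derived-reference monitor would report vs the truth.

    v_a_rms    - measured RMS voltage on the reference phase (A)
    true_v_rms - actual RMS voltage on the phase carrying the load
    """
    cos_phi = math.cos(math.radians(pf_angle_deg))
    reported = v_a_rms * i_rms * cos_phi      # uses the assumed voltage
    actual = true_v_rms * i_rms * cos_phi     # uses the real voltage
    return reported, actual

# Example: the unmonitored phase actually sits at 235 V, not 230 V.
reported, actual = derived_power(v_a_rms=230.0, i_rms=10.0,
                                 pf_angle_deg=0.0, true_v_rms=235.0)
error_pct = 100.0 * (reported - actual) / actual
# The power error tracks the voltage difference: here about -2.1%
```

In other words, the per-channel error on an unmonitored phase is exactly as large as the voltage difference the installation happens to have at that moment.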

I can’t argue that there’s no variation. That would be wrong. My argument is that in practice, the variation does not appear to be significant enough to detract from useful results. There can be extreme cases, but I haven’t seen any yet. I had one fellow down under, where you are, who suspected all kinds of things about the neighbours’ usage and power utility faults. Once the CTs were properly assigned to the phases and oriented correctly (not to mention correctly identified), all of a sudden it came into clear focus.

You can pay a lot of money for a certified revenue-grade three-phase monitor. Probably more than $1,000, and then still not get very much circuit breakdown. If you want to figure out where your money is going and fix it, it will take a long time to get ahead with that $1,000 meter. If you can get to 1%, or even 2% or 3%, with a unit that costs less than $200, you can potentially amortize it with savings.

Neither IoTaWatt nor OEM will crack the high-end equipment market. The certifications alone would drive up the cost prohibitively. But you don’t need a weatherman to know which way the wind blows.

How would you know? I bet most people plug it in and believe the results. The more advanced users might do a long term comparison with the total against their revenue meter.

Where did those numbers come from? Here’s a simple test: what accuracy will you guarantee on a power reading from a branch circuit that’s on a phase other than the one the VT is connected to? It’s a rhetorical question; obviously you can’t. The result is completely outside your control, as it involves a measurement that nobody is making.

That’s the goal. Give people a device they can plug in and believe the results. I work hard to ensure that’s the case. You would know if decisions that you make based on the data yield the desired results. Which brings us to the real heart of the matter. If you have a plug-and-play device that doesn’t require any kind of calibration to use different CTs, keeps track of the data to the watt-millisecond, stores that data locally with 5-second resolution, and uploads to servers with a synchronous protocol that doesn’t lose data to communications or server outages, then useful, reliable data is available to a larger number of users, who can make informed decisions about their energy use. I don’t encourage users to worship data; I encourage them to rely on it to make reasonable, informed decisions.

In that case I guess it’s not a rhetorical question, so I repeat it now, hoping for an answer:

Here’s what I guarantee. If you try an IoTaWatt with derived reference and you are dissatisfied with the accuracy of the derived phases, you can add a $25 VT adapter and convert to direct reference. You can add these additional references by simply plugging them in and reconfiguring the references in the browser-based configuration utility, without even restarting the IoTaWatt. So what does anyone have to lose by giving it a try, except the opportunity to spend a lot of extra money for a guarantee?

I’ve run an IoTaWatt in an industrial setting with both direct and derived reference measurements taken simultaneously on the same loads. It’s anecdotal, but the differences were unremarkable.

The point is: how do I know whether or not to be satisfied? It’s very easy to look at numbers on a screen and convince yourself they’re true. A power user might even compare their totals with the revenue meter, but no matter how convincing that result is, it puts almost no upper limit on the potential error in the individual channel readings. The only way I can make an informed decision on whether or not to cough up the $25 is to read the accuracy specs for the channels on phases that aren’t monitored. What are they? You mentioned “1%, 2% or even 3%” above. How did you arrive at those numbers?
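A small worked example shows why mains agreement bounds so little. The numbers here are hypothetical, not measured data: three equal loads, one per phase, with the two unmonitored phases sitting 2% above and 2% below the measured reference. The per-channel errors are around ±2%, yet they largely cancel in the total, so the mains comparison looks nearly perfect.

```python
# Hypothetical numbers (not measured data): one 1 kW load per phase, with
# actual phase voltages 2% above and below the single measured reference.
ref_v = 230.0                      # measured reference-phase voltage
actual_v = [230.0, 234.6, 225.4]   # true per-phase voltages (+2% / -2%)
true_kw = [1.0, 1.0, 1.0]          # actual power per phase

# A derived-reference monitor multiplies the real current by the assumed
# voltage, so each reading is scaled by ref_v / actual_v for its phase.
reported = [p * ref_v / v for p, v in zip(true_kw, actual_v)]

total_true = sum(true_kw)
total_reported = sum(reported)
total_err_pct = 100 * (total_reported - total_true) / total_true
channel_err_pct = [100 * (r - p) / p for r, p in zip(reported, true_kw)]
# Channel errors are about -1.96% and +2.04%, yet the total error is
# under 0.1%: agreement at the mains masks the per-channel errors.
```

The cancellation only gets better as loads spread more evenly across phases, which is exactly the situation most three-phase installs aim for.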

Not only anecdotal but almost entirely determined by local conditions. I reckon I could find you plenty of sites where your approach would be out by a country mile (way more than 3%). It’s easy enough to say… in those situations cough up the $25 and monitor all three Vs, but how do you know you’re in that situation? Even if you go to the expense of independently verifying the accuracy in your location, what about six months later after a few of your neighbours have installed single phase 5kW inverters on the phases you’re not monitoring?

A post was split to a new topic: IoTaWatt and emonTx Accuracy