The importance of continuous sampling

(Frogmore42) #41

I believe this post is the one that restarted this discussion:

This section in particular:

One major design goal was to be able to use any VT and any combination of CTs in the 14 inputs without the need to reprogram. This required a different approach to phase correction. The buzz at the time was a paper that described a method that I think may still be in use in some of the Emon devices. I’m not finding fault with the method as used, but it doesn’t really lend itself to the any-to-any approach. That paper starts out by explaining that the method described is needed if resolution of the samples is not fine enough to correct phase errors by simple sample shifting.
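To illustrate what “correcting phase errors by simple sample shifting” means, here is a hypothetical sketch (all figures invented: 100 samples per cycle, a 7-degree CT phase lag) of realigning the current samples, with linear interpolation for the fractional part of the shift:

```python
import math

def real_power(v, i):
    """Mean of the instantaneous v*i products over whole cycles."""
    return sum(a * b for a, b in zip(v, i)) / len(v)

def shifted(x, frac):
    """Advance a one-cycle sample buffer by a fractional number of
    sample periods, interpolating linearly between adjacent samples."""
    n = int(math.floor(frac))
    f = frac - n
    N = len(x)
    return [x[(k + n) % N] * (1 - f) + x[(k + n + 1) % N] * f
            for k in range(N)]

N = 100                  # samples per mains cycle (hypothetical)
phase_err_deg = 7.0      # CT lags the VT by this much (hypothetical)
v = [math.sin(2 * math.pi * k / N) for k in range(N)]
i = [math.sin(2 * math.pi * k / N - math.radians(phase_err_deg))
     for k in range(N)]

raw = real_power(v, i)   # reads low by the cos(7 deg) factor
# Advance the current samples to undo the lag before multiplying.
corrected = real_power(v, shifted(i, phase_err_deg / 360.0 * N))
```

With plenty of samples per cycle the interpolation error is tiny; the point that paper makes is that when the sample resolution is too coarse, this simple approach breaks down and a different correction method is needed.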

I believe this is the reason why Bob chose the design he did. I am sure there are other ways to do it, but when I see something like this:

I don’t think you are doing anything wrong. It will be fairly hard to track down that “offset” error.

My money would be on Robert’s theory that a small uncorrected phase error between the VT and CT is pushing one or more highly reactive loads into the next quadrant, making it look like a 40W generator. If you wanted to try to determine which load, you could flip breakers and/or turn things off at the wall to see if you can make it go away. As Robert says, it’s likely to be something with a standby mode. The big culprits in my house are the induction hob, the microwave oven and the TV.
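To put rough numbers on that quadrant effect (all figures hypothetical): a nearly pure reactive standby load sits close to the 90-degree boundary, so even a few degrees of uncorrected phase error can push the measured real power negative:

```python
import math

def measured_watts(apparent_va, load_angle_deg, phase_error_deg):
    """Real power a meter reports when an uncorrected phase error is
    added to the true angle between voltage and current."""
    return apparent_va * math.cos(math.radians(load_angle_deg + phase_error_deg))

S = 600.0                                # highly reactive load, 600 VA (hypothetical)
true_p = measured_watts(S, 87.0, 0.0)    # roughly 31 W actually consumed
bad_p = measured_watts(S, 87.0, 5.0)     # negative: looks like a small generator
```

Flipping breakers, as suggested above, is just a way of finding which load is sitting that close to the boundary.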

So, help me understand, because I am confused. It seems that, using CS as implemented by the emonPi, it is okay/possible to have a 40W offset for a real-world load, for which the recommended solution is to just subtract 40W from the total?

I don’t know if this is a common occurrence using the emonPi or not. But I do know that I would be very worried if I ever saw something like that with anything that I was using to measure power. I have seen a 7W apparent power offset with my Kill-A-Watt when nothing is plugged in. I expect it is not very accurate at low power levels, so I don’t use it for that. Despite the fact that the ecm1240 doesn’t work with my generator power, it never did anything like that.

When the STM based design comes out, I know I will look at it and seriously consider getting one. But, I needed something now, not 3, 6, 9, 12, or 24 months from now. The perfect design you can’t buy yet is much less useful than something that works and is available today.


(Robert Wall) #42

That’s not right. The emonPi uses “discrete sampling” - a 200 ms (from memory - best to check the sketch) burst of samples every 5 s.
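In round numbers (a sketch using those from-memory figures - check the sketch for the real values), that duty cycle means only a small fraction of the waveform is ever observed:

```python
# Hypothetical figures from the post above; check the emonPi sketch
# for the actual burst length and repeat interval.
burst_s = 0.2                    # length of each measurement burst
period_s = 5.0                   # time between the start of bursts
coverage = burst_s / period_s    # fraction of time actually observed

# coverage is 0.04: 96% of the time the waveform is not being sampled,
# so anything that switches between bursts is inferred, not measured.
```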

The principal problem with the emonPi is the way the front-end software (basically a replica of the default emonTx sketch) links to the RPi, which effectively makes it impossible in practice to calibrate fully. So your “recommended solution” should really say “only practical solution”.

It’s not a commonly reported occurrence - I can’t offhand remember when the last similar case was - I’ve a feeling I might have written something like that one or two times previously.


(Frogmore42) #43

Okay, let me attempt to bring this thread back to the original topic. It is titled, “The importance of continuous sampling”. Can you quantify that statement?

You have a hypothetical $100 to spend on the measurement, how do you allocate that?
How much do you spend on continuous sampling?
How much on ADC bit count?
How much on ADC speed/sample rate per channel?
How much on ADC absolute accuracy?
How much on ADC linearity?
How much on ADC repeatability?
How much on CT accuracy, linearity, repeatability, consistency, etc?
How much on VT accuracy, linearity, repeatability, consistency, etc?
How much on ease of calibration/need for calibration?
How much on the box that holds everything?

By the way, I am enjoying this discussion and I hope no one is offended by any of my comments. While I am sure we disagree on some points, I respect all of you and really am trying to understand exactly what you are saying. We may be like the three blind men and the elephant.


(dBC) #44

Not at all. In fact I think it’s a very good example of how such discussions should proceed.

Of all the things on your list only #1 can be changed with an overnight firmware update (ok, strictly speaking you could tweak 2 and 3 as well). All those other things on your list are the sort of stuff h/w engineers spend weeks studying datasheets and price lists to come up with some sort of affordable compromise. Ask them about continuous sampling and they’ll look at you blankly and say “software detail”.

There have been several examples of folk releasing CM firmware for the OEM kit - two are listed in the first post in this thread. The one from the NASA rocket scientist does all sorts of fancy stuff with phase calculations that even allows the calculation of actual Reactive Power (as opposed to the usual shortcut of √(Apparent² − Real²)). I’m not aware of anyone moving from DM to CM and discovering their accuracy went backwards.
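For anyone wondering what the difference is: the √(S² − P²) shortcut lumps distortion power in with reactive power, whereas correlating the current against a 90-degree-shifted voltage extracts the displacement component alone. A synthetic sketch (invented waveform: clean fundamental voltage, current at 30 degrees lag plus a third harmonic):

```python
import math

N = 1000                                  # samples over one full cycle
w = 2 * math.pi / N

def rms(x):
    return math.sqrt(sum(a * a for a in x) / len(x))

# Clean fundamental voltage; current lags 30 deg and carries a 3rd harmonic.
v = [math.sin(w * k) for k in range(N)]
i = [math.sin(w * k - math.radians(30)) + 0.3 * math.sin(3 * w * k)
     for k in range(N)]

P = sum(a * b for a, b in zip(v, i)) / N  # real power
S = rms(v) * rms(i)                       # apparent power

# "Actual" reactive power: correlate I against V delayed by 90 degrees.
v90 = [math.sin(w * k - math.pi / 2) for k in range(N)]
Q_actual = sum(a * b for a, b in zip(v90, i)) / N

Q_shortcut = math.sqrt(S * S - P * P)     # also sweeps in distortion power
```

With this distorted current, Q_shortcut comes out noticeably larger than Q_actual, because the harmonic contributes distortion power to S but nothing to P or to displacement Q.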

I think we need to forget about the limitations of any specific implementation, and look at this from a “fundamentals of meter design” point of view. That was my intention with the thread. There’s nothing inherent about CM that prevents accurate phase adjustments. There’s nothing inherent about CM that prevents ease-of-use calibration options. The only costs I can think of are channel bandwidth and battery life.

CM - it’s like free beer… why wouldn’t you?


(Frogmore42) #45

As far as free beer goes, it usually isn’t free or it doesn’t taste very good. You don’t always get what you pay for, but you usually have to pay for what you get.

If your assertion is that, “in general it is better to continuously monitor a signal”, sure, I can agree to that. But like all things, that does come at a price. And if I take that argument to its illogical extreme, I really should be measuring really continuously, i.e. not every few hundred usec, but perhaps every atto second. That would be “better” and “more accurate”. Of course, that would also be a waste of resources that I could better spend elsewhere on the system that I was creating/using.

My recollection of engineering is that it is at its best when it is figuring out the best path to meeting the requirements, where “best” very rarely puts accuracy above all else. When your budget is unlimited it is possible to achieve remarkable results; when you have to create something that someone can afford to buy/use, it becomes much harder.

My question above is not really about a single measurement, but think of it like this instead. You have $100 BOM cost to make an instrument that can measure a whole house’s electricity utilization to give the user enough knowledge that they can understand what is using electricity.

Sense said we only need two CTs and we will sample really fast and figure out (through machine learning) when individual loads come on and off, and it will be magic. My brother has one; it is far from magic today, and I don’t know if they will ever achieve their goal. As power engineer Barbie might say, “disaggregation is hard.” Even with the 7 channels of information I had from the ecm1240, I had a hard time getting a handle on reducing my base load. I think more channels that are reasonably accurate are a far better approach, but that is my opinion. What is yours, and why?


(dBC) #46

I think that’s where we’re blocked. The only costs I can see are channel bandwidth and battery life. Channel bandwidth relates to your comment about “measuring really continuously, i.e. not every few hundred usec, but perhaps every atto second” and has already been addressed. And I’m talking with a “fundamentals of meter design” hat on here, not about any specific implementation.

About multiple channels vs disaggregation of the mains signal? I’m a long-term advocate of per-breaker monitoring. Everyone I know with a Smappee reckons it reports their dishwasher comes on when they’re hoovering, or something similar. Disaggregation is a hard problem to solve, and good luck to them for trying. Hopefully it’ll improve over time, but for now per-breaker CTs are hard to beat.


(Robert Wall) #47

That’s exactly what a Ferraris meter does - not even every attosecond, but truly continuously. Better? For some purposes, possibly yes. For demonstrating the action of an energy diverter, or regenerative braking in a drive, one (especially one without a ratchet) will beat a digital meter hands down. But on price? Why do we think all the energy suppliers are moving to digital meters?

And I can’t disagree with how you link ‘best’ and engineering. Engineering has always been, and always will be, a matter of finding the compromise that comes closest to meeting all the requirements of the design in question.


(Bill Thomson) #48

To gouge us for things like “Time of Use” and “Demand” charges.
i.e. line their pockets at our expense. :wink:


(dBC) #49

Or in my case, to pay me ~2x for my exports what they charge me for my imports. Lining my pockets at somebody’s expense :wink:


(Robert Wall) #50

I had “Cheaper to make and maintain” in mind - but you’re probably right to add those as well.


(Bill Thomson) #51

Only in my dreams. I pay 13 cents per kWh, but they only pay me what they call their “avoided cost”
which varies between 5 and 6 cents per kWh.

That’s one hell of a deal you’ve got! :thumbsup:


(dBC) #52

Yes, we’ve moved to something more like yours, but us old-timers are grandfathered in.


(Frogmore42) #53

Now, I don’t now, nor did I ever, work directly for any energy supplier. But I did consulting work for some, and I was on the citizens’ advisory board for my local supplier a couple of decades ago.

Being an energy supplier is not as easy as many people think. It really isn’t a license for printing money, as some imagine. Digital meters give utilities the ability to charge customers more for using electricity when it costs them more to produce it. They generally have not been well received, because people don’t really understand the economics of power generation. Some plants are very efficient and the cost per kWh is very low. Other plants are not so efficient, and/or use more expensive fuel, and/or have higher capital or maintenance costs. Electricity has to be generated the moment it is needed, so the utility has to be able to quickly bring on line the necessary generation capacity to handle the load.

I have been measuring my house’s load long enough to see the pattern. I have close to a 100:1 ratio for very short term demand, probably at least 10:1 for medium term to base load, and something like 2-3:1 for average to base. When you add up lots of customers, the peaks don’t all happen at the same time (mostly), so the peak-to-average ratio is probably something like 10:1. Different plants have different abilities to throttle down, so generally utilities will have some plants that they bring up and down during the day as load demands change.

All of this is expensive, which is why many of them are working on demand side management and conservation. If they can get people to reduce the load, they can avoid building another plant (which generally no one wants near their back or front yard). (Un?)Fortunately, people have been really good at conserving so, now they need to raise their rates to cover the expenses (and the profits their shareholders demand).

But, I think the real reason is “everybody else is doing it” :wink:


(dBC) #54

They call that the death spiral here (perhaps globally?).

Back on topic for a bit…

The really great thing about all this open source stuff is the old open source motto: “if you don’t like it, rewrite it”. Kudos to all vendors who’ve opened up their schematics and software to permit that. Several folk thought the OEM kit could be improved by moving from DM to CM, so they went ahead and implemented it. That’s not to say it was bad or useless beforehand; I suspect the developers were overjoyed to have folk contributing improvements.

Who knows, maybe some enthusiastic IotaWatt guru will do the same there one day and host a bake-off of the two firmware versions. Then we’d know for sure whether these costs you’re concerned about are real or not.


(Frogmore42) #55

I agree. I purchased a bunch of Sonoff (and other esp8266 based) hardware because of Tasmota. I would never have done that if it was their FW or the highway. I have some updates to it that I need to submit, but mostly it has just been working.

Being able to see the code is sometimes really helpful to see what it is really doing and why it is not doing what I thought it should. How helpful it is depends on how much time I am willing to spend studying it and how hard it is to understand. Some projects are inherently complex and some are just harder to understand because of the structure of the code and accuracy/presence of any documentation.


(Bill Thomson) #56

Not that it means anything, but after reading the above comment,
I became curious, i.e. just who is this guy?

FWIW, it turns out he’s Dr. Craig Markwardt, Research Astrophysicist.
A member of NASA’s Sciences and Exploration Directorate.


(Daniel Bates) #57

I was reading a book the other day :wink:
A chapter on op amps.
Turns out they’re called op amps because they can do mathematical operations, like integration and differentiation. Never learnt that in A-level Electronics.
True continuous sampling could make use of an op-amp and capacitor network set up to integrate a signal - is this right? No discrete sampling; instead, a 100% continuous representation for a given bandwidth?
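For reference, the textbook inverting integrator (R into the inverting input, C in the feedback path) gives a genuinely continuous-time operation:

```latex
V_{\mathrm{out}}(t) = -\frac{1}{RC}\int_{0}^{t} V_{\mathrm{in}}(\tau)\,d\tau + V_{\mathrm{out}}(0)
```

so within its bandwidth (and drift permitting) the output really is a continuous representation of the running integral - no sampling at all.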


(Robert Wall) #58

I was going to post a rhetorical question in answer to that, but Discourse doesn’t want me to:

Let others join the conversation

This topic is clearly important to you – you’ve posted more than 23% of the replies here.

Are you sure you’re providing adequate time for other people to share their points of view, too?


(Daniel Bates) #59

I wouldn’t have thought one person’s posting would be taking away from someone else’s time. Discourse is being a little over-considerate, perhaps.
It’s alright, I’ll try to use my imagination.


(Robert Wall) #60

It’s all down to the number of tweak-pots that you need to get rid of drift and such things. I was a student apprentice when the 709 op.amp. became generally available, and the firm where I did my industrial experience brought out a range of analogue modules that did, well, exactly what the name says - analogue signal processing: multiplication, addition, subtraction (not sure about division), square and square root. I can’t remember them all.

Now think of the cost of a board full of tweak pots and the labour cost of a skilled tester to set it up. At today’s prices, a shade more than a 328P or STM32 on a PCB.