The importance of continuous sampling

I had “Cheaper to make and maintain” in mind - but you’re probably right to add those as well.

Only in my dreams. I pay 13 cents per kWh, but they only pay me what they call their “avoided cost”
which varies between 5 and 6 cents per kWh.

That’s one hell of a deal you’ve got! thumbsup

Yes, we’ve moved to something more like yours, but us old-timers are grandfathered in.

Now, I don’t now, nor did I ever, work directly for any energy supplier. But I did consulting work for some and was on the citizens’ advisory board for my local supplier a couple of decades ago.

Being an energy supplier is not as easy as many people think. It really isn’t a license for printing money, as some imagine. Digital meters give utilities the ability to charge customers more for using electricity when it costs them more to produce it. They generally have not been well received, because people don’t really understand the economics of power generation. Some plants are very efficient and the cost per kWh is very low. Other plants are not so efficient, and/or use more expensive fuel, and/or have higher capital or maintenance costs.

Electricity has to be generated the moment it is needed, so the utility has to be able to quickly bring the necessary generation capacity on line to handle the load. I have been measuring my house’s load long enough to see the pattern: close to a 100:1 ratio for very short-term demand, probably at least 10:1 for medium-term to base load, and something like 2-3:1 for average to base. When you add up lots of customers, the peaks (mostly) don’t all happen at the same time, so the aggregate peak-to-average ratio is probably something like 10:1. Different plants have different abilities to throttle down, so generally utilities will have some plants that they bring up and down during the day as load demands change.

All of this is expensive, which is why many of them are working on demand-side management and conservation. If they can get people to reduce the load, they can avoid building another plant (which generally no one wants near their back or front yard). (Un?)Fortunately, people have been really good at conserving, so now they need to raise their rates to cover the expenses (and the profits their shareholders demand).

But, I think the real reason is “everybody else is doing it” :wink:

They call that the death spiral here (perhaps globally?).

Back on topic for a bit…

The really great thing about all this open source stuff is the old open source motto: “if you don’t like it, rewrite it”. Kudos to all vendors who’ve opened up their schematics and software to permit that. Several folk thought the OEM kit could be improved by moving from DM to CM, so they went ahead and implemented it. That’s not to say it was bad or useless beforehand; I suspect the developers were overjoyed to have folk contributing improvements.

Who knows, maybe some enthusiastic IotaWatt guru will do the same there one day and host a bake-off of the two firmware versions. Then we’d know for sure whether these costs you’re concerned about are real or not.

I agree. I purchased a bunch of Sonoff (and other esp8266 based) hardware because of Tasmota. I would never have done that if it was their FW or the highway. I have some updates to it that I need to submit, but mostly it has just been working.

Being able to see the code is sometimes really helpful for working out what it is actually doing and why it is not doing what I thought it should. How helpful it is depends on how much time I am willing to spend studying it and how hard it is to understand. Some projects are inherently complex, and some are just harder to understand because of the structure of the code and the accuracy (or presence) of any documentation.

Not that it means anything, but after reading the above comment,
I became curious, i.e. just who is this guy?

FWIW, it turns out he’s Dr. Craig Markwardt, Research Astrophysicist.
A member of NASA’s Sciences and Exploration Directorate.

I was reading a book the other day :wink:
A chapter on op amps.
Turns out they’re called op amps because they can do mathematical operations, like integration and differentiation. Never learnt that in A-level Electronics.
True continuous sampling could make use of an op-amp and capacitor network set up to integrate a signal, is that right? No discrete sampling; instead, a 100% continuous representation for a given bandwidth?

I was going to post a rhetorical question in answer to that, but Discourse doesn’t want me to:

Let others join the conversation

This topic is clearly important to you – you’ve posted more than 23% of the replies here.

Are you sure you’re providing adequate time for other people to share their points of view, too?

I wouldn’t have thought one person’s posting would be taking away from someone else’s time. Discourse is being a little over-considerate, perhaps.
It’s alright, I’ll try to use my imagination.

It’s all down to the number of tweak-pots that you need to get rid of drift and such things. I was a student apprentice when the 709 op-amp became generally available, and the firm where I did my industrial experience brought out a range of analogue modules that did, well, exactly what the name says - analogue signal processing: multiplication, addition, subtraction, (not sure about division), square and square root. I can’t remember them all.

Now think of the cost of a board full of tweak-pots and the labour cost of a skilled tester to set it up. At today’s prices, a shade more than a 328P or STM32 on a PCB.

Interesting. Thanks.

If analogue were really better, we wouldn’t have software defined radio and suchlike.

The critical bit is

so as long as the digital system can run twice as fast (plus a bit for luck) it will get the same answer.

(Posted mainly to reduce Robert’s % :grin: )

True, and some filter functions that are possible in software are not realisable in analogue form.
And coming back to post no.57, obviously Dan’s never had to fight an analogue integrator.

(Not posted mainly to restore Robert’s % :stuck_out_tongue: )

That’s right. :sweat_smile:
I’d already lost the fight after 4 seconds when I tried using that imagination thing.
Probably too busy COUNTING THE SECONDS.

(Not the best excuse.)

This article from this thread highlights another benefit of continuous sampling. In continuous sampling the conversions are triggered by a h/w clock, so regardless of what the f/w is doing, that sampling is going to occur as regularly as clockwork. If the f/w is in charge of kicking off each conversion, then you introduce jitter into the sampling.

It’s important to note, that if you want low noise and good frequency resolution from your ADC, you will need to sample at a very consistent rate. To do this, you can use first conversion mode, free running mode, or an interrupt that is a multiple of your ADC clock. All other modes will have 1/2 ADC clock jitter. This will cause your samples to not line-up in time, and slowly wander back and forth, effectively causing a frequency modulation of your signal.

That paragraph is not exactly misleading but misses that important bandwidth thing again.
A software sampling approach will basically work if its rate is consistently orders of magnitude greater than the target frequency.
I think the main problem with a software approach is that at some interval the data must be processed. If a bunch of hungry devices fire in the gap between sampling stopping and restarting, that would mean at least two things… inaccuracy over time, and bad luck. In low-power situations where every 0.1 W counts it’s going to matter, so yeah, I’d say how ‘good’ the method turns out to be depends on the context and aims.

Can you explain exactly what you mean there? Conventional wisdom is that the sampling frequency must be at least twice that of the highest component of interest in the waveform being measured, and if I understand you correctly, that’s significantly less than “orders of magnitude” - an ‘order of magnitude’ usually being taken as a factor of 10.

And if sampling is continuous, as per the title of this thread, where does

come into it? :confused:

Yeah orders of magnitude is fine.

I’ve been following this thread for some time.
Around 2013 I wrote myself a lib for Atmel that performs continuous sampling.
Back then I tried some approaches to deal with jitter. The main issue was that while trying to maximize the sample rate, the calculations inside the interrupt routine sometimes took longer than the next ADC read, so jitter was introduced arbitrarily.
First I tried using a software PLL to lock onto the zero crossing so I’d get the same number of ADC readings per AC cycle. It was discarded, as the PLL induced small delays when not locked and occasionally lost the lock.
After investigation I discovered that TIMER B can be used to automatically program the timing of the ADC reads; the value is put into the ADC register, which can be read at any time before the next conversion, without having to worry about timing the ADC read from software. This fixed the jitter.
With additional math I was able to add an “auto sample rate” feature that adjusts the ADC reads automatically at boot to the maximum attainable sample rate for the available free CPU time.
It’s usable now at 50 and 60 Hz without requiring a recompile, and performs precise frequency measurement.
I’ve been using it for 6 years now on multiple installations without a problem, accurately measuring from a few watts (~20 W) up to 4 kW.

The lib is fully customisable to support different numbers of voltage and current sensors. It reports Voltage, Frequency, Current, Active Power, Apparent Power and Power Factor.

If interested, take a look:
