ADC value transformation location (microcontroller vs server)

Hi -

First, just to say it’s great to see a project like this so well supported and active. I’m looking forward to building a small system with it and contributing back.

A very general question, but I was wondering if the rationale behind the firmware partitioning could be explained? Looking through the source, the code running on the microcontrollers at the monitoring end is not a good match for the hardware available. For example, running heavy maths functions (sin, cos, sqrt, etc.) on an 8-bit microcontroller is marginal, and the firmware is brittle to change because of this.

All the data are aggregated on a destination server, which is a supercomputer relative to the microcontroller. Why are the transformations not done at the interfacer end, with the microcontroller simply reading and forwarding the ADC values over the serial link? The microcontroller firmware could then be significantly simplified, and the transformation functions brought up to a higher level of abstraction on the server side.

To be totally clear, I’m not having a dig at the original design and look forward to any comments :slight_smile:

Welcome, Angus, to the OEM forum.

If you take a look at the FAQ page (it’s linked at the top of every page), you’ll see that not far down the list is “Asking a good question is difficult. Make no assumptions…”

It would be helpful if you didn’t assume we all knew what you were thinking about. So perhaps if you add some detail to the generalisations, somebody could respond to your concerns.

Hi @Robert.Wall - thanks for the response. For a specific example, consider the emonTx, which uses an ATmega328. The firmware running on it is, correct me if I’m wrong, the Continuous Monitoring version. Looking at the EmonLibCM library, a call to the EmonLibCM_get_readings function, for example, involves at least two, and up to five, calls to sqrt() as well as multiple floating-point operations. After retrieving the values, the emonTx firmware then sends the result over the wireless link, with the assumption that it is going to a central server (let’s say hosted on a Raspberry Pi, for illustration). At no point does the emonTx use the processed value, and it doesn’t care about the human-readable interpretation.
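
To illustrate the pattern (a schematic only, NOT the actual EmonLibCM source): with one voltage channel and four CTs, the shape of the per-report maths is something like this.

```cpp
// Schematic of the per-report maths a float-based energy library does on
// the AVR - illustrative only, not the actual EmonLibCM source.
#include <math.h>

struct Readings { double Vrms, Irms, realPower, apparentPower, powerFactor; };

Readings computeReadings(double sumVsq, double sumIsq, double sumVI,
                         unsigned long n, double vCal, double iCal)
{
    Readings r;
    r.Vrms          = vCal * sqrt(sumVsq / n);   // one sqrt for the voltage
    r.Irms          = iCal * sqrt(sumIsq / n);   // plus one sqrt per CT channel
    r.realPower     = vCal * iCal * (sumVI / n);
    r.apparentPower = r.Vrms * r.Irms;
    r.powerFactor   = r.realPower / r.apparentPower;
    return r;
}
```

One sqrt() for the voltage plus one per CT is where the “up to five” comes from, and every line of it is software floating point on the AVR.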

8-bit micros are not very good for this kind of processing written in this way (and that is not a criticism of the existing code). Calls to library sqrt and trig functions will be slow, both in instruction count (shuffling 8-bit words around, if nothing else) and in absolute time, due to the low clock speeds. They have no native floating-point support, so one is always reliant on the compiler when using it. That’s why so much code space is required for the emonTx firmware, and why the ATmega328 is needed. Combined, this is what I referred to as brittle behaviour: the compiler has to do so much work to map complex functions onto the AVR’s architecture. Simply put, these parts were not (historically) designed to be good targets for C compilers.

My question/suggestion is therefore to move the processing to the central server and just have the TX firmware pumping out values from the ADC. That way you can keep everything in native word sizes, with simple accumulation and shifts. The server, with full floating-point support, proper libraries, and running at >1 GHz, can do the heavy lifting. This frees up the microcontroller to do what it is good at: handling peripherals and interacting with the real world in a deterministic manner. A minimal sketch of what I mean is below.
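
This is illustrative, not drop-in emonTx firmware: the pins, sample count and CSV framing are all placeholders, and it assumes a single V/I pair read with analogRead().

```cpp
// Minimal sketch of the proposed split: the AVR only samples and accumulates
// in native integer types; calibration, DC-offset removal and the square
// roots all happen server-side.
#include <Arduino.h>

const uint16_t SAMPLES_PER_REPORT = 2500;   // placeholder figure

uint32_t sumV, sumI;        // lets the server remove the DC offset
uint32_t sumVsq, sumIsq;    // fits: 1023^2 * 2500 < 2^32
uint32_t sumVI;
uint16_t n;

void setup() { Serial.begin(115200); }

void loop() {
    uint16_t v = analogRead(A0);    // note: sequential reads skew I against V;
    uint16_t i = analogRead(A1);    // a real build would correct for this

    sumV   += v;
    sumI   += i;
    sumVsq += (uint32_t)v * v;
    sumIsq += (uint32_t)i * i;
    sumVI  += (uint32_t)v * i;

    if (++n == SAMPLES_PER_REPORT) {
        // Ship the raw accumulators; everything downstream is the server's
        // problem, e.g. Vrms = vCal * sqrt(sumVsq/n - (sumV/n)^2).
        Serial.print(sumV);   Serial.print(',');
        Serial.print(sumI);   Serial.print(',');
        Serial.print(sumVsq); Serial.print(',');
        Serial.print(sumIsq); Serial.print(',');
        Serial.println(sumVI);
        sumV = sumI = sumVsq = sumIsq = sumVI = 0;
        n = 0;
    }
}
```

Nothing wider than a 32-bit add or multiply on the micro, and not a single float or sqrt in sight.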

Is that a bit clearer? Also happy to discuss porting some of the firmware over from Arduino land to a decent Cortex-M device; I’d be interested to contribute (time allowing as always!) to that effort :slight_smile:

The reason that it’s the way it is can be explained in just one word - history.

The original emonTx - at least as I know it - worked with the emonGLCD, a similar-sized unit with, as the name implies, a liquid crystal display. That combination provided a local display but with absolutely minimal storage. Realistically, there was no possibility of saving or analysing the data produced. To answer that need, the NanodeRF was conscripted to provide access to the WWW and to emonCMS running on a server somewhere. All three devices use the Atmel '328P, and were offered as a kit of parts for self-assembly. If you think about a '328P running an ISM-band receiver that can handle only one character at a time while feeding the display, it becomes fairly clear why much of the processing had to be done in the emonTx. And once that’s happened, the desire to maintain backwards compatibility makes a radical change difficult. I don’t know whether this is true or not, but the entire system as it was has the distinct look and feel of a final-year undergraduate project about it.

As you appreciate, times have moved on and just about the last scrap of performance has been wrung out of the '328P. I’m not too concerned about the heavier maths operations - they’re only carried out every few seconds. Even so, some years ago I did put forward exactly one of the suggestions you’ve outlined above: to transmit the data after minimal processing, and to carry out the final stage of calibration and conversion to ‘human’ units at the destination - principally for reasons of usability, now that our consumer base has moved from being almost exclusively the knowledgeable and skilled constructor to include those who may have only an introductory level of knowledge of electrical & electronic engineering and computing. It was never followed up.
Rather more recently, there was much discussion about moving to an STM platform, but that too seems to have fizzled out.

It’s necessary to carry out at least some data aggregation at source, due to restrictions on the use of the radio band (for example, a transmitter may be on air for only 1% of the time), so it’s quite likely that not all that much load could be shed from the front end.
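
Some illustrative numbers (the sample rate and link rate here are assumptions, but the orders of magnitude are about right):

```
raw sample stream : 2 channels x 2,500 samples/s x 10 bits ≈ 50 kbit/s
1% duty allowance : ~50 kbit/s link rate x 1%              ≈ 0.5 kbit/s average
```

So shipping raw samples over the radio would be roughly two orders of magnitude over budget. Over a wired serial link to a Pi it’s another matter, but on RF the aggregation has to stay at the front end.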

All comments and suggestions are of course welcome. I’d like to think that at some point, a move can be made to more capable hardware.

I’ll drink to that! :thumbsup:


The ATmega328 has more than enough CPU power for the task of measuring electrical energy consumption; it can even be used to build a certified company meter.
With 16 times less CPU power, we went to the Moon!
You don’t need sqrt, trig, or floating-point maths anyway.
Of course, there are implementations that are optimised for the CPU and implementations that are not so efficient.
If you don’t believe me, just take a look at the Atmel AVR465 application note.
The truth is that it all comes down to cost and availability.
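
For a flavour of the integer-only approach (my own sketch, not code lifted from the app note): accumulate sums of squares in integers, then take one integer square root per report, so floating point never appears.

```cpp
// Integer square root: returns the largest r with r*r <= x.
// No floating point anywhere - cheap enough for an 8-bit AVR once per report.
#include <stdint.h>

uint16_t isqrt32(uint32_t x)
{
    uint32_t res = 0;
    uint32_t bit = 1UL << 30;           // highest power of four <= 2^30

    while (bit > x) bit >>= 2;          // find the starting bit
    while (bit != 0) {
        if (x >= res + bit) {
            x   -= res + bit;
            res  = (res >> 1) + bit;
        } else {
            res >>= 1;
        }
        bit >>= 2;
    }
    return (uint16_t)res;
}
```

Called as isqrt32(sumVsq / n), that gives the RMS value in raw ADC counts; scaling to volts or amps is then a single fixed-point multiply.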

I think you’re always going to want to do at least the V x I dot product at the pointy end, just because of the sheer volume of data and the need for it to be carefully timed so as not to introduce phase errors.
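
To put an illustrative number on the phase-error point (assuming a 100 µs skew between the V and I conversions on a 50 Hz system):

```
phase skew  : 360° x 50 Hz x 100 µs ≈ 1.8°
at PF = 0.5 : cos(61.8°) ≈ 0.473 vs cos(60°) = 0.5  ->  ~5% error in real power
```

Small at unity power factor, but it grows quickly on reactive loads.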

There was some very early prototype work done on that a few years back: STM32 Development

I was wondering what had happened to that project. @danbates is that project still a going concern or has it died out?

A more modern Cortex-M based processor is way cheaper than the old 8-bit AVR stuff.

Why bother messing with a generic IC anyway?

Atmel makes a couple of energy-monitoring chips that would be even better suited for the job. There have been a couple of projects that got a little exposure on the OEM forum, but no one seems to have paid much attention. @jdeglavina’s Split Single Phase Energy Monitor is one of them. I have one. It’s a solid performer, the support is top-notch, and at 39 bucks for a 2-channel unit it’s one hell of a deal. He sells a 6-channel monitor for 89 bucks.


Yep, I’m a big fan of energy ICs. Another one to keep an eye on is @cab123’s effort with the Shelly EM.


Energy ICs are nothing more than a customised MCU and ADCs in a single package, pre-programmed with an efficient, closed-source program and with a documented API for accessing the data.
Paying a higher price that includes validated quality is a convenient way to reduce risk and save on development costs.
I bet some may even use a less powerful CPU than the ATmega.
I like them too, by the way.

Shelly is a nice and clean example of what can be done.

I had high hopes for that. With enough thought put into it and a modular approach, it could have been good.

Hello @dBC and @Robert.Wall.
Sorry about the silence on STM32. I’ve needed to educate myself on some fundamental aspects. I’ve wanted to share an update for a while, but I keep coming back to certain gaps in my experience. I’ve been thinking hard, not just about STM32 but about other related EE topics.
I imagine it might be a pain to anyone waiting, but I’m glad to be taking my time making test equipment like this:

A DIN rail switch, indicator lights, and possibly another distro block go on later today. I’m proud of it.

Then some basic tests.

I have a fairly detailed document with thoughts on the testing procedure. I’m just getting someone to check it. Perhaps I can share it here with the intention of getting more experienced, constructive feedback? It’s 9 pages long.

@Robert.Wall, dBC - thanks for the replies.

Yes, legacy is both a curse and a salvation for hardware: you’ve got to keep things backwards compatible for far too long, but it also locks users in…! The '328 is a fine example of that in itself. And I totally understand, it is very hard to start from a completely fresh sheet without annoying someone.

Agreed on the V x I being done at the monitoring side, but that’s just a simple multiplication (ish!).

I know the Atmel SAM series better than the STM32s, but I’ve got a few ideas about it. I’ll post up a separate topic when I’ve got the outline done (one that also maintains backwards compatibility).

Happy to look at the test procedure as well @danbates - good to start learning more :slight_smile: