ESP8266 WiFi power monitor

The big advantage of the op.amp. is that every input can share the same bias source, and it should exhibit a lower impedance, which should be good. ‘Should be’ because there still needs to be some means of protecting the ADC input if the input is over-driven (for a CT, that could be a downstream fault).
All inputs sharing the same source might not be a major advantage for you, but it does mean that you can filter just one input to determine the bias offset, and then a simple subtraction with no further ado will suffice for every channel. With a lower performance processor, that’s a big advantage.
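To make that concrete, here is a minimal sketch of the “filter one channel, subtract everywhere” idea (the variable names and the filter constant are mine, not from any particular library):

```cpp
// Track the shared bias point by low-pass filtering the dedicated bias input,
// then subtract that estimate from every other channel's raw reading.
long biasFiltered = 512L << 8;        // running midpoint estimate, in 1/256 counts

// Call with each raw reading of the bias channel.
void updateBias(int rawBias) {
  biasFiltered += rawBias - (biasFiltered >> 8);   // estimate += (sample - estimate)/256
}

// Every measurement channel just subtracts the shared offset.
int removeOffset(int rawSample) {
  return rawSample - (int)(biasFiltered >> 8);
}
```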
Don’t forget that the cost is not the bought-in cost of the additional components; it’s the extra board area they require, the work of physically placing them on the PCB, and then testing them too.
The point about lower impedance being good means that you’re less likely to see errors arising from charging the sample & hold capacitor. The AVR spec sheet lays down a maximum source impedance (and I don’t expect it’s unique there), the inference being that a high source impedance will cause the voltage being measured to dip (or more correctly, change) as the source charges or discharges the S&H capacitor. That’s the effect you rediscovered when you had an insufficiently large capacitor. So with an op.amp. bias source, the current to charge or discharge the S&H capacitor comes from the supply via the output stage of the op.amp, rather than as a transfer of charge from the bias decoupling capacitor.
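To put rough numbers on that (illustrative figures, not from any particular datasheet): two 10 kΩ bias resistors present a Thévenin source resistance of about 5 kΩ, and with a sample capacitor in the region of 15-20 pF the charging time constant is roughly 75-100 ns. Settling to within one count of a 10-bit result takes about seven time constants, i.e. somewhere around 0.5-0.7 µs, so a short or fast sample window combined with a high source impedance leaves the reading part-way between the previously held value and the true one.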
Finally, as you alluded to earlier, the op.amp. must be quiet in order not to inject noise into the system. You can of course get noisy resistors and capacitors as well as noisy op.amps.

Which you choose is a matter of weighing up the advantages and disadvantages of each, as you perceive them.

That’s a pretty good rundown of the issue as I saw it as well. When I used the voltage regulator, I fed it into the unused ADC channel directly to read the bias value. But with the 10K resistors, I get midpoint ADC readings of 510-512. Some of that is probably due to variation in the actual resistors, but that explanation alone would suggest 511-513. So I’m probably losing one ADC count. That’s the specified accuracy of the ADC, so I’m not losing sleep over it. With the voltage regulator (a pretty decent one) I saw maybe 1 count of variation with no ADC channels being read, but while running with a few channels active, an idle channel would bounce around by 3-4 ADC counts. Not good. Moreover, it didn’t seem to correlate with the AC phase, although that was just a naked-eye observation of the numbers and not a true regression.

More important, to my way of thinking, is that whatever the depression in the bias voltage is, it’s probably the same for each measurement, and as long as I take care to determine that value statistically, I should be good.

There are three points in favor of resistors that I recognize:

  • It’s a simple solution from my perspective.
  • As long as the bias is derived by splitting the AREF voltage, the bias value will be immune to noise in the AREF circuit. (There is probably a way to peg an op-amp to a reference derived from AREF as well.)
  • The CTs sit on top of the bias voltage, and any contact to ground would effectively short the bias on all of the CTs with a common bias supply. With resistors, it simply dumps AREF to ground through a 10K resistor - about 330 µA.

So I’m going with what I’ve got for now. Any additional thoughts about the original phasing issue? Sliding the current samples to be in phase with the voltage seems to be more accurate, but I’d like to understand why the AC transformer and CT seem to work in opposite directions (one leads, the other lags). Or maybe the CT is just 20 degrees out and the 6-degree transformer nets it to 14? That doesn’t match your measurements. This situation was measured on a YHDC SCT-013-030 - basically an SCT-013-000 with an internal 62 Ω resistor.
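For reference, one way to do that “sliding” in software (not necessarily what’s in my current code) is to interpolate between consecutive samples of one channel; a minimal sketch with made-up names, along the lines of what emonLib’s PHASECAL does on the voltage channel:

```cpp
// Shift a waveform by a fraction of one sample interval by interpolating
// between its previous and current samples before the V*I multiplication.
double lastI = 0, thisI = 0;
double phaseCal = 1.2;   // illustrative value; has to be found by calibration

double shiftedI(double newI) {
  lastI = thisI;
  thisI = newI;
  // phaseCal = 1 leaves the sample untouched; values above 1 extrapolate
  // beyond the newest sample, values below 1 reach back toward the older one.
  return lastI + phaseCal * (thisI - lastI);
}
```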

Thanks,

I haven’t measured the SCT-013-030, so I can’t comment, but I think it should be between the SCT-013-000 with a 22 Ω burden and the same with a 120 Ω burden, but closer to the 22 Ω curve. Also, I haven’t measured any a.c. adapters other than the UK ones, so I’ve no idea how your transformer behaves. Unless you have a means of accurately measuring either the voltage or the current transformer’s phase error, you’ll never know how the difference is made up, all you can know is the nett difference. And remember, I test at 50 Hz. The 60 Hz errors will be different.

I think we’ve got something to work with here.

See if this makes sense. I made up a power cord with a voltage divider and plugged it in alongside the AC adapter:


[ DON’T DO THIS - THIS CIRCUIT IS POTENTIALLY DANGEROUS - Moderator (RW)]

When I look at the two traces on the scope, channel 1 (the AC adapter) crosses zero 280 µs before channel 2 (the line voltage). My device presents an 11K load to the AC adapter, so pretty close there.

One 60 Hz cycle takes 16,666 µs, so I get a phase shift of 360 × 280 / 16666 ≈ 6 degrees. Not sure of the terminology, but I’ve been saying the AC adapter leads the line voltage by 6 degrees.

I suppose I could do something similar by scoping the VT against the same line voltage probe with various loads on the AC line. Does this make sense?

Before you think any more about that circuit, stop and work out what will happen if something goes wrong, for example the neutral falls off outside your house.

The consequences barely bear thinking about. You’ve said you’re a software guy with not too much knowledge of electrical engineering, so quit while you’re still alive. If that happens, you’ll have a neutral-earth fault and all the current that was carried by your neutral will flow through your ’scope lead to ground. And when that fails, everything is live to line.


Revisiting this question from a few weeks ago, there have been some new developments. I’m actually running the MCP3008s at 4 MHz now, and as you point out, the sample-and-hold (S&H) cap needs a really low-impedance source to work at that speed under “normal” conditions.
Even with 10K voltage dividers, I had quite a bit of sag running at 2 MHz. I came upon a software solution…

The datasheet explains that the S&H period is defined by the rising and falling edges of two specific CLK cycles in the SPI transaction. In another section, the datasheet recommends a methodology for programming the SPI transaction when using an MCU where the transaction has to be in byte (8-bit) multiples. In that scenario, the S&H-defining clock cycles are embedded in one of those bytes. You see where I’m going with this…

The actual SPI transaction is 17 bits. In a 24-bit stream, it is defined as the first “one” bit (the start bit) and the 16 bits that follow. They suggest right-aligning the transaction, padding with seven leading zeros before the start bit in the first byte. You can shift the whole transaction left by 4 bits so that the S&H period is defined by two clock cycles that straddle the first and second bytes. If you then separate the sending of those bytes, you can effectively make the S&H period any length you please. In my case, the natural latency between sending the first byte and the second seems to be enough to satisfy the S&H cap with my higher input impedance.
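To illustrate, here’s a minimal Arduino-style sketch of that split transaction. The chip-select pin, SPI speed and the exact bit positions are my own reading of the datasheet rather than the project firmware, so check them before relying on it:

```cpp
#include <SPI.h>

const uint8_t ADC_CS = 15;            // hypothetical chip-select pin for one MCP3008

void setup() {
  pinMode(ADC_CS, OUTPUT);
  digitalWrite(ADC_CS, HIGH);
  SPI.begin();
  Serial.begin(115200);
}

// Read one single-ended channel with the command shifted left 4 bits, so the
// S&H aperture (4th rising to 5th falling clock after the start bit) straddles
// the boundary between the first and second bytes.
uint16_t readADC(uint8_t channel) {
  uint8_t cmd = 0x18 | (channel & 0x07);   // 0 0 0 start single-ended D2 D1 D0

  SPI.beginTransaction(SPISettings(2000000, MSBFIRST, SPI_MODE0));
  digitalWrite(ADC_CS, LOW);

  SPI.transfer(cmd);                 // sampling begins on the last clock of this byte
  // The pause between these two transfers stretches the S&H period; add
  // delayMicroseconds() here if the natural byte-to-byte latency isn't enough.
  uint8_t hi = SPI.transfer(0x00);   // null bit, then B9..B4
  uint8_t lo = SPI.transfer(0x00);   // B3..B0, then padding

  digitalWrite(ADC_CS, HIGH);
  SPI.endTransaction();

  return ((hi & 0x3F) << 4) | (lo >> 4);
}

void loop() {
  Serial.println(readADC(0));
  delay(1000);
}
```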

Just a heads-up that the project direction has changed with respect to scalability. The next generation uses a NodeMCU 0.9 ESP8266 development board. These are available for less than $5 from China and $8 or so from a US reseller. It doesn’t make a lot of sense to gang a bunch of boards together to use one processor.

So in the case where you want to monitor 41 circuits, I would say it would work better to just use three of these devices, each independently sampling and sending its 14 channels over WiFi. A single AC voltage reference can be split to serve all three, and the devices run on USB power to the NodeMCU boards, so power can come from a multi-port USB brick with three micro-USB cables.

The way this design has settled out, you can use any of the CT channels as a voltage-sensing channel by adding a capacitor and two voltage-splitting resistors externally. The resulting circuit would be identical to the “voltage” channel that is hardwired on the board. Associating the appropriate voltage channel with the various CTs would be trivial in software. With that in mind, I’ll keep the voltage reference channel as an easily modified variable.
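As a sketch of what “trivial in software” means here, the pairing could be nothing more than a small lookup table (the channel numbers below are purely illustrative):

```cpp
// Hypothetical per-CT map to the voltage channel each one should be paired with.
const uint8_t NUM_CT = 14;
uint8_t voltageChannelFor[NUM_CT];

void initChannelMap(uint8_t defaultVchan) {
  for (uint8_t ct = 0; ct < NUM_CT; ct++) {
    voltageChannelFor[ct] = defaultVchan;   // normally the hardwired voltage input
  }
  // If one CT input has been rewired as a second voltage sense, point the
  // CTs that should use it at that channel, e.g. voltageChannelFor[9] = 5;
}
```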

Can you just share the voltage reference by connecting the input pins on the NodeMCUs and the voltage sensor? Or is another connection required?

The voltage reference is read by an Analog-to-Digital Converter (ADC) that is not on the NodeMCU board. While I think it would be possible to interconnect all of those on-board signals, it would require that they also have many other things in common, like ground and the ADC reference voltage. Also, given that these devices typically live near electrical panels and a lot of high-voltage AC wires, noise picked up from the interconnect wires could degrade the accuracy considerably.

The simplest and easiest way will be to split the AC wall plug into three 2.1mm power jacks.

Got a new prototype board and populated it. It’s working very well at a little over 63K samples/second. The WiFi connection is solid. The ADCs seem to be reading well within the margin of error of my Fluke 175 true-RMS meter. Now I need to develop better calibration data for the CTs.

This version accepts the inexpensive NodeMCU 0.9 ESP8266 (about $6 US). It’s powered by the micro-USB port and uses the Arduino IDE with C++. I added a shunt voltage reference on the unused ADC channel (2.5 V, 0.1%) so the device self-calibrates everything but the AC voltage reference. Also added is a micro-SD slot to log the power data on board, in addition to or instead of the WiFi export. An 8 GB SD card would hold a few decades of power logging at 10-second intervals.
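The self-calibration amounts to reading everything as a ratio against the known reference; a minimal sketch (channel number and names are illustrative, not the actual firmware):

```cpp
const uint8_t REF_CHANNEL = 15;     // input wired to the 2.5 V shunt reference
const double  REF_VOLTS   = 2.5;

// Convert a raw reading to volts by ratio against the reference reading,
// so drift in the ADC's supply/reference voltage cancels out.
double toVolts(uint16_t raw, uint16_t refRaw) {
  return (double)raw * REF_VOLTS / (double)refRaw;
}
```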

Any of the 14 channels can also be used for 10 kΩ temperature sensors or standard 4-20 mA output sensors. Build cost is under $40 at qty 3, assemble yourself. Mostly surface mount (way easier than through-hole once you tool up a little).

The ADCs are under the MCU board, leaving a lot of space for additional sensor/output hardware.

I’m working on putting the schematic and board on GitHub as well as the code. Also, there are still a few minor problems with the board that I had to rework; they will be fixed in the next version. I’ll also be removing the eMonESP logo, as I see that the founders of this project are using that name for a version of their product that uses the ESP8266 as a WiFi add-on.


Great to see so many channels and a decent SPI ADC in use. I would love your design files to hack around with and apply my experience with (much slower) I2C ADCs. I can recommend a usability change for the NodeMCU-based design: add a double row of headers to be compatible with the different variants on the market. See my design below:

I’ll look into that. I understand that the NodeMCU 1.0 is narrower. I didn’t know it had the same pinout. That being the case, it would make sense to use the double headers. I’d have to tighten up the traces a bit to squeeze the ADCs in there, but I think it’s doable.

I’ll be publishing the schematics and PCB files, hopefully within the next month. But here’s the schematic. I’ll make the code available on my blog soon.


10/29/16 Note: the schematic has changed to version 2.1. The pinout of the microSD was changed.

I source my connectors from a certain shop on AliExpress for large-quantity use and from RS-Online or Element 14 for prototyping. They are priced at about 25-50 cents each. I got a box of 200 or so for $50 and I’m still using them. Let me know if you want some and I will post them to you free.

Another design review note: the MCP3008 functions from 2.7 V to 5.5 V, so supplying it from the 3.3 V rail is fine, and you can save some parts on the MISO line. The ESP8266 is also 5 V tolerant. I guess the microSD, which is also on the same line, is not. I will transfer your schematic to Eagle and lay it out with dual-row headers.

Not sure where you get that the 8266 is 5 V tolerant. It is NOT 5 V tolerant. To my knowledge you’ll fry it if you put 5 V through it.

Simon

I think the terms “5 V powered” and “5 V tolerant” are being mixed incorrectly (not the first time nor the last). The ESP8266 breakout boards being used have on-board 5 V-to-3.3 V regulators and the ESP8266 itself is operating at 3.3 V, which is very different from being “5 V tolerant”, which, as pointed out, the ESP8266 is not.

There is some lost-in-translation discussion on Hackaday about this: ESP8266 is 5V-tolerant after all? | .Stack | Hackaday.io. But the point is moot since the microSD is not 5 V tolerant. Powering the ADC with a lower voltage will reduce the part count and design complexity a tiny bit.

In my experience with this circuit, the MCP3008 ADCs need 5 V to run reliably at 2-4 MHz. I tried 3.3 V.

[quote=“whatnick, post:31, topic:1692”]
But the point is moot since the microSD is not 5 V tolerant.
[/quote]

There is the spec for SD and there is the reality. As the schematic demonstrates, the card itself lives on the SPI bus, and so the tolerance to local conditions will probably vary with the manufacturer of the card. Even with the 3.3 V level downgrade, I’ve had mixed results using the SD. For that reason, my next board will use an actual level shifter (recommendations welcome) for the ADC MISO signal. The idea is that I’d like to preserve the high impedance of the ADC tri-state pin when using the SD card.

Knock yourself out, but I can give you the ExpressPCB file if you want. I tried the Eagle software and found it to be so clunky that I gave up on it. I haven’t been at this long enough to have a broad experience, but the ExpressPCB application is very intuitive, and the ordering, service, cost, and board quality have been to my satisfaction ($61 for three boards in less than a week).
The Eagle software may have virtues that I don’t appreciate with this simple project, but it doesn’t seem to be a good fit for me. That said, if there is broader interest in this device, the advantage of portability to different PCB makers is not to be underrated, especially if it ever came to a production build.

I am just a bit attached to Eagle due to my investment in libraries. In this design - http://wiki.seeedstudio.com/wiki/Xadow_-_SD - I used a buffer to drive the microSD, as do a lot of Adafruit and Sparkfun boards.