EmonTx V2 peculiar Vrms readings and power

I just added one of your ac/ac adapters as a voltage reference to my emonTx V2. I have updated the sketch from emonTx_CT123 to emonTx_CT123_voltage. The unit is now giving me some very strange readings. The mains Vrms is hovering around 263 volts and the power has gone negative.

What have I missed? The mains voltage reading is definitely incorrect, or the power factor is extremely weird.


Negative power simply means that the c.t. (or the a.c. adapter, but not both) is the wrong way round. The easy way is to flip the c.t. on its cable. Our convention is imported / consumed power is positive, exported is negative.

You will need to calibrate your voltage & current inputs. The V2 relies on the internal band-gap reference inside the processor for its reference voltage, and while the stability of that is good, the uncertainty of its value is very large - anywhere within the range 1.0 - 1.2 V, so about a ±10% spread. Then your mains could be as high as 254 V.

I don’t understand what you mean there. Unless you have a large capacitive load, it won’t make the voltage rise much. Heating loads, and most modern switched-mode power supplies, are close to unity power factor, so will only tend to decrease the voltage.

Hi Robert,

Thanks for the quick reply. My jibe about the power factor was not serious, sorry. The old sketch measured power usage to within about 10% of the meter reading. So I guess the adapter needs switching round; I did not notice whether the plug was reversible or not. I think the last reading I took on the 3.3 V rail was about 3.267 V. I tried to use the calibration sketch, but it was misbehaving and doing a calculation after every key press.

I will go and look at it again. It just seemed to me that the margin of error on the vrms reading was way too much.

It has been a few years since I last posted on this forum :slight_smile:



I got the reading down into the 250s by using the correct calibration parameter in ct1.voltageTX(227.59, 1.7); the default value in the original sketch does not match the value in the tables in the calibration document.

I am now looking at how to adjust READVCC_CALIBRATION_CONST. Overriding it in the sketch will not work. The only way I am aware of uses templates, but EnergyMonitor is not a templated class. So once again: what have I missed? I do not like the idea of hacking it in the emonLib include file.


I wouldn’t bother. Now that you’ve got sensible readings all round, changing that will mean starting again. The only reason to calibrate the reference is to bring all the other calibration values nearer to their nominal values.
(And before you ask, I didn’t write emonLib nor the sketch. It was done that way to work off an unregulated battery supply. The emonTx V3.4 has a regulator on the 3.3 V rail, and for that we don’t use the internal band gap reference.)

While you’re at it, you can take out the if (settled) around sending the r.f. data. That was for when emonLib used a high-pass filter that took several calls to read the values to settle.

My joke about the power factor has rebounded on me. I modified the sketch to send real power, apparent power, power factor and rms voltage back to emoncms. The results show a power factor of about 0.2! Does anyone have any suggestions? Using the old no-voltage sketch gives real power readings within 10% of the meter.

That’ll teach you. :grinning:

There has to be something seriously wrong (or you have a lot of low power factor loads). Have you done the calibration of the transformers’ phase error, i.e. changed the “1.7” in ct1.voltageTX(236.2, 1.7); (etc) ?

That corrects for the combination of the phase errors in the v.t., the c.t. and the time delay between reading the two quantities, and it may well be different for each combination. Plus, for both transformers, it depends on the value being measured. If you’re interested, look at the test reports in ‘Learn’.

I’m confident there’s nothing inherently wrong in the V2; in fact I was using one to develop the ‘continuous monitoring’ library, on account of having spare outputs on which I could hang a ’scope to check the timing. So the problem has to be either in the transducers or in the sketch/calibration.

It has got to be the phase angle on the transformer that is wrong. I had left it at 1.7. The transformer is the same one you reviewed and tested, but it is definitely giving a different off-load voltage reading, and the plug was wired back to front. So it looks like the spec has changed, although they have not updated the datasheet. It is a bit hot in my workshop for brewing 3 kW kettles, but if that is what it takes… I think I will do some empirical tests first.

Interesting. We asked Ideal to check not very long ago, and they were unaware of any change.
@pb66 has also reported that he has found the phasing appeared to be random.

I tried searching for that post by @pb66 but failed. Any chance you could point me at it.



There wouldn’t be much of use to you if you did find the post; it was simply an observation. I have bought many of these AC adapters in the past, and since I do predominantly 3-phase installs, I always check the polarity to ensure I have three with the same orientation, otherwise it gets very confusing.

I kept records and found around 30% of the AC adapters I purchased were anti-phase, it’s not a problem if you are aware, but when you are not aware it can be a huge headache.

To check the orientation you simply need to hook up an emonTx (or similar) and clip a CT onto a steady, single-direction load, then try each AC adapter. If they all give the same orientation (power direction), that’s great, and it doesn’t really matter which way round they are, as you will fit the CTs to suit.

But if any one gives an inverted reading, e.g. -300 W (negative) rather than +300 W (positive) or vice versa, you can either invert all the CTs that use that AC adapter as a voltage reference (a little messy to maintain at a later date if half your CTs are inverted), or you can invert the voltage signal by cutting, swapping and reconnecting the two cores of the low-voltage AC adapter lead. This is the preferred method if done neatly.

What you shouldn’t try to do is adjust it out in software: not only is that unlikely to work, but if the AC adapters get swapped it will be way, way out. The cut’n’shut method is the cleanest, but yes, you may have an unsightly ball of insulation tape if not done well.

I used so many of them that I could just group them into sets of 3 without worrying about cutting them about. The direction of the AC adapter is only relative to the CT’s that reference that AC adapter, and it’s only when you have multiple AC adapters that consistency becomes a necessity.


That is what I did. My setup is single phase. However, I am still getting some really strange results on my test rig. The power factor seems to be all over the place and the phase displacement on the transformer is way off. It appears that there is some instability somewhere.

I am scratching my head.


You said

What is the actual voltage out of your a.c. adapter, at what input voltage? You want, allowing for component tolerances, about 1.1 V rms at the ADC input at maximum voltage. The potential divider in the V2 is 10 kΩ & 100 kΩ, so 11:1, therefore you can’t afford much more than 12 V at your highest mains voltage. I’m wondering whether it’s clipping?

Mains 249.6 V, transformer secondary 12.25 V: ratio 0.049078. Transformer secondary 12.25 V, ADC input 1.115 V: ratio 0.091020. Total ratio 0.004467. So for 253 volts the ADC would see 1.130 V. For my commonly seen mains voltage hovering around 252 V it would see about 1.126 V. I am not familiar with the Atmel ADC, but I assume that is very close to or over the limit.


My ‘rule-of-thumb’ says 1.1 V rms at the ADC input, that takes component tolerances into account and leaves a few percent for headroom. The transformer is actually within spec at +1.5% (spec is 240 V in, 11.6 V ±3% out, unloaded).

The ADC input is rail-rail. Taking worst-case values…
The rail is nominally 3.3 V, “typically” -1.1%, possibly -4%
The voltage divider is 1/11, 1.83%
The transformer +3% (yours is +1.5%).

So at 254 V in, transformer output is 12.277 V, +3% = 12.645 V
Divided down gives 1.15 V, + 1.83% = 1.17 V.

Peak-peak input is 3.3 V - 1.1% = 3.263 V, or 1.154 V rms (assuming a pure sine wave). But UK mains is ‘flat-topped’, so the peak is a little less than √2 × rms (I measured about 1.39, from memory.)

So you may well be seeing clipping. I suppose the easy way to see if this is the case is to load one of Robin Emley’s test sketches; they are all listed under ‘Robin’s Mk2 Code variants and associated tools’ on the archived forum. One of the ‘Raw samples’ sketches, or especially the ‘Min and max’ one, should show if there’s a problem (assuming you do it when the mains is high).

If clipping is the problem, I’d suggest replacing R13 with a 120 kΩ (or stand it on end and add a 22 kΩ in series) - which will bring it in line with the V3.

Thanks for the excellent link. There are some very interesting sketches there. I tried the minMaxRangeChecker. These are the results:

min  max  range
511  515    4;
510  513    3;
 34  993  959;
509  512    3;
511  514    3;
510  513    3;
 34  993  959;
509  512    3;
511  515    4;
510  513    3;
 33  993  960;
509  512    3;
511  515    4;
510  513    3;
 33  993  960;
509  512    3;
511  515    4;
510  513    3;
 33  994  961;
509  512    3;

So I guess it is not clipping - just. I assume that 1023 is the maximum, so it is quite close. I think I need to obtain a better dummy load for doing the tests. I moved the emonTx V2 up to my workshop because I had run into a problem with the remote FTDI setup, and when I ran the tests up in the workshop I was using a large pedestal fan as the load. I know, not ideal, mostly reactive. I will have to sneak into the kitchen for a kettle.

As an aside, the FTDI problem still persists in my workshop with both my sketch and Robin’s. If I switch either the transformer or the fan on or off, the USB host controller in my Linux PC receives a reset. This causes Linux to delete and then recreate the tty device. If whatever is reading the device holds the device file open rather than just terminating, the device will be recreated under the next available name, for example /dev/ttyUSB1. Somehow the emonTx V2 FTDI interface is signalling something to the host. The only out-of-band signal connected is FTDI pin 6 (RTS/Handshake), which is usually used as an incoming reset signal by the Arduino IDE; this pin is hooked via a capacitor to the PC6/RESET pin on the AVR, and whether that pin acts as a reset or a port C I/O line is controlled by a programmable fuse. The other possibility is a control character sent in-band to the host and picked up by the Linux driver (or even in the FTDI chip). I think I should really post this as a separate topic.

After the kettle test, I will hack up a voltage divider and measure the transformer waveform displacement on my scope.


Edit - reformatted data for readability. BT

That sounds much more like an inductive spike off the fan motor, rather than anything complicated.

You could use a much lower powered load and a multi-turn primary for your c.t. I use 20 turns and a small 250 W convector heater for phase error tests, otherwise a 6.5 V, 8 A transformer and a metal-clad resistor on a heat sink - only 30 W dissipation, and that’s too much in this weather!

That’s OK for a quick look, but not at all good for accurate measurement. Really, there’s no substitute for using the emonTx itself, because that includes the timing difference as well.

The strange thing is that it happens even when the fan is not connected and the transformer is turned on or off. I am just puzzled how an event happening on something connected to an ADC input on the ATmega can be communicated through the FTDI header, then through the FTDI chip to a USB hub and on to the Linux USB subsystem. I will have a think about it and post a new thread.



The displacement can be seen but is far too small to be measured.


Nothing surprises me any more where interference is concerned.

That’s the magnetising current.

Aaaarrrggghh!!!! Just discovered that the ferrite core in the current sensor has a crack in it!

No wonder the readings are so crazy.