I do understand what it does, but why does the input signal get shifted in time in the first place?
Is it due to the time delay between the ADC sampling of V then I?

The “V then I” sampling delay is part of the reason the phase shift occurs.
The CTs themselves as well as the AC/AC adapter that supplies the voltage waveform sample
also impart some phase shift to the sampled voltage and current signals.

Got it!
So, I’ve used the emonlib as the base for the firmware of my own energy monitor hardware, but I’m not using any phasecal at the moment and wasn’t sure if I needed one…

It runs on a dsPIC33, which can sample 4 inputs at the same time (removing the sampling delay between the voltage input and the 3 CT inputs it has). I also take advantage of the hardware MAC (multiply and accumulate) to calculate rms values.

I did check the AC/AC adapter waveforms on the oscilloscope and could see that the transformer phase error depends on the load (as expected): the higher the load, the worse it gets.

About the CTs’ phase error, I have no idea how to measure it… I guess the best way would be to attach a purely resistive load on the CT input and compare the waveforms, right?

(I plan to share all the stuff I’ve built and some pics soon)

ATM, I can’t remember the name of the thread the post is in, but @Robert.Wall described the method he uses to check CT parameters. It uses a PC sound card and, unless one knows exactly what they are doing, it can be dangerous.

The way I’ve measured the CTs is reported in the ‘Learn’ section. A ’scope is very difficult to be precise with, so I use a sound card to record the numbers and a Fourier analysis in a spreadsheet to analyse them. I did post details a few weeks ago, but I can’t find them just now.

I’ve done a quick test with a pure resistive load connected to the CT.
In the o’scope picture you can see the waveforms:
Yellow: AC voltage on adc input (after transformer)
Cyan: CT voltage on adc input.
White: AC mains voltage (directly from wall socket).

And here are some webpages from the board webserver:
Measures:

Shows a power factor of 99.9, which seems good enough!

I think that tells you you have an uncorrected phase error of 2.54°, i.e. cos^{-1}(1624.9/1626.5). It obviously depends on your VT and your CT, but I could well imagine that’s the net phase error between them.

If you were measuring a load with a power factor of 0.5, say, then that same phase error would introduce an error of about 8% in your real power measurement, since cos(62.54°)/cos(60°) ≈ 0.92.
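As a sanity check, the arithmetic can be reproduced in a few lines of Python (the wattage figures are the ones quoted above; everything else is straight trigonometry):

```python
import math

# Readings quoted above: real power ~1624.9 W, apparent power ~1626.5 VA,
# taken with a purely resistive load (true power factor should be 1).
real_power = 1624.9
apparent_power = 1626.5

# Net phase error implied by the measured power factor.
phase_error_deg = math.degrees(math.acos(real_power / apparent_power))
print(f"phase error ≈ {phase_error_deg:.2f}°")          # ≈ 2.54°

# Effect of that same error on a load with a true power factor of 0.5 (60°):
measured = math.cos(math.radians(60 + phase_error_deg))
true = math.cos(math.radians(60))
print(f"real-power error ≈ {100 * (1 - measured / true):.1f}%")  # ≈ 8%
```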

Phase errors barely matter at unity power factor, but really kick in at lower power factors.

But you need to bear in mind that, as the voltage waveform isn’t a true sinusoid, the phase shifting introduces distortion in both the shape and the amplitude of the shifted wave, and that will affect the power factor calculation.

Well, the o’scope does report a 2° phase error (PHA_A) on the rising edge but a 0° error on the falling edge (PHA_B), which doesn’t surprise me, as its precision isn’t that great.

Looking at the o’scope waveforms, it seems to me that they overlap too closely for me to worry about phase corrections.

As Robert said, the waveforms get distorted: it can be seen in the o’scope capture that the falling edge of the negative half-cycle has more error than the zero-crossing region, so even if I phase shift the voltage wave to correct the phase at the zero crossing, I would be adding more error elsewhere…

Did a couple more captures and the phase shift is evident between mains AC voltage and sampled signals:

The good news is that both the voltage and current waves are pretty close together.
The worst case, where the phase shift is greater, is at the end of the rising/falling cycles.

Like I said before, I still think that phasecal should not be needed, since it would fix the phase at some points in time but make it worse at others, right?

I’m afraid I don’t understand that concept. Where the two waveforms that you are comparing are of a different shape, the only legitimate concept that I understand is to do a Fourier analysis of each and compare the phase of each harmonic in turn.

And as most of the power is concentrated in the fundamental, it is phase shift at the fundamental frequency that should concern you most. That’s why, when I test a transformer, I only consider the phase shift at, for the UK, 50 Hz. In any case, there’s little you can do about the phase shifts at the various harmonic frequencies.

I agree wholeheartedly. But that means you must have a way to measure both voltage and current that introduces no phase errors (or exactly equal phase errors in both) and performs both measurements at exactly the same instant. For safety reasons, transformers are mandatory where “unskilled” persons will be installing and using the equipment, and the Atmel 328P processor we use doesn’t have a means to take simultaneous measurements on two channels. So as @dBC pointed out, while you restrict yourself to unity power factor loads, phase errors are not a major concern. Where the power factor is poor, a phase error becomes very significant.

It shouldn’t do. If there’s no phase error and a purely resistive load, then you should get unity power factor no matter what shaped V signal you throw at it. If you look at real power in the frequency domain:
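The equation the post refers to here doesn’t seem to have survived; the standard frequency-domain expression for real power, which matches the “summation” discussed below, is presumably:

```latex
P = \sum_{n} V_{n}\, I_{n} \cos(\varphi_{n})
```

where $V_n$ and $I_n$ are the rms amplitudes of the $n$-th harmonic in the voltage and current, and $\varphi_n$ is the phase angle between them.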

A resistor doesn’t care what frequency is passing through it. The stuff at 150 Hz should warm it up just as effectively as the stuff at 50 Hz.

Distortion doesn’t cause non-unity power factors per se, it’s mis-matched distortion that does that, i.e. distortion that appears in one of I or V but not the other. So if you had a bunch of V at 150 Hz with no I at 150 Hz, then that harmonic would drop out of the summation above but it wouldn’t drop out of the Vrms calculation, so real power would be reduced but apparent power wouldn’t be and the power factor would drop away from 1. But that can’t happen with a resistor.
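The effect is easy to demonstrate numerically. This is a toy sketch (not emonlib code) with a clean 50 Hz voltage and a current carrying an extra 150 Hz harmonic, as with a non-linear load:

```python
import numpy as np

# Toy illustration: a 50 Hz fundamental in both V and I, plus a 150 Hz
# harmonic present in I only.
fs, f0 = 5000, 50                     # sample rate and mains frequency
t = np.arange(fs // f0 * 4) / fs      # four whole mains cycles
v = np.sin(2 * np.pi * 50 * t)                           # clean voltage
i = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 150 * t)

# The 150 Hz term drops out of real power (no matching V harmonic)
# but still contributes fully to Irms, dragging the PF below 1.
real_power = np.mean(v * i)
apparent = np.sqrt(np.mean(v**2)) * np.sqrt(np.mean(i**2))
print(real_power / apparent)   # ≈ 0.89: below unity, with no phase shift at all
```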

The more common distortion scenario with non-linear loads like SMPSs is significant I at 150 Hz with very little V at 150 Hz. Again that term drops out of the real power summation above (or is greatly attenuated when multiplied by the very small V component) but contributes to Irms with its full force. So again, real power is reduced but apparent power isn’t, leading to a non-unity power factor.

I was referring to the phasecal algorithm. It is that which introduces the waveform distortion. It only does a ‘pure’ shift when the wave is a perfect sinusoid. If the wave being processed isn’t the one that’s delivering the power, all (well, most) bets are off.

And the major mechanism whereby the power factor becomes wrong in emonLib is that the unshifted voltage is used for the rms voltage, while the shifted voltage is used to calculate real power. In effect, two different values for the voltage appear in the numerator and denominator of the power factor calculation.

That’s my point. The 3 waveforms I show in the pictures should have exactly the same shape because I used a purely resistive load (as @dBC pointed out, you get a unity power factor),
but they don’t, and the reason is the distortion caused by the voltage transformer and the current transformer.

This is where I fail to understand what it has to do with the phase calibration…
Yes, the FFT will tell you “how much” distortion a given transformer causes on the waveform (by looking at the harmonics of 50 Hz), but I don’t see how you can use the FFT to check whether the voltage is phase shifted from the current.

As I said in a previous post, the microcontroller I’m using (a dsPIC33) can sample 4 inputs simultaneously: I use one for voltage sampling and the other 3 for current sampling.
(btw, this is the whole reason I started this discussion - to understand if I needed phasecal or not).

Anyway, I think we are mixing 2 different effects in here:

Distortion: caused by the transformers (whether voltage or current)

Phase shifting: caused by the sampling delay between V/I and/or by the transformers.

If you work through the maths, that is where you end up. Importantly, the cross products (V_{m}·I_{n}, where m and n are different) integrate to zero, meaning that active power is only produced where the same harmonic is present in both the voltage and current waveforms. And, except for ‘flat-topping’, the amplitudes of the harmonics in the voltage wave are small, therefore most of the power is concentrated at the fundamental frequency, even when the voltage and current waveforms are identical in shape.

On that basis, it is my opinion that only the phase error at the fundamental frequency is important.
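The orthogonality claim is easy to verify numerically. A quick sketch: averaged over whole cycles, the product of two different harmonics is zero, so only matched harmonics contribute real power.

```python
import numpy as np

# Numeric check: over whole cycles, the average of sin(m*wt)*sin(n*wt)
# is 0.5 when m == n and zero when m != n, so cross-harmonic terms
# contribute no real power.
fs, f0, cycles = 5000, 50, 10
t = np.arange(fs // f0 * cycles) / fs
for m in (1, 3, 5):
    for n in (1, 3, 5):
        avg = np.mean(np.sin(2*np.pi*m*f0*t) * np.sin(2*np.pi*n*f0*t))
        print(m, n, avg)   # ≈ 0.5 when m == n, ≈ 0 otherwise
```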

Really?

Did you go on to read the paragraph which followed?
When you record the sampled values and perform the DFT, you get the phase and amplitude of each frequency, the phase being relative to the time of the first sample. Do the same for both channels (because you sampled and recorded the two channels in parallel, the starting time is the same for both), then by subtraction you get the relative difference between the two. Performing the DFT is tedious to do by hand, but easy with software or even a spreadsheet; it’s not a difficult procedure.
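For anyone wanting to try this, here is a minimal sketch of the procedure in Python with NumPy. The signals are synthesised here for illustration; in practice they would be the two sound-card channels recorded in parallel:

```python
import numpy as np

# Two channels sampled in parallel; DFT each and compare the phase of the
# 50 Hz bin. A 2.5° lag is injected here so there is something to find.
fs, f0, cycles = 48000, 50, 10
n = fs // f0 * cycles
t = np.arange(n) / fs
ch_v = np.sin(2*np.pi*f0*t)                     # reference channel
ch_i = np.sin(2*np.pi*f0*t - np.radians(2.5))   # channel lagging by 2.5°

# With `cycles` whole mains cycles in the record, 50 Hz falls exactly in
# DFT bin number `cycles`.
bin50 = cycles
phase_v = np.angle(np.fft.rfft(ch_v)[bin50])
phase_i = np.angle(np.fft.rfft(ch_i)[bin50])
print(np.degrees(phase_v - phase_i))            # ≈ 2.5 (degrees of lag)
```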

I don’t think I am. You seem to be confused because you’re referring to amplitude distortion that’s different at different points in the cycle as a phase error that’s different at different points in the cycle. If the zero crossing is shifted because a few samples have moved up or down as a result of the distortion, but the underlying fundamental frequency hasn’t moved in time, then the zero crossings no longer relate to the phase of the 50 Hz component. You are not measuring the quantity you should be measuring.

Agreed. And it is because they don’t have the same shape that it’s necessary to perform any adjustment for phase errors on the 50 Hz component, which is where most of the power is, and ignore the distortion that results from the non-linear transfer functions of the transformers. And if you use ‘our’ phasecal algorithm, you’ve not included in the list the distortion that the algorithm adds to the sampled values before they are used.

What it comes down to is that, outside the theoretical world of the laboratory and sine waves, the method you’re trying to use just does not work. It’s useful for seeing large errors, but when it comes down to a few degrees, it’s simply not sensitive enough.

I think I’ve answered that above. If you can have two sensing circuits that have zero or identical phase errors, you don’t need to correct for the differential phase error because there isn’t one. Otherwise, you do. If your sampling rate is such that you sample at something like every 0.1° or better, then you can compensate for phase errors by picking pairs of samples with a time difference between them, rather than interpolating.
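The pick-a-pair-of-samples idea can be sketched like this (all numbers are illustrative, not from any real hardware): instead of interpolating between samples as emonlib’s phasecal does, a high sample rate lets you simply pair each current sample with a voltage sample taken a few steps earlier.

```python
import numpy as np

# Illustrative sample-pair phase correction.
fs, f0 = 36000, 50                  # 720 samples per cycle -> 0.5° per sample
shift = 4                           # 4 samples = 2.0° of correction
t = np.arange(fs // f0 * 5 + shift) / fs
v = np.sin(2*np.pi*f0*t + np.radians(2.0))   # suppose the VT leads by 2.0°
i = np.sin(2*np.pi*f0*t)

p_raw = np.mean(v[shift:] * i[shift:])       # uncorrected: 0.5·cos(2°)
p_corr = np.mean(v[:-shift] * i[shift:])     # pair i[k] with v[k - shift]
print(p_raw, p_corr)                         # p_corr recovers the true 0.5
```

The finer the sampling grid, the smaller the residual error from having to round the correction to a whole number of samples.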

I did a cross-correlation of the sampled voltage and current signals, measured with a purely resistive load on the emon, and the result is:

The peak is at 0, meaning that there is no lag (phase shift) between the two signals.
So, as you said (@Robert.Wall), my emon hardware doesn’t need any phase calibration.
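For reference, the cross-correlation check can be reproduced along these lines (the signals are synthesised here for illustration; in practice they would be the recorded V and I samples):

```python
import numpy as np

# The lag of the cross-correlation peak estimates the shift between the
# sampled V and I waveforms; a peak at lag 0 means no differential shift.
fs, f0 = 5000, 50
t = np.arange(fs // f0 * 10) / fs
v = np.sin(2*np.pi*f0*t)
i = np.sin(2*np.pi*f0*t)            # resistive load: in phase with v

xc = np.correlate(v - v.mean(), i - i.mean(), mode="full")
lag = np.argmax(xc) - (len(v) - 1)  # 0 -> no shift, so no phasecal needed
print(lag)
```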

I’ll open a new discussion topic to share my emon hw/fw.
Thanks all for the valuable inputs!