It is possible that this has been done before; you could try searching the old forum.
The short answer is everything happens at half speed. (Except of course the timed delays and serial baud rates). Therefore, as the ADC conversion time is reckoned in clock cycles, that too will happen at half speed. So instead of reading about 50 sample pairs per cycle (in the 50 Hz world), you will read about 25 sample pairs per cycle. This halves the highest frequency that can be measured, to an absolute maximum that’s a little over 1 kHz. But it should still be enough for most purposes as the power in the harmonics at that frequency should be minimal.
There could well be a degree of inaccuracy arising from the lower sample rate if you have a significant load controlled by a thyristor/triac switching in phase-angle mode, because the turn-on step could happen up to 1/25 of a cycle before it is seen, compared with 1/50 of a cycle at the higher clock frequency.
Actually, you can add the ADC to the list of things that won't be impacted if you want, i.e. timed delays, serial baud rates and ADC conversions. The ADC is a lot like the UART or the timers in this regard: they all have their own pre-scaler clocks fed by the system clock. So obviously you need to know the speed of the system clock when you set up those pre-scalers to get the correct result, but it means they can remain unaffected by different system clocks.
I run my 2560 at 3.3V which means I have to run it at 8MHz(*) (it can only run at 16MHz at 5V), but I still run the ADC clock at 125kHz, so my conversions still take 104 usecs. You simply need to write a different value to the ADPS bits in ADCSRA. On 16MHz systems a divider of 128 gives you the 125kHz ADC clock; on 8MHz systems a divider of 64 gives you the same.
If you do plan on doing serial I/O, or you want your timed delays to be close to reality, you'll almost certainly need to calibrate the internal oscillator on a part-by-part basis. AVR053 has all the details, and it depends on exactly which part you're using, as the internal oscillator has evolved over the years. The basic method is to run a small calibration image on the AVR while the ISP feeds it an accurate clock. The calibration f/w uses that clock to work out how far out its oscillator is, and stores that result to a known location in EEPROM. Your production firmware then reads that location at boot time, and uses it to tweak the oscillator frequency.
(*) But I don’t use emonlib so don’t take that as an indication that it will or won’t work.
EDIT: rough calculation added: if we assume 50 sample pairs per cycle at 16MHz, then that’s 20000/50 = 400 usecs per pair. Assuming everything is done sequentially (no overlapping the maths with the conversions) then that’s almost a perfect 50/50 split between conversion time and CPU time at 200 usecs each (since one conversion takes about 100 usecs at 125kHz). So on an 8MHz system, it’ll still take 200 usecs to do the two conversions (assuming you tweak the pre-scaler to keep the ADC running at 125kHz) but the maths will blow out to 400 usecs, so it’ll now take 600 usecs per pair, or about 33 sample pairs per cycle.