OpenAI GPT-4 Emoncms knowledge

With all the talk around GPT-4 and the impressive results being shared, it’s interesting to see how well it does at OpenEnergyMonitor topics that we can verify :slight_smile: it’s pretty incredible really!

It gets most of that perfect. I’m not sure about the low write-latency comment; perhaps that’s true, but the optimisation goal was more about reducing the total number of SD card writes.

I then asked it if it could produce a Python script to read from raw PHPFina data files; this was the result:

import os
import struct
import math

def read_phpfina_metadata(meta_file):
    with open(meta_file, "rb") as f:
        meta_data = f.read(16)  # 16 bytes: two little-endian unsigned 64-bit values
        start_time, interval = struct.unpack("<2Q", meta_data)
    return start_time, interval

def read_phpfina_data(data_file, start_time, interval):
    data_points = []

    with open(data_file, "rb") as f:
        file_size = os.path.getsize(data_file)
        num_points = file_size // 4  # each data point is a 4-byte little-endian float

        for i in range(num_points):
            raw_data = f.read(4)
            value = struct.unpack("<f", raw_data)[0]

            if not math.isnan(value):  # skip missing values, which are stored as NaN
                timestamp = start_time + i * interval
                data_points.append((timestamp, value))

    return data_points

def main():
    meta_file = "feed_1.meta"
    data_file = "feed_1.dat"

    start_time, interval = read_phpfina_metadata(meta_file)
    data_points = read_phpfina_data(data_file, start_time, interval)

    for timestamp, value in data_points:
        print(f"Timestamp: {timestamp}, Value: {value}")

if __name__ == "__main__":
    main()
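Incidentally, filtering NaN values in Python has a gotcha: NaN compares unequal to everything, including itself, so a check like `value != float("nan")` is always True and never filters anything out. `math.isnan` is the reliable test:

```python
import math

nan = float("nan")
print(nan != float("nan"))  # True: NaN is unequal to everything, even another NaN
print(nan == nan)           # False: NaN is not even equal to itself
print(math.isnan(nan))      # True: the correct way to detect NaN
```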
So close! It didn’t quite get the read-metadata function right, though that’s partly my fault for a confusing metadata structure perhaps…

I tried asking it for a correction but it still got it wrong again. After I explained the correct structure it was able to rewrite the function correctly:

The code now outputs the data correctly!

I tested the earlier ChatGPT as well, and it’s definitely getting a lot better at answering these questions.

While it’s not perfect, I’m still blown away by what it can do!


Later we’ll see you doing code reviews where suggestions are made about how to bring your code more in line with what the robots are likely to be able to consume :slight_smile:


Yes really interesting.

It might be that the documentation is not clear enough. If you ask it a second time (in a new session), has it learnt the correct structure, and does it generate the code right first time?

Part of the issue is that humans tend to be imprecise or ambiguous in their problem statements (a wider problem than just AI questions). Take “There are 100 murderers in a room. If I kill one, how many murderers are left?”

There is too much ambiguity and the AI has to infer (guess) an answer that might be right, or might not. It also doesn’t tell you if it has guessed.

A more precise specification would answer the following:

  1. Are you in the room already, so part of the 100?
  2. Are you already a murderer?
  3. Are dead murderers counted?
  4. Is it a deliberate act or an accident?

(there are probably more)

So (this might work): “I am not a murderer and I am in a room with 100 murderers. If I deliberately kill one of those 100, how many living murderers are there?”

No, it’s still getting it wrong; it has even started making up completely new parameters in the metadata file, such as the number of decimal places to round to, and engine…

Hopefully it will get better once it integrates the browser plugin rather than relying only on its training data…