What happens downstream from that? Do you really need to store the data indefinitely for analysis months or years later, or is that just an intermediate step to allow some other process to grind through the data in real time?
Also, do you want to store all of the data all of the time forever, or just a short burst based on some trigger event?
If the storage is really just an intermediate pipe to another process that grinds through the data in real time, then you might get some big gains by moving that process onto the same CPU that's collecting the data and doing the grinding there. The earlier you can reduce the volume of data, the easier the problem becomes.
While it’s only an early-stage prototype, something like @TrystanLea’s emonSTM might come close to what you need. It can easily sample at that rate, and has some pretty decent local performance for processing the data (72 MHz Cortex-M4 with FPU and DSP instructions). But it won’t help at all if you truly need to store all that data for future reference. As @Frogmore42 points out, there’s other kit designed to do just that.