First:

1. Set goals
2. Profile
3. Identify bottlenecks
4. Decide what to do
> Is it possible to alleviate the memory bus from being the bottleneck by using (lightly) compressed data on reads?
If by "memory bus" you mean RAM, no.
However, if your bottleneck is disk read throughput, compression can alleviate that, simply because the disk needs to read less data once it is compressed.
Compression also lets the OS disk cache hold more actual data. Basically, both the size of your cache and your disk throughput are multiplied by the compression ratio.
You need a compression algorithm that decompresses faster than the disk can read: zstd, snappy, or lz4, for example.
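To make the math concrete, here is a minimal back-of-the-envelope sketch in Python, using zlib from the standard library as a stand-in for zstd/lz4 (those need third-party packages). The 500 MB/s disk figure and the synthetic sample are placeholders; substitute your own numbers and a real slice of your data.

```python
import time
import zlib

DISK_READ_MBPS = 500  # assumed raw disk read throughput -- replace with your own

# Synthetic sample; use a slice of your real data instead.
sample = "".join(
    f"sensor/room{i % 8}/temp,2024-01-01T00:00:{i % 60:02d}Z,{20 + (i % 100) / 10:.1f}\n"
    for i in range(200_000)
).encode()

compressed = zlib.compress(sample, 6)
ratio = len(sample) / len(compressed)

start = time.perf_counter()
zlib.decompress(compressed)
seconds = time.perf_counter() - start
decompress_mbps = len(sample) / seconds / 1e6

# Assuming reads and decompression overlap, the effective read rate is the
# smaller of "disk throughput x compression ratio" and decompression speed.
effective_mbps = min(DISK_READ_MBPS * ratio, decompress_mbps)
print(f"ratio {ratio:.1f}x, decompress {decompress_mbps:.0f} MB/s, "
      f"effective {effective_mbps:.0f} MB/s")
```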
There are drawbacks though.
Your files will likely be compressed in blocks. For example, each file is sliced into 64 kB blocks, and each block is compressed independently. This means that a random access to read one 4 kB page forces the entire block to be decompressed, which makes random reads slower.
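A minimal sketch of what that implies, using an in-memory toy layout (64 kB blocks compressed independently with zlib); no real filesystem works exactly like this, but the read amplification is the same idea:

```python
import zlib

BLOCK_SIZE = 64 * 1024   # uncompressed block size, 64 kB (illustrative)
PAGE_SIZE = 4 * 1024

# Toy "file": 1 MB of data, compressed as independent 64 kB blocks.
data = bytes(range(256)) * 4096
blocks = [zlib.compress(data[i:i + BLOCK_SIZE])
          for i in range(0, len(data), BLOCK_SIZE)]

def read_page(offset: int) -> bytes:
    """Read one 4 kB page: the whole 64 kB block must be decompressed first."""
    block_no, in_block = divmod(offset, BLOCK_SIZE)
    block = zlib.decompress(blocks[block_no])   # 64 kB of work for 4 kB of data
    return block[in_block:in_block + PAGE_SIZE]

page = read_page(5 * PAGE_SIZE)
assert page == data[5 * PAGE_SIZE:6 * PAGE_SIZE]
```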
Random writes also require decompression and re-compression of an entire block. Besides taking time, the re-compressed block usually comes out a different size than the one it replaces, so it can't always be rewritten in place, which results in fragmentation.
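The corresponding write path, with the same kind of toy layout: patching one page means a read-modify-write of the whole block, and the re-compressed block usually changes size:

```python
import os
import zlib

BLOCK_SIZE = 64 * 1024
PAGE_SIZE = 4 * 1024

# One 64 kB block of fairly compressible data, stored compressed.
original = os.urandom(16) * (BLOCK_SIZE // 16)
stored = zlib.compress(original)

# Overwriting a single 4 kB page means decompressing the whole block,
# patching it, and re-compressing it.
block = bytearray(zlib.decompress(stored))
block[0:PAGE_SIZE] = os.urandom(PAGE_SIZE)
recompressed = zlib.compress(bytes(block))

# The new compressed block is usually a different size than the old one,
# so it often can't be rewritten in place -- hence fragmentation.
print(f"old compressed size: {len(stored)} B, new: {len(recompressed)} B")
```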
Thus there is a compromise: large blocks compress better, but they slow down writes and small random reads.
Basically the ideal use case is large sequential reads on read-only data.
Note there is no such thing as multithreaded decompression of a single stream: compression algorithms produce output that depends on what came before, so each stream decompresses sequentially. If your file is compressed block by block, then decompressing the blocks can be parallelized. But if the blocks are too large, you may not have enough of them to parallelize efficiently. That's another tradeoff.
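A small sketch of block-level parallelism, again with zlib on an in-memory toy file; how much it actually helps depends on block count, block size and core count:

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

BLOCK_SIZE = 64 * 1024
data = b"fairly repetitive sensor payload, " * 120_000   # ~4 MB sample
blocks = [zlib.compress(data[i:i + BLOCK_SIZE])
          for i in range(0, len(data), BLOCK_SIZE)]

# Each block is an independent stream, so the blocks can be decompressed
# in parallel. CPython's zlib releases the GIL on large buffers, so even
# a thread pool can give some speedup here.
with ThreadPoolExecutor(max_workers=4) as pool:
    parts = list(pool.map(zlib.decompress, blocks))

assert b"".join(parts) == data
# With only a handful of very large blocks, there isn't much left to run
# in parallel -- that's the tradeoff.
```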
I would avoid reinventing the wheel, and instead use a filesystem that supports compression, like btrfs or zfs.
However, there is not much information in your question. You don't even say whether your data compresses with a good ratio or not, so it's hard to make an informed decision...
Another thing to consider would be using a specialized "data warehouse" type of database with columnar storage, like ClickHouse. I'll give an example.
Say you store time series data with columns like mqtt_topic, timestamp, value. You can store that in JSON format in a text file and compress it; that should be quite effective, and you can expect around a 2x compression ratio. Maybe better, if the topic strings take up a lot of space and repeat often enough for the compressor to do some good work on them. Timestamps in ISO format also compress well: if rows are inserted in time order, most characters in a timestamp are the same as in the previous row. The labels in the JSON object are identical on every row, so those are good too. Numeric data in text form usually has high entropy, so it doesn't compress well.
I'm explaining that to highlight how to "eyeball" how much you'll save with compression.
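A quick way to eyeball it is to generate (or sample) some rows, serialize them the way you intend to store them, and compress. The sketch below uses synthetic topics, timestamps and values with zlib, so the printed ratio only reflects the shape of the data, not your real files:

```python
import json
import random
import zlib
from datetime import datetime, timedelta, timezone

# Synthetic rows shaped like the example: topic, ISO timestamp, value.
topics = [f"site1/floor{i}/temperature" for i in range(8)]
t0 = datetime(2024, 1, 1, tzinfo=timezone.utc)
rows = [
    {"mqtt_topic": random.choice(topics),
     "timestamp": (t0 + timedelta(seconds=i)).isoformat(),
     "value": round(random.uniform(15.0, 30.0), 2)}
    for i in range(100_000)
]

raw = "\n".join(json.dumps(r) for r in rows).encode()
ratio = len(raw) / len(zlib.compress(raw))
# Repeated labels, topics and near-identical timestamps compress well;
# the numeric values are what keeps the ratio down.
print(f"{ratio:.1f}x")
```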
But if you put it in a columnar database, you can use several tricks to organize your data, like partitioning and ordering, which open up opportunities for better compression.
I'm getting a 12x compression ratio on my time series data in ClickHouse, for example. Also, the data is already stored in the database, so unlike the JSON example, it doesn't have to be parsed.
If your flat files contain binary data (not text, not JSON), it will be much faster to parse than text, but it usually won't compress well, because there is higher entropy and far less repetition. That's why column-oriented databases rearrange the data to enable tricks like delta or prefix encoding.
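As an illustration of the delta trick (not of how ClickHouse actually implements its codecs), here is a synthetic timestamp column stored once as raw int64 values and once as deltas from the previous value, both compressed with zlib:

```python
import random
import struct
import zlib

# A column of 100,000 int64 nanosecond timestamps, roughly one second apart.
t = 1_700_000_000_000_000_000
timestamps = []
for _ in range(100_000):
    t += 1_000_000_000 + random.randint(-1000, 1000)
    timestamps.append(t)

raw = struct.pack(f"<{len(timestamps)}q", *timestamps)

# Delta encoding: store the difference from the previous value instead of
# the value itself. The deltas are small and similar, so they compress
# much better than the raw column.
deltas = [timestamps[0]] + [b - a for a, b in zip(timestamps, timestamps[1:])]
delta_encoded = struct.pack(f"<{len(deltas)}q", *deltas)

print(f"raw column:   {len(raw) / len(zlib.compress(raw)):.1f}x")
print(f"delta column: {len(delta_encoded) / len(zlib.compress(delta_encoded)):.1f}x")
```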
Again, no way to say if this would be helpful in your case without knowing more.