FPGA-based Buffering to accelerate large data applications
Published in Computational Sciences
The disk writes and later reads cause the application's I/O time to explode. Furthermore, because data is generated and retrieved in frames, this causes heavy cache pollution, which degrades execution time even further. We have managed to achieve execution times that are almost equivalent to those with an infinite memory-based buffer (i.e. as if the data were buffered in the CPU's RAM rather than on disk).

The basic idea is to send data frames to the FPGA for near-lossless compression before they are written to disk. The data is later fetched from the disk to the FPGA for decompression before it is written back to the application's memory buffer. Separate servant threads manage this process independently of the application threads. The application threads keep writing to and reading from the same memory buffers (double buffers for each direction) the whole time, while the servant threads keep emptying and filling these buffers in a completely seamless manner -- the application threads are completely oblivious to this fact! The result? Performance levels on par with an infinite in-memory data buffer!
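To make the double-buffering idea concrete, here is a minimal sketch of the write path only, in host-side C++. It assumes a hypothetical fpga_compress() placeholder standing in for the FPGA's near-lossless compressor, and an illustrative output file name; it is not the paper's implementation. The application thread fills one half of a double buffer while a servant thread drains the other half (compressing it and writing it to disk), so the application never blocks on disk I/O directly.

```cpp
#include <condition_variable>
#include <cstddef>
#include <cstdio>
#include <fstream>
#include <mutex>
#include <thread>
#include <vector>

constexpr std::size_t kFrameBytes = 1 << 20;  // 1 MiB per frame (illustrative)

struct Slot {
    std::vector<char> data;
    bool full = false;        // true: frame is ready for the servant thread
};

std::mutex mtx;
std::condition_variable cv;
Slot slots[2];                // the double buffer
bool done = false;

// Hypothetical stand-in for offloading a frame to the FPGA compressor.
std::vector<char> fpga_compress(const std::vector<char>& frame) {
    return frame;             // identity here; the real work happens on the FPGA
}

// Servant thread: waits for a full slot, compresses it, writes it to disk,
// then marks the slot empty so the application thread can reuse it.
void servant(const char* path) {
    std::ofstream out(path, std::ios::binary);
    int idx = 0;
    for (;;) {
        std::unique_lock<std::mutex> lk(mtx);
        cv.wait(lk, [&] { return slots[idx].full || done; });
        if (!slots[idx].full && done) break;
        std::vector<char> frame = std::move(slots[idx].data);
        slots[idx].full = false;
        lk.unlock();
        cv.notify_all();      // the application may refill this slot now

        std::vector<char> packed = fpga_compress(frame);
        out.write(packed.data(), static_cast<std::streamsize>(packed.size()));
        idx ^= 1;             // move on to the other half of the double buffer
    }
}

int main() {
    std::thread t(servant, "frames.bin");

    // Application thread: keeps writing frames into the same two buffers,
    // oblivious to the compression and disk I/O happening behind it.
    int idx = 0;
    for (int frame = 0; frame < 8; ++frame) {
        std::unique_lock<std::mutex> lk(mtx);
        cv.wait(lk, [&] { return !slots[idx].full; });
        slots[idx].data.assign(kFrameBytes, static_cast<char>(frame));
        slots[idx].full = true;
        lk.unlock();
        cv.notify_all();
        idx ^= 1;
    }

    {
        std::lock_guard<std::mutex> lk(mtx);
        done = true;          // let the servant drain remaining frames and exit
    }
    cv.notify_all();
    t.join();
    std::printf("wrote 8 frames through the double buffer\n");
    return 0;
}
```

The read path would mirror this sketch, with the servant thread prefetching frames from disk, decompressing them on the FPGA, and refilling the buffers ahead of the application's reads.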
https://www.sciencedirect.com/science/article/pii/S0743731524001199