Blueshift Memory is targeting big data and time-critical data applications; it has proven its model in an FPGA and is now on the hunt for funding to build a chip.
A Cambridge UK-based startup is looking to address the memory bottleneck (or tailback) in high-performance computing with a new memory chip design dedicated to handling large data sets and time-critical data.
Blueshift Memory, which was started by and currently consists of a team of three computer scientists, has successfully demonstrated its new memory model in a Xilinx FPGA. The company is now on the hunt for investors to fund the development of a chip.
We spoke to Peter Marosan, CTO of Blueshift Memory, to find out exactly what the company is trying to do. It is essentially optimizing the memory architecture so that large data sets and time-critical data can be handled more efficiently, speeding up memory access by up to 1,000 times for specific data-focused applications.
He said, “We use DRAM as the main memory (not Flash) and store the data in a different way inside the memory by changing the wiring in the memory module. Current memory modules are too general; our solution is targeted at time-critical data and large data sets.”
Marosan said the new memory design is one in a wave of radical changes in computer memory emerging from different companies around the world, all aiming to address the struggle of high-performance computers to keep pace with society’s spiraling data demands. He said computer memory chips (usually RAM) are simply not improving as quickly as central processing units (CPUs).
This creates a ‘tailback’ in processes where high-performance computers perform large-scale operations, like database searches with millions of possible outcomes. Data stacks up in a slow-moving queue between the CPU and the less-efficient memory. Blueshift said its new design reorganizes the way in which a memory chip handles these operations, so that it delivers data to the CPU much faster. With it, operations could take minutes, or even seconds, rather than hours.
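The effect of that tailback can be illustrated with a toy model (a deliberately simplified sketch, not Blueshift’s actual design): if the CPU can consume one data item per cycle but each memory fetch takes many cycles, total runtime is dominated by waiting on memory, and cutting memory latency shrinks it almost proportionally. The `MEM_LATENCY` figures below are hypothetical.

```python
# Toy simulation of the CPU-memory "tailback": the CPU needs one cycle
# per data item, but each item must first be fetched from memory, which
# takes mem_latency cycles. The CPU stalls while it waits.

def total_cycles(num_items, mem_latency):
    """Cycles to process num_items with a given memory fetch latency."""
    cycles = 0
    for _ in range(num_items):
        cycles += mem_latency  # stall: wait for memory to deliver the item
        cycles += 1            # CPU does its one cycle of useful work
    return cycles

# Hypothetical latencies: a fast memory (1 cycle) vs a slow one (100 cycles)
fast = total_cycles(1000, 1)    # -> 2,000 cycles
slow = total_cycles(1000, 100)  # -> 101,000 cycles, ~50x slower overall
```

The point of the sketch is that the CPU’s own speed barely matters here: almost all of the slow run is stall time, which is why faster or better-organized memory pays off so directly.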
Blueshift’s team analyzed and categorized thousands of algorithms used by companies to solve complex data problems. They then designed the chip so that it arranges data in preparation for these tasks — an approach that could be combined with any type of memory cell technology.
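Why pre-arranging data for a known task helps can be sketched with another toy model (again an illustration of the general principle, not Blueshift’s chip): memory is fetched in fixed-size lines, so a scan over values packed contiguously touches far fewer lines than the same scan over values scattered through memory. The line size and stride below are hypothetical.

```python
# Toy illustration of data arrangement: memory is delivered in fixed-size
# lines, so the number of distinct lines a scan touches determines how
# much memory traffic it generates.

LINE_SIZE = 8  # words per memory line (hypothetical)

def lines_touched(addresses, line_size=LINE_SIZE):
    """Count the distinct memory lines hit by a sequence of word addresses."""
    return len({addr // line_size for addr in addresses})

n = 1024
contiguous = range(n)              # values packed back-to-back for the scan
scattered = range(0, n * 16, 16)   # same values spread 16 words apart

packed_lines = lines_touched(contiguous)   # -> 128 lines
spread_lines = lines_touched(scattered)    # -> 1024 lines, 8x the traffic
```

Rearranging the scattered values into the contiguous layout before the scan cuts memory traffic by the full line-size factor, which is the kind of win a memory that understands the access pattern can deliver.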
Its initial model, implemented in an FPGA to emulate the chip’s effects, has yielded some impressive results, according to Blueshift. Simulations using this FPGA suggest that the chip could, for example, make searches of the vast databases used to match fragments of DNA in scientific research, or in criminal investigations, 100 times faster. In other tests, algorithms used in weather forecasting and climate change modeling also ran 100 times faster using the chip. Better memory chips could accelerate the more data-hungry aspects of home computing as well. Blueshift’s prototype makes rendering films in video editing software 10 times faster, for example; it could also improve the processing speeds of virtual reality headsets by a factor of up to 1,000.
The company said its design could also make it much easier to program some data operations, because it would remove the need to include complex instructions about how to handle the vast quantities of data involved. “It would make some big data programming as straightforward as the basic data searches that computing students learn to write in high school,” Marosan said.
For example, the artificial intelligence (AI) in autonomous vehicles needs to process huge quantities of data quickly to make decisions. Or in connected smart cities, fast, real-time data processing on a large scale will be essential to manage traffic flows, utility supplies, and even evacuation procedures in times of danger.
Once the company manages to validate its own chip, it then plans to license its technology to memory module manufacturers. Marosan said, “Our strategy is ready, our prototype is ready, and we know who our partners will be. We now just need the funding to create the chip.”
Marosan told us the company is currently in conversations with DRAM and FRAM manufacturers, and is also looking to speak with high-bandwidth memory (HBM) makers.