AI is just the newest and hungriest market for high-performance computing, and system architects are working around the clock to squeeze every last drop of performance out of every watt. Swedish startup ZeroPoint, armed with €5 million ($5.5 million) in new funding, wants to help them with a novel nanosecond-scale memory compression technique. And yes, it's exactly as complicated as it sounds.
The concept is this: losslessly compress data just before it enters RAM and decompress it afterward, effectively expanding the memory channel by 50% or more simply by adding a small block of logic to the chip.
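To make "transparent" concrete: in a scheme like this, the code that stores and loads a cache line never sees the compression step at all. Below is a minimal C sketch of that idea, using a deliberately trivial run-length encoder as a stand-in. All names here are hypothetical, and ZeroPoint's real technique is fixed-function logic on the memory path, not software like this.

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define LINE 64 /* one 512-bit cache line */

static uint8_t backing[2 * LINE]; /* compressed storage, worst case 2x */
static size_t stored_len;

/* "Store" side: compress on the way into memory as (run length, byte) pairs. */
static void mem_store(const uint8_t line[LINE]) {
    size_t o = 0;
    for (size_t i = 0; i < LINE;) {
        size_t run = 1;
        while (i + run < LINE && line[i + run] == line[i])
            run++;
        backing[o++] = (uint8_t)run;
        backing[o++] = line[i];
        i += run;
    }
    stored_len = o;
}

/* "Load" side: decompress on the way out; the caller sees the original bytes. */
static void mem_load(uint8_t line[LINE]) {
    size_t i = 0;
    for (size_t o = 0; o < stored_len; o += 2)
        for (uint8_t r = 0; r < backing[o]; r++)
            line[i++] = backing[o + 1];
}

int main(void) {
    uint8_t in[LINE] = {1, 2, 3}; /* rest is zero: sparse, like much real data */
    uint8_t out[LINE];

    mem_store(in);  /* the writer never sees the compression step... */
    mem_load(out);  /* ...and the reader gets the original line back */

    assert(memcmp(in, out, LINE) == 0); /* lossless round trip */
    printf("64-byte line held in %zu compressed bytes\n", stored_len);
    return 0;
}
```

The point of the toy model is the interface: mem_store and mem_load behave exactly like an ordinary store and load, which is what lets the rest of the system stay unchanged.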
Compression is, of course, a fundamental technology in computing. As ZeroPoint CEO Klas Moreau (left in the image above, with co-founders Per Stenström and Angelos Arelakis) noted: “Nowadays, we wouldn't store data on the hard drive without compressing it. Research suggests that 70% of the data stored in memory is unnecessary. So why don't we compress it in memory?”
The answer is that we don't have the time. Compressing a large file for storage (or encoding it, as we say when it's video or audio) is a task that can take seconds, minutes or hours depending on your needs. But data passes through memory in a tiny fraction of a second, moving in and out as fast as the CPU can manage. A delay of even a single microsecond, to remove the “unnecessary” bits from a packet of data entering the memory system, would be catastrophic for performance: at the multi-gigahertz clock speeds of modern processors, a microsecond is thousands of wasted cycles.
Memory doesn't necessarily advance at the same rate as CPU speed, although the two (along with many other components on the chip) are inextricably connected. If the processor is too slow, data backs up in memory; if memory is too slow, the processor wastes cycles waiting for the next pile of bits. Everything works in concert, as you would expect.
While super-fast memory compression has been demonstrated, it creates a second problem: essentially, you have to decompress the data just as fast as you compressed it, returning it to its original state, or the system will have no idea how to handle it. So unless you convert your entire architecture to this new compressed-memory mode, it's pointless.
ZeroPoint claims to have solved both problems with hyper-fast, low-level memory compression that requires no real changes to the rest of the computer system. You add their technology to your chip and it's like you've doubled your memory.
Although the nitty-gritty details will probably only be intelligible to people in the field, the basics are fairly easy to grasp for the uninitiated, as Moreau demonstrated when he explained it to me.
“What we do is take a very small amount of data (a cache line, sometimes 512 bits) and identify patterns in it,” he said. “It's the nature of data: it's full of not-so-efficient information, information that is sparsely distributed. It depends on the data: the more random it is, the less compressible it is. But when we look at most data loads, we see that we're in the range of 2 to 4 times (more data throughput than before).”
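Moreau didn't name the algorithm, but a classic pattern-exploiting scheme from the academic cache-compression literature is base-delta encoding: if the eight 64-bit words in a 512-bit line all sit near a common base value (pointers into the same region, counters, pixel values), you can keep one base and eight small deltas. The sketch below is illustrative only; the function names are mine, and nothing here says this is what ZeroPoint's silicon actually does.

```c
#include <stdint.h>
#include <stdio.h>

#define WORDS 8 /* 8 x 64 bits = one 512-bit cache line */

/* Try to encode the line as one 64-bit base plus eight 1-byte deltas
 * (16 bytes total vs. 64). Returns 1 on success, 0 if the line is too
 * "random" for this pattern, in which case it would be stored raw. */
static int base_delta_encode(const uint64_t line[WORDS],
                             uint64_t *base, int8_t deltas[WORDS]) {
    *base = line[0];
    for (int i = 0; i < WORDS; i++) {
        int64_t d = (int64_t)(line[i] - *base);
        if (d < INT8_MIN || d > INT8_MAX)
            return 0; /* delta won't fit: line is incompressible here */
        deltas[i] = (int8_t)d;
    }
    return 1;
}

/* Reverse the encoding, recovering the original words exactly. */
static void base_delta_decode(uint64_t base, const int8_t deltas[WORDS],
                              uint64_t line[WORDS]) {
    for (int i = 0; i < WORDS; i++)
        line[i] = base + (uint64_t)(int64_t)deltas[i];
}

int main(void) {
    /* Eight nearby pointers: a common real-world pattern. */
    uint64_t line[WORDS] = {0x7f0000400000, 0x7f0000400008, 0x7f0000400010,
                            0x7f0000400018, 0x7f0000400020, 0x7f0000400028,
                            0x7f0000400030, 0x7f0000400038};
    uint64_t base, out[WORDS];
    int8_t deltas[WORDS];

    if (base_delta_encode(line, &base, deltas)) {
        base_delta_decode(base, deltas, out);
        for (int i = 0; i < WORDS; i++)
            if (out[i] != line[i])
                return 1; /* must be lossless */
        printf("compressed 64 bytes to %zu\n", sizeof base + sizeof deltas);
    } else {
        printf("incompressible line, stored raw\n");
    }
    return 0;
}
```

On the pointer-heavy line above, 64 bytes become 16, squarely in the 2 to 4 times range Moreau describes; a line of random bytes would fail the delta check and be stored raw, which is his “more random, less compressible” tradeoff in action.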
It's no secret that memory can be compressed. Moreau said everyone in large-scale computing knows about the possibility (he showed me a 2012 paper demonstrating it) but has more or less dismissed it as academic, impossible to implement at scale. ZeroPoint, he said, has solved the problems of compaction (reorganizing compressed data so it packs even more efficiently) and transparency, so the technology not only works but works seamlessly in existing systems. And it all happens in a handful of nanoseconds.
“Most compression technologies, both software and hardware, are on the order of thousands of nanoseconds. CXL (Compute Express Link, a high-speed interconnect standard) can take that down to hundreds,” Moreau said. “We can take it down to 3 or 4.” For context, that is a small fraction of the tens of nanoseconds a DRAM access itself already takes.
[Embedded video: CTO Angelos Arelakis explaining it his way.]
ZeroPoint's debut is certainly timely, as companies around the world hunt for faster, cheaper compute to train the next generation of AI models. Most hyperscalers (if we must call them that) are keen on any technology that can give them more computing power per watt or shave a little off the electricity bill.
The main caveat to all of this is simply that, as mentioned, it needs to be built into the chip and integrated from the ground up; you can't just plug a ZeroPoint dongle into the rack. To that end, the company is working with chipmakers and system integrators to license the technique and hardware design into standard chips for high-performance computing.
Of course, that means your Nvidias and your Intels, but increasingly also companies like Meta, Google and Apple, which have designed custom hardware to run their AI and other high-cost workloads in-house. ZeroPoint is positioning its technology as a cost saving, though, not a premium: arguably, by effectively doubling memory, the technology pays for itself in short order.
The €5 million A round that just closed was led by Matterwave Ventures, with Industrifonden acting as the local Nordic lead and existing investors Climentum Capital and Chalmers Ventures contributing as well.
Moreau said the money should allow the company to expand into U.S. markets, as well as double down on the Swedish ones it is already pursuing.