Researchers from the École Polytechnique Fédérale de Lausanne (EPFL) have created the world’s largest-scale in-memory processor based on a two-dimensional semiconductor material — a step on the path to more efficient computation, particularly for compute-hungry machine learning (ML) and artificial intelligence (AI) on devices at the edge of the Internet of Things (IoT).
“Led by the rise of the internet of things, the world is experiencing exponential growth of generated data. Data-driven algorithms such as signal processing and artificial neural networks are required to process and extract meaningful information from it,” the research team explains in the abstract to its paper.
A 2D semiconductor built into floating-gate transistors (renders a, b, optical photograph c) could spell a path to more efficient computing at the edge. (📷: Marega et al)
“They are, however, seriously limited by the traditional von Neumann architecture with physical separation between processing and memory, motivating the development of in-memory computing,” the team continues. “This emerging architecture is gaining attention by promising more energy-efficient computing on edge devices.”
In traditional von Neumann computing, named for John von Neumann’s contributions to the 1945 paper First Draft of a Report on the EDVAC, the machine is made up of a central processing unit connected to a separate memory unit — and every single bit that gets processed needs to move from the external memory unit to the processor and back again.
Add to that a bottleneck in the design, which prevents data operations and instruction fetches from happening simultaneously, and there is clear room for efficiency improvements, which is where in-memory computing comes in.
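The cost of that separation can be sketched in a few lines. This toy model (an illustration, not anything from the paper) counts how many times operands cross the CPU-memory bus during a simple multiply-accumulate loop; in a von Neumann machine, every operand makes that trip.

```python
# Toy illustration of the von Neumann bottleneck: every operand must
# cross the memory bus before the CPU can use it, so bus transfers
# grow with the amount of data processed.

transfers = 0

def load(memory, addr):
    """Simulated memory read over the CPU-memory bus."""
    global transfers
    transfers += 1
    return memory[addr]

weights = [0.5, -1.0, 2.0, 0.25]
inputs = [1.0, 2.0, 3.0, 4.0]

acc = 0.0
for i in range(len(weights)):
    # Two bus transfers per step: one per operand.
    acc += load(weights, i) * load(inputs, i)

print(transfers)  # 8 transfers for a 4-element dot product
```

An in-memory processor sidesteps this traffic entirely by computing where the operands already live.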
The initial prototype has shown worth as an accelerator for vector-matrix multiplication operations. (📷: Marega et al)
An in-memory processor, as the name implies, does its work directly on the stored data without having to move it to a central processor and back again. “Today, there are ongoing efforts to merge storage and processing into a more universal in-memory processor that contains elements which work both as a memory and as a transistor,” explains project lead Andras Kis. In the case of Kis and colleagues, the material of choice is molybdenum disulfide (MoS₂), which the team has used to build a two-dimensional transistor — the initial prototype of which was formed from a layer of MoS₂ peeled from a crystal using Scotch tape.
The team’s large-scale chip serves as a proof of concept for these novel transistors, combining 1,024 of them with floating gates that act as a memory — directly controlling the conductivity of the transistor to which they’re attached. “By setting the conductivity of each transistor, we can perform analog vector-matrix multiplication in a single step by applying voltages to our processor and measuring the output,” Kis says, with the team demonstrating its use for vector-matrix multiplication and signal processing.
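The idea behind that single-step operation can be sketched numerically. In this hypothetical model (the array size matches the 1,024-transistor chip, but the values are arbitrary), each floating-gate transistor is programmed to a conductance G[i][j]; applying an input voltage vector V yields output currents I = G·V in one analog step, because Kirchhoff's current law sums the products on each output line for free.

```python
import numpy as np

rng = np.random.default_rng(0)

# 32x32 conductance matrix, matching the chip's 1,024-transistor array.
G = rng.uniform(0.0, 1.0, size=(32, 32))  # conductances (arbitrary units)
V = rng.uniform(0.0, 0.5, size=32)        # input voltages

# The analog chip produces all 32 output currents in a single step.
I = G @ V

# A conventional processor computes the same result as 32x32 sequential
# multiply-accumulate operations, each needing its operands fetched
# from memory.
I_sequential = np.array(
    [sum(G[i, j] * V[j] for j in range(32)) for i in range(32)]
)

assert np.allclose(I, I_sequential)
```

The physics does the arithmetic: programming the conductances is equivalent to storing the matrix, so the multiplication happens where the data already sits.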
“This functionality and integration,” the team claims in conclusion, “represent a milestone for in-memory computing, allowing in-memory processors to reap all the benefits of 2D materials and [bring] new functionality to edge devices for the Internet of Things.”
The team has also used the prototype as a discrete signal processor with, it claims, high reliability. (📷: Marega et al)
It’s the scale of the device which is a breakthrough for the team: with 1,024 transistors on the chip forming a 32×32 vector-matrix multiplier, the researchers’ device is the largest yet built. Reaching that scale didn’t require a radical redesign, though: “The key advance in going from a single transistor to over 1,000 was the quality of the material that we can deposit,” Kis explains.
“After a lot of process optimization, we can now produce entire wafers covered with a homogeneous layer of uniform MoS₂. This lets us adopt industry standard tools to design integrated circuits on a computer and translate these designs into physical circuits, opening the door to mass production.”
The team’s work has been published under closed-access terms in the journal Nature Electronics; an open-access preprint is available on Cornell’s arXiv server.
Main article image courtesy of Alan Herzot/EPFL.