
The Von Neumann Architecture Currently

4 min read · Sep 5, 2025

If you follow computer architecture trends closely, an interesting pattern is emerging: the integration of neural processing and reinforcement learning into the components of the Von Neumann architecture. There are now state-of-the-art neural prefetchers, neural cache replacement policies, neural memory controllers, and more. This feels like the final form of the Von Neumann architecture: once optimal lightweight neural networks and reinforcement learning algorithms are encoded in the hardware, it becomes hard to innovate further in the area, and the only remaining optimization would be squeezing as much energy efficiency, accuracy, and predictive power as possible out of these algorithms.
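
To make the idea concrete, here is a minimal sketch of a perceptron-style predictor, the kind of lightweight learned component that hardware branch predictors and reuse predictors build on. This is purely illustrative Python: real hardware designs use small saturating fixed-point weights stored in SRAM tables, and the feature vectors and threshold below are assumptions, not any shipping design.

```python
# Minimal sketch of a perceptron-style hardware predictor (illustrative only).
# Real implementations use a few kilobytes of SRAM and fixed-point weights.

class PerceptronPredictor:
    def __init__(self, num_features, threshold=3):
        # One small weight per input feature, plus a bias.
        self.weights = [0] * num_features
        self.bias = 0
        self.threshold = threshold  # training threshold (adds hysteresis)

    def predict(self, features):
        # features: list of +1/-1 inputs (e.g., PC bits, access-history bits).
        # A positive sum means "predict reuse" (or "predict taken").
        s = self.bias + sum(w * x for w, x in zip(self.weights, features))
        return s > 0, s

    def train(self, features, outcome):
        # outcome: True if the cache line was actually reused (or branch taken).
        pred, s = self.predict(features)
        target = 1 if outcome else -1
        # Update on a misprediction or a low-confidence correct prediction.
        if pred != outcome or abs(s) <= self.threshold:
            self.bias += target
            self.weights = [w + target * x
                            for w, x in zip(self.weights, features)]


if __name__ == "__main__":
    # Toy usage: learn that one (hypothetical) access signature is reused
    # while another is streaming and never reused.
    predictor = PerceptronPredictor(num_features=4)
    reused_sig = [+1, -1, +1, -1]
    streaming_sig = [-1, +1, -1, +1]
    for _ in range(20):
        predictor.train(reused_sig, outcome=True)
        predictor.train(streaming_sig, outcome=False)
    print(predictor.predict(reused_sig))     # expected: (True, positive score)
    print(predictor.predict(streaming_sig))  # expected: (False, negative score)
```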

But why is this the case? Why do we need to implement neural network algorithms in these components? The problem is that the Von Neumann architecture is fundamentally limited. It is a straightforward and conceptually elegant architecture that worked very well for static workloads with static access patterns; it was excellent for the EDVAC (Electronic Discrete Variable Automatic Computer). But as computers grew more complex, software became more sophisticated and differentiated, and hardware became more diverse, it turned into a huge bottleneck. As computers evolved, with different programming languages, instruction set architectures, and cache behaviors, new variations in access patterns emerged. The underlying architecture should have evolved to accommodate these changes, yet it remained stagnant; instead of exploring new architectures that matched the trajectory of computing, we kept making minor tweaks that slightly improved IPC. Now we have reached its final form.

Onur Mutlu has stated in several papers and talks that we need more intelligent architectures that understand data and its movement. Von Neumann places the CPU at the center of the architecture, and in retrospect a question arises: were we ever meant to focus on the CPU? The CPU performs calculations and is often regarded as the heart of the computer, but that isn't entirely accurate. Computers do perform calculations, but more importantly, they move data. A computer is a data flow engine, and our focus should be on optimizing data flow; Onur Mutlu has consistently emphasized that we need computers that understand the movement of data and are optimized for it.

Inventions like Processing-In-Memory (PIM) and Processing-Near-Memory (PNM) are direct responses to the inefficiencies of the Von Neumann architecture. This may or may not be intentional on the part of the researchers, but either way, these technologies challenge the CPU's role as the centerpiece of the system and implicitly acknowledge that the primary bottleneck has always been data movement.
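
To see why moving compute toward memory helps, here is a back-of-the-envelope sketch comparing the off-chip traffic for a simple reduction under the conventional compute-centric model versus an idealized PIM model. The array size and element width are assumptions for illustration, not measurements of any real PIM device.

```python
# Back-of-the-envelope comparison of off-chip traffic for summing a large
# array: conventional (ship every element to the CPU) vs. an idealized
# PIM device (sum in/near memory, ship only the result back).
# All sizes are illustrative assumptions.

NUM_ELEMENTS = 1_000_000_000   # hypothetical 1B-element array
ELEMENT_BYTES = 8              # 64-bit values

def conventional_traffic_bytes(n, elem_bytes):
    # Every element crosses the memory bus to reach the CPU.
    return n * elem_bytes

def pim_traffic_bytes(result_bytes=8):
    # The reduction happens in/near the DRAM; only the final sum moves.
    return result_bytes

if __name__ == "__main__":
    conv = conventional_traffic_bytes(NUM_ELEMENTS, ELEMENT_BYTES)
    pim = pim_traffic_bytes()
    print(f"conventional: {conv / 1e9:.1f} GB moved across the bus")
    print(f"idealized PIM: {pim} bytes moved across the bus")
    print(f"reduction in data movement: {conv / pim:.0e}x")
```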

We need a new architecture that treats memory as the primary component of the system. We have data on the number of cycles a memory access takes, and we have reached a point where a computer is significantly bottlenecked by memory accesses. We even developed caches to compensate for this bottleneck, yet the issue remains evident in modern systems running software with sporadic, nonlinear access patterns. The Von Neumann architecture was one of the first stored-program architectures conceived, and we have never really replaced it: it was described for the EDVAC in von Neumann's 1945 First Draft, and it is still being used in computers built in 2025. That is strange, because other parts of the computer system have kept evolving: we have new Instruction Set Architectures (ISAs), a wide variety of software, and new kinds of accelerators, yet for some reason we kept the Von Neumann architecture. Reinforcement learning techniques now hide the flaws of the architecture and let us squeeze significant efficiency out of it, but the flaws are still present, because the architecture itself is not ideal for the amount of data movement that occurs in a modern computer.
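
As a rough illustration of how much memory latency dominates, here is a small calculation using the standard average memory access time formula, AMAT = hit time + miss rate × miss penalty. The cycle counts and miss rates below are representative assumptions, not measurements of any particular processor.

```python
# Rough illustration of the memory bottleneck using the standard AMAT
# formula: AMAT = hit_time + miss_rate * miss_penalty.
# Cycle counts and miss rates are representative assumptions only.

L1_HIT_CYCLES = 4           # assumed L1 cache hit latency
DRAM_PENALTY_CYCLES = 200   # assumed penalty for going all the way to DRAM

def amat(hit_time, miss_rate, miss_penalty):
    """Average memory access time in cycles."""
    return hit_time + miss_rate * miss_penalty

if __name__ == "__main__":
    # A regular, cache-friendly workload vs. a pointer-chasing workload
    # with sporadic, nonlinear access patterns.
    workloads = [
        ("cache-friendly (2% miss rate)", 0.02),
        ("pointer-chasing (30% miss rate)", 0.30),
    ]
    for label, miss_rate in workloads:
        cycles = amat(L1_HIT_CYCLES, miss_rate, DRAM_PENALTY_CYCLES)
        print(f"{label}: ~{cycles:.0f} cycles per memory access on average")
```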

Von Neumann is a hero of mine; his creations and ideas have revolutionised the world, and I'm not insisting that the Von Neumann architecture is nonsense. When he wrote the First Draft of a Report on the EDVAC, he conceptualized this architecture, and for the basic components of the EDVAC it was the exemplary architecture; but as computation evolved, computer architectures needed to grow with it. We need to move beyond outdated foundations and toward a more innovative and adaptable architecture built around memory. I hope I've provided a few ideas that help you think critically about the Von Neumann architecture and possibly explore alternatives that could take computer architecture to its next phase.

rustian ⚡️
