
An experiment with a 1998 processor shows that 128 MB of RAM is enough to use AI today

An artificial intelligence model successfully running on an old computer.


Artificial intelligence has been on everyone's lips in recent years, and some manufacturers, such as Nvidia, are generating huge profits thanks to hardware designed specifically for this purpose. Take, for example, the Blackwell B200 chip, built for training AI models and priced between $30,000 and $40,000. This AI frenzy has also pushed the prices of basic components such as RAM and graphics cards to ridiculous levels.

That is precisely why what a team of Oxford University researchers has achieved with extremely old and limited hardware is all the more remarkable. EXO Labs, an organization founded by researchers from this university, shared a video on X (formerly Twitter) showing a PC with a 350 MHz Pentium II processor, 128 MB of RAM, and Windows 98 successfully running an artificial intelligence model.

A step towards more accessible AI

The experiment ran an AI model with a total of 260,000 parameters on this computer at a speed of 39.31 tokens per second. What are tokens? They are the units that artificial intelligence models use, among other things, to process and understand natural language. The more tokens per second a system can generate, the faster it can respond to requests.
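To put that figure in perspective, here is a minimal, purely illustrative Python sketch (not code from EXO Labs) that estimates how long a reply of a given length would take at the reported rate of 39.31 tokens per second:

    # Illustrative only: estimate response time from a tokens-per-second rate.
    TOKENS_PER_SECOND = 39.31  # throughput reported in the EXO Labs demo

    def response_time(num_tokens: int, rate: float = TOKENS_PER_SECOND) -> float:
        """Return the number of seconds needed to generate num_tokens at the given rate."""
        return num_tokens / rate

    # A short answer of roughly 100 tokens would take about 2.5 seconds on this machine.
    print(f"{response_time(100):.2f} s")

By this rough estimate, even the 27-year-old hardware can produce a short answer in a few seconds.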

This type of initiative opens the door to more democratized access to artificial intelligence. Demonstrating that it is possible to run functional models with very modest hardware requirements could have a significant impact on its adoption and development in the long term.

