
An artificial intelligence research laboratory at Stanford University has selected the decentralized cloud computing platform Theta EdgeCloud to support its work on large language models (LLMs).
Decentralized cloud platforms are being positioned as one answer to AI's considerable computing demands. On April 17, Theta Labs announced that Stanford's AI research team would use Theta (THETA) EdgeCloud to advance its large language model research. The lab, led by Assistant Professor Ellen Vitercik, plans to use the platform for work on discrete optimization and algorithmic reasoning with LLMs.
Stanford joins a growing number of academic institutions using the decentralized platform for research. According to Theta Labs, other EdgeCloud users include Seoul National University, Korea University, the University of Oregon, Michigan State University, and several others.
Major Tech Firms and Decentralized Services Compete for AI Computing Resources
Major technology firms, by contrast, are pouring capital into centralized infrastructure. Amazon has revealed plans to invest $11 billion in data centers in Indiana, while Google is expanding its global footprint with $1.1 billion for its data center in Finland and a new $2 billion facility under construction in Malaysia.
The big-tech approach is not the only model competing for AI workloads, however. Unlike conventional centralized cloud services, Theta EdgeCloud operates as a decentralized computing platform: its infrastructure is distributed across many geographical locations, reducing dependence on large centralized data centers for computing capacity.
The platform uses blockchain technology to reward smaller GPU providers based on the revenue earned from end-users. This model lets Theta keep capital expenditures low and scale rapidly, ultimately offering users more cost-effective infrastructure.
The Theta Network is a blockchain protocol originally developed for decentralized video streaming. Over time, it has evolved to provide decentralized cloud computing infrastructure, with a particular focus on AI applications.