Near AI x HZN – Decentralized Compute
We’re completing our coverage of Near Horizon’s first-ever AI cohort, closing with a focus on decentralized compute. Previously, we highlighted the importance of data, foundational models, and AI payment infrastructure to a thriving AI ecosystem.
Near is building a decentralized AI ecosystem to support User-Owned AI. Hyperbolic and Exabits were chosen for Near’s initial AI cohort to seed the ecosystem with the computation it needs to thrive. Access to sufficient GPU power is a significant cost and barrier for AI startups, and decentralized compute addresses this problem.
Hyperbolic
Hyperbolic is building a decentralized AI cloud to address the GPU shortage and enable open access to AI. Hyperbolic aims to achieve this by focusing on three major challenges:
- System: Developing a scalable decentralized system to aggregate global GPU compute from various sources and create AI services to maximize the performance of all GPU types.
- Verification: Ensuring that the outputs from the network are generated by the correct AI model (for example, Llama 2-70B instead of Llama 3-70B).
- Privacy: Safeguarding users’ data and privacy even when requests are handled by a random third-party node.
Universal Compatibility and Performance
Hyperbolic’s scalable backend architecture manages a vast network of globally distributed GPUs, delivering scalability along with ~75% cost savings compared to traditional cloud providers. In addition, Hyperbolic’s suite of optimized decentralized AI services matches or even surpasses the speed of centralized solutions like Together.ai and Hugging Face.
A key differentiator for Hyperbolic is its ability to handle a diverse set of GPUs, with universal compatibility across low-level compute platforms (CUDA, ROCm) and high-level frameworks (PyTorch, TensorFlow). Hyperbolic’s decentralized operating system abstracts away these differences so that developers can achieve high performance without worrying about the complexity under the hood. This approach enables Hyperbolic to efficiently utilize hardware from all major manufacturers (NVIDIA, AMD, Intel, Apple), reducing dependency on any single chip type and alleviating shortages.
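To make the idea of hardware abstraction concrete, here is a minimal sketch of device-agnostic code using standard PyTorch, which already exposes NVIDIA (CUDA), AMD (ROCm builds), and Apple Silicon (MPS) hardware through a common interface. This is an illustration of the general pattern only, not Hyperbolic’s actual orchestration layer, which schedules work across a distributed network rather than a single node.

```python
# Minimal sketch of device abstraction in the spirit described above.
# Uses only standard PyTorch APIs; Hyperbolic's scheduling across a
# decentralized GPU network is far more involved than this.
import torch

def pick_device() -> torch.device:
    """Return the best available accelerator on this node.

    PyTorch exposes NVIDIA CUDA and AMD ROCm through the same `cuda`
    backend, and Apple Silicon through `mps`, so one code path can
    cover hardware from several manufacturers.
    """
    if torch.cuda.is_available():          # NVIDIA (CUDA) or AMD (ROCm builds)
        return torch.device("cuda")
    if torch.backends.mps.is_available():  # Apple Silicon GPUs
        return torch.device("mps")
    return torch.device("cpu")             # fallback

device = pick_device()
model = torch.nn.Linear(4096, 4096).to(device)   # toy workload
x = torch.randn(8, 4096, device=device)
y = model(x)                                     # runs on whichever device was found
print(f"ran on {device}, output shape {tuple(y.shape)}")
```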
Verification & Trust
In a decentralized network, third-party nodes might be tempted to cheat: how do you know your output came from the intended model? To ensure integrity, Hyperbolic introduces Proof of Sampling (PoSP) and Sampling Machine Learning (spML). The idea is to keep the speed benefits of an optimistic approach while deterring bad behavior: outputs are randomly sampled and verified, and a harsh penalty is imposed when misbehavior is discovered. This design encourages honesty and keeps spML scalable and fast, making it suitable for decentralized AI applications that require rapid processing and robust security.
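As a rough illustration of the optimistic-plus-sampling idea, the sketch below accepts responses by default, re-verifies a random fraction, and slashes stake when a check fails. The parameter values, data structures, and function names are illustrative assumptions, not the actual PoSP/spML protocol.

```python
# Simplified illustration of optimistic execution with random spot checks and
# slashing, in the spirit of PoSP/spML. All parameters and structures here are
# illustrative assumptions, not Hyperbolic's actual protocol.
import random
from dataclasses import dataclass

SAMPLE_RATE = 0.05      # fraction of responses independently re-verified
PENALTY = 100.0         # stake slashed when a sampled response is wrong
REWARD = 1.0            # payment per accepted response

@dataclass
class Node:
    stake: float
    honest: bool

def serve(node: Node, prompt: str) -> str:
    # An honest node runs the requested model; a cheater substitutes a cheaper one.
    return f"correct({prompt})" if node.honest else f"cheap({prompt})"

def settle(node: Node, prompt: str, response: str, reference_infer) -> None:
    """Accept the response optimistically, but re-verify a random sample."""
    if random.random() < SAMPLE_RATE:
        if response != reference_infer(prompt):   # verifier reruns the request
            node.stake -= PENALTY                  # harsh penalty on detection
            return
    node.stake += REWARD                           # otherwise pay out normally

# A cheater's expected value per request is REWARD*(1-p) - PENALTY*p, which is
# negative whenever PENALTY > REWARD*(1-p)/p -- with a 5% sample rate, any
# penalty above ~19x the reward already makes cheating unprofitable.
```

Because only a small fraction of requests is re-verified, honest nodes pay almost no overhead, while a sufficiently large penalty makes cheating a losing strategy in expectation.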
Privacy
Privacy is a critical topic in AI. In Hyperbolic’s view, preventing user data from being exposed to random third-party nodes is a prerequisite for businesses to confidently choose a decentralized cloud over traditional providers like AWS, GCP, and Azure. By combining technologies like spML and trusted execution environments (TEEs) with a marketplace model, Hyperbolic is working to create a more accessible, secure, and cost-effective AI infrastructure.
As the AI landscape continues to evolve, Hyperbolic’s focus on trust, privacy, and efficiency positions them at the forefront of the decentralized AI movement. Their innovations in verification mechanisms and confidential computing demonstrate a commitment to solving critical issues in the field, potentially reshaping how AI services are delivered and consumed in the future.
Exabits
Exabits is transforming physical GPU infrastructure into liquid financial assets by tokenizing GPU compute. Investors can purchase EGPU tokens representing GPU capacity, and Exabits monetizes the underlying hardware through enterprise-grade compute services, generating revenue. This creates a liquid market for GPU compute, allowing anyone to participate in the AI compute economy.
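As a toy illustration of how tokenized capacity could translate into yield, the sketch below distributes compute revenue pro rata to EGPU holders. The fee split, amounts, and function names are placeholder assumptions, not Exabits’ actual tokenomics.

```python
# Toy sketch of revenue from enterprise compute flowing back to holders of
# tokenized GPU capacity, pro rata to their share. The mechanics and fee split
# are placeholder assumptions, not Exabits' actual design.
def distribute_revenue(holdings: dict[str, float], revenue: float,
                       protocol_fee: float = 0.10) -> dict[str, float]:
    """Split `revenue` among EGPU holders in proportion to tokens held."""
    distributable = revenue * (1.0 - protocol_fee)   # assumed operator fee
    total_tokens = sum(holdings.values())
    return {holder: distributable * tokens / total_tokens
            for holder, tokens in holdings.items()}

payouts = distribute_revenue({"alice": 600.0, "bob": 400.0}, revenue=10_000.0)
print(payouts)   # {'alice': 5400.0, 'bob': 3600.0}
```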
Technology And Domain Experts
The Exabits team’s expertise in building and monetizing GPU infrastructure ensures a cost-effective, high-performance AI compute financialization platform. Proprietary software and hardware reduce operating costs and boost compute output, maximizing revenue for all participants.
Exabits has unique access to enterprise-grade H100 and A100 GPUs (and soon B200s) due to its extensive data center experience. The team includes tier 1 resellers of Nvidia, AMD, and SuperMicro, with over $2 billion in deals for data centers and mining facilities.
Unlike CPUs in traditional clouds, GPUs are not naturally suited to being shared by multiple users. Exabits has developed a leading GPU management platform with multi-tenant resource management, deep monitoring, adaptive allocation, and real-time scaling, so users can access GPU resources instantly.
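The sketch below illustrates the basic bookkeeping behind multi-tenant allocation and real-time scaling: tenants request GPUs from a shared cluster and release them back to the pool. It is a simplified model of the concept; Exabits’ production platform (isolation, monitoring, adaptive allocation) is considerably more involved, and all names here are illustrative.

```python
# Minimal sketch of multi-tenant GPU allocation with real-time scaling.
# Names and structure are illustrative, not Exabits' platform.
from dataclasses import dataclass, field

@dataclass
class Cluster:
    name: str
    total_gpus: int
    allocations: dict[str, int] = field(default_factory=dict)  # tenant -> GPUs

    @property
    def free_gpus(self) -> int:
        return self.total_gpus - sum(self.allocations.values())

    def allocate(self, tenant: str, gpus: int) -> bool:
        """Grant GPUs instantly if capacity exists (hardware-level isolation
        such as MIG/vGPU partitioning is assumed to happen below this layer)."""
        if gpus > self.free_gpus:
            return False
        self.allocations[tenant] = self.allocations.get(tenant, 0) + gpus
        return True

    def release(self, tenant: str, gpus: int) -> None:
        """Scale a tenant down and return capacity to the shared pool."""
        self.allocations[tenant] = max(0, self.allocations.get(tenant, 0) - gpus)

cluster = Cluster("h100-pod-1", total_gpus=64)
cluster.allocate("team-a", 8)
cluster.allocate("team-b", 16)
cluster.release("team-a", 4)
print(cluster.free_gpus)   # 44
```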
Full-stack Capability for High-Performance GPU Clusters
Beyond procuring rare GPU hardware, GPU clusters require specialized data centers: H100 servers need Tier 3 (T3) data centers that meet specific specifications. Few facilities can host over 1,000 H100 GPUs, and those that can impose strict partner requirements. Exabits is uniquely positioned to secure these high-end data centers and deploy large-scale H100 GPU clusters.
GPU cloud deployment and operation are complex, requiring reliable infrastructure and efficient backend networks for extended operations across thousands of GPUs. Optimal performance demands significant system-level optimization and extensive experience. As a result, several crypto GPU projects are discussing collaborations in which Exabits handles data center requirements, builds H100 service clusters, and leases them to these projects.
Cost-Effective Solutions with Consumer-Level GPUs
In addition to rare data center-grade GPUs like the H100/A100, high-end consumer-level GPUs offer strong computational performance at a relatively lower price. Exabits has developed unique technology to accelerate consumer-level GPUs, enabling them to perform similarly to H100/A100 in some cases, offering a cost-effective alternative for many inference scenarios.
This capability allows Exabits to utilize numerous existing high-performance consumer GPUs, providing cost-effective computing services to clients. This is crucial, as many companies face long waiting lists for enterprise-grade GPUs or experience bottlenecks with other cloud compute providers.
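For readers who want to reason about this trade-off, the helper below computes cost per million generated tokens from an hourly GPU price and a sustained throughput. The input numbers are purely hypothetical placeholders, not measured figures for any specific GPU or for Exabits’ service.

```python
# Back-of-the-envelope helper for comparing inference cost-effectiveness across
# GPU classes. The prices and throughputs below are purely illustrative
# placeholders -- real numbers depend on the model, batch size, and market rates.
def cost_per_million_tokens(hourly_price_usd: float, tokens_per_second: float) -> float:
    """USD to generate one million tokens at a sustained throughput."""
    tokens_per_hour = tokens_per_second * 3600
    return hourly_price_usd / tokens_per_hour * 1_000_000

# Hypothetical inputs: a data-center GPU vs. an accelerated consumer GPU.
for name, price, tps in [("datacenter-gpu", 2.50, 900.0),
                         ("consumer-gpu",   0.40, 250.0)]:
    print(f"{name}: ${cost_per_million_tokens(price, tps):.2f} per 1M tokens")
```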
Wrapping Up
With Hyperbolic and Exabits as key components of Near’s burgeoning AI ecosystem, AI projects will have access to some serious horsepower to drive their models. Near plans to integrate Hyperbolic with its own inference router, while Exabits will offer its compute marketplace to the Near ecosystem.
Near Horizon just finished up its first AI incubation cohort and is looking to launch its next cohort soon! Follow their socials to stay up to date with all the happenings at Near and Near Horizon.
This article has been written and prepared by the GCR Research team in collaboration with Near Foundation/Near Horizon. Committed to staying current with industry developments and providing accurate and valuable information, GlobalCoinResearch.com is a trusted source for insightful news, research, and analysis.
Disclaimer: Investing carries with it inherent risks, including but not limited to technical, operational, and human errors, as well as platform failures. The content provided is purely for educational purposes and should not be considered as financial advice. The authors of this content are not professional or licensed financial advisors and the views expressed are their own and do not represent the opinions of any organization they may be affiliated with.