#003 Monday Special - AI Memory Chips, OpenAI & Scale AI, India's GPU Shortage
Fresh & Hot curated AI happenings in one snack. Never miss a byte 🍔
This snack byte will take approx 7 minutes to consume
AI BYTE # 1 📢 : How SK Hynix (South Korea) Is Powering the AI Revolution with Its Memory Chips
⭐ Artificial intelligence (AI) is transforming the world of computing, and memory chips are playing a crucial role in this revolution.
One of the companies that has been at the forefront of developing high-performance memory chips for AI applications is SK Hynix, a South Korean firm that is the main provider of high-bandwidth memory (HBM) for Nvidia’s top-line AI processor chip.
HBM is a type of memory chip that stacks multiple layers of dynamic random-access memory (DRAM) on top of each other, allowing faster and more efficient data transfer between the processor and the memory.
This is essential for AI applications that must process massive amounts of data in real time: generative AI tools such as ChatGPT continuously pull vast amounts of data from memory chips into the processing units for computation.
SK Hynix was one of the first companies to invest in HBM technology, starting in 2010; it partnered with AMD to bring the first HBM product to market in 2013.
Since then, SK Hynix has continued to innovate and improve its HBM products, launching its fourth-generation version this year, which stacks 12 layers of DRAM and can process the equivalent of 230 full-HD movies in a second.
The feat required inventing new ways of stacking and fusing the chips together. For the 12-layer version, SK Hynix uses a liquid material to fill the gaps between the layers, replacing the conventional method of applying a thin film between each layer.
In its latest stacking process, the firm uses intense heat to ensure the chip layers fit evenly together and compresses them with 70 tons of pressure to fill the gaps.
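As a rough back-of-envelope check on the "230 full-HD movies in a second" claim, the figures below assume one full-HD movie is about 5 GB, the size SK Hynix has used in its own press comparisons; the movie size is an assumption for illustration, not a figure from this article.

```python
# Back-of-envelope check of the HBM bandwidth claim.
# Assumption (for illustration): one full-HD movie ~ 5 GB, the size
# commonly used in SK Hynix's own press comparisons.

MOVIE_SIZE_GB = 5          # assumed size of one full-HD movie
MOVIES_PER_SECOND = 230    # figure quoted for the 12-layer HBM part

bandwidth_gb_s = MOVIE_SIZE_GB * MOVIES_PER_SECOND
print(f"Implied bandwidth: {bandwidth_gb_s} GB/s (~{bandwidth_gb_s / 1000:.2f} TB/s)")
# -> Implied bandwidth: 1150 GB/s (~1.15 TB/s)
```

Around 1.15 TB/s per stack is why HBM, rather than conventional DRAM, sits next to AI processors.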
SK Hynix’s bet on HBM has paid off, as the demand for AI memory chips has grown rapidly in recent years. The company’s stock price has risen by almost 60% since the start of the year, despite a downturn in the broader memory chip market.
SK Hynix also expects its HBM revenue for 2023 to grow more than 50%, as it supplies its products to major AI chip makers such as Nvidia, AMD, and Intel.
However, SK Hynix also faces fierce competition from its rivals, especially Samsung Electronics, which is also developing its own next-generation HBM products and plans to double its production by 2024.
SK Hynix will have to maintain its edge in technology, quality, and mass production to keep its lead in the AI memory chip market.
SK Hynix’s story shows how a company can become a leader in a new and emerging field by making bold and strategic investments in innovation.
It also demonstrates how memory chips are no longer just supporting players in computing but are becoming key enablers of AI development and applications.
AI BYTE # 2 📢 : OpenAI Is Shifting Its Focus From ChatGPT To Enterprise AI with Scale AI
⭐ OpenAI, the research organization behind ChatGPT and GPT-4, has recently announced a new partnership with Scale AI, a data labelling company that helps enterprises build and apply AI models.
This partnership will allow Scale customers to fine-tune OpenAI models using their own data, which can improve the performance and accuracy of the models for specific use cases.
Why is this partnership important?
ChatGPT, the popular chatbot service from OpenAI, has been losing its appeal among users due to its limited and outdated information, lack of new features, and data privacy concerns.
OpenAI seems to be more interested in serving businesses than consumers, as it has been releasing fine-tuning APIs for its existing models GPT-3, GPT-3.5, GPT-3.5 Turbo and GPT-4. The partnership with Scale AI is another indication of this shift.
Scale AI is a trusted partner for enterprise AI. Founded in 2016 by Alexander Wang and Lucy Guo, the company has been providing data labelling and reinforcement learning services to enterprises that want to build their own AI models or apply foundation models to their business problems.
Scale AI claims that its Data Engine improves the data quality and hence the model performance. Scale AI has worked with companies like Brex, Airbnb, Pinterest, and Lyft, and has also collaborated with OpenAI before on fine-tuning GPT-3.5.
Fine-tuning OpenAI models can unlock new possibilities for enterprises. OpenAI models are powerful and versatile, but they are not tailored to specific domains or tasks. By fine-tuning them with custom data, enterprises can enhance the models’ capabilities and relevance for their use cases.
For example, Brex used Scale AI to fine-tune GPT-3.5 for generating financial insights from transaction data, and achieved better results than the stock GPT-3.5 model.
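Fine-tuning a chat model through OpenAI's API starts with training data in JSONL format, one chat example per line. A minimal sketch of that data-preparation step is below; the example records and file name are hypothetical, while the `messages` schema follows OpenAI's published fine-tuning format for chat models.

```python
import json

# Hypothetical training examples for a finance-insights assistant
# (illustrative only; real fine-tuning data would come from the
# enterprise's own labelled transactions).
examples = [
    {
        "messages": [
            {"role": "system", "content": "You summarize card transactions."},
            {"role": "user", "content": "Spent $42.50 at CloudHost on 2023-08-01."},
            {"role": "assistant", "content": "Recurring infrastructure expense: $42.50 (CloudHost)."},
        ]
    },
]

# OpenAI's fine-tuning endpoint expects JSONL: one JSON object per line.
with open("training_data.jsonl", "w") as f:
    for record in examples:
        f.write(json.dumps(record) + "\n")
```

The resulting file would then be uploaded and a fine-tuning job created via the OpenAI API; curating and labelling data like this at scale is precisely the service Scale AI sells.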
The partnership between OpenAI and Scale AI is a win-win: OpenAI gains access to Scale's customers and expertise, while Scale can offer its customers access to OpenAI's models and APIs.
Together, they can create more value for enterprises that want to leverage AI for their business goals.
AI BYTE # 3 📢 : The GPU Shortage and Its Implications for India’s AI Vision
⭐ India has ambitious goals to become a global leader in artificial intelligence (AI), as stated by Prime Minister Narendra Modi.
However, one of the key challenges that India faces in achieving this vision is the shortage of GPUs, the specialized chips that power AI models and applications.
GPUs, or graphics processing units, are designed to handle complex computations and graphics, making them ideal for AI tasks such as training large neural networks, generating realistic text and images, and processing massive amounts of data.
GPUs are essential for advancing AI research and innovation, as well as deploying AI solutions in various domains and industries.
However, the global demand for GPUs has outstripped the supply, leading to a scarcity of these chips in the market. The situation is worsened by the fact that most of the GPUs are produced by a few companies, such as NVIDIA, AMD, and Intel, which are subject to geopolitical pressures and export restrictions.
For example, the US government has banned NVIDIA from selling its AI chips to China, one of India’s main competitors in AI.
This poses a serious challenge for India, which has a large and growing AI ecosystem, comprising over 4000 AI startups. Many of these startups are working on cutting-edge AI technologies, such as generative AI and large language models (LLMs), which require significant computational power and GPUs.
Moreover, India also has a strong AI research community, which needs GPUs to conduct experiments and publish papers.
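To see why LLM work is so GPU-hungry, a rough memory estimate helps. The model size and per-parameter byte counts below are illustrative assumptions based on the common mixed-precision-plus-Adam accounting, not figures from this article.

```python
# Rough GPU memory estimate for training an LLM with Adam in mixed precision.
# All figures are illustrative assumptions, not measurements.

params = 7e9               # assumed model size: 7 billion parameters
bytes_per_param = (
    2 +   # fp16 weights
    2 +   # fp16 gradients
    4 +   # fp32 master copy of weights
    8     # Adam moment estimates (two fp32 values)
)
total_gb = params * bytes_per_param / 1e9
print(f"~{total_gb:.0f} GB just for weights, gradients and optimizer state")
# -> ~112 GB just for weights, gradients and optimizer state
```

Activations and batch data add more on top, so even a mid-sized 7B-parameter model typically needs several data-center-class GPUs to train, which is why access to GPUs, not just talent, gates a startup's ambitions.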
How can India overcome this challenge and secure more GPUs for its AI development and research?
Here are some possible strategies:
Buying GPUs in Bulk. The Indian government can allocate funds to purchase GPUs in large quantities from the global market and distribute them to AI startups and researchers at subsidized rates.
This can help reduce the cost and increase the availability of GPUs for the Indian AI community. For example, the UK government is planning to spend USD 126.3 million to buy AI chips as part of its AI strategy.
Partnering with Local Cloud Service Providers (CSPs). The Indian government can collaborate with local CSPs, such as E2E Networks, which offer GPU-based cloud computing services to Indian customers.
The government can provide incentives and support to these CSPs to expand their GPU infrastructure and make it more accessible and affordable for AI startups and researchers. This can also help promote the ‘Make in India’ initiative and boost the local cloud industry.
Producing GPUs Locally. The Indian government can also invest in developing its own GPU manufacturing capabilities, similar to its efforts in building semiconductor fabs.
This can help India reduce its dependence on foreign suppliers and ensure a more reliable and sustainable supply of GPUs for its AI needs. The government can also attract global GPU makers, such as NVIDIA and AMD, to set up their production units in India, by offering them tax breaks and other incentives.
By adopting these strategies, India can secure more GPUs for its AI development and research, and accelerate its progress towards becoming an AI hub.
This can also help India gain a competitive edge in the global AI landscape, foster innovation, drive economic growth, and enhance social empowerment.
This was the snack that we all craved.