#031 Midweek Special - Samsung Invests in Irreverent Labs, AI Junk: The Dark Side of the Internet, Tailor GPT-3.5 Turbo, Nvidia’s AI Boom - Will It Last?
Fresh & Hot curated AI happenings in one snack. Never miss a byte 🍔
This snack byte will take approx 7 minutes to consume.
AI BYTE # 1 📢 : Irreverent Labs Secures Investment from Samsung Next to Use AI To Transform Video Content Production
⭐ Irreverent Labs, a Bellevue-based AI startup, has secured a strategic investment from Samsung Next that could bring its groundbreaking technology to millions of Samsung device users worldwide.
The size of the strategic investment was not disclosed. The company had previously raised more than $45 million over two funding rounds.
The startup, founded by veteran tech entrepreneurs Rahul Sood and David Raskino, has been developing highly advanced AI models that can generate 3D animated videos from simple text prompts.
The technology, dubbed the “video foundation model”, promises to revolutionize how video entertainment is produced, allowing anyone to create everything from video game concepts to short films with unprecedented quality and control.
However, the technology also poses challenges and risks, especially when it comes to generating longer, coherent video content safely. Experts caution that the technology is still early and unproven, and may raise concerns about potential misuse of realistic AI-generated video.
Irreverent Labs plans to release a developer preview later this year and aims to initially target gaming, creative professionals, and studios.
With Samsung’s global reach, Irreverent Labs’ novel AI synthesis technology may soon wind up in millions of pockets and living rooms around the world, unlocking new use cases that have not yet been discovered.
AI BYTE # 2 📢 : Nvidia’s AI Boom: How Long Will It Last?
⭐ Nvidia, the leading chip maker for artificial intelligence, has been enjoying a surge in demand and revenue as more customers adopt its products for AI applications.
However, some challenges and uncertainties remain in the long term.
Nvidia’s revenue and market cap have soared to record highs, thanks to its dominant position in AI chips. The company expects to grow even more as it secures more supply from its contract manufacturer TSMC.
In fact, analysts at UBS forecast that Nvidia’s current supply agreements could take its revenue to as much as $25 billion a quarter, not factoring in the expectation that supply will grow.
In the near term, demand for Nvidia’s products could hardly be hotter. Customers are racing to install chips that underlie artificial intelligence systems such as OpenAI’s ChatGPT.
Companies are increasingly convinced that AI is indispensable for their growth, and analysts estimate that Nvidia has a market share of more than 70% in AI chips.
Like its chips, Nvidia’s shares have been sought after by investors: they have already more than tripled this year, propelling the company to a valuation north of $1 trillion. The stock finished the past week higher despite dipping after its earnings report.
However, some analysts are concerned about the sustainability of the AI boom, as customers may over-order chips or cut back on spending if they don’t see enough returns from their AI investments.
There is also concern about the sustainability of demand over the longer term. Many technology transitions in the past have come with a boom of investment in new infrastructure followed by a lull in market uptake.
Some analysts see potential problems ahead in Nvidia’s supply chain. The company designs chips but relies on contract manufacturers, primarily Taiwan Semiconductor Manufacturing Co., to produce them. That makes Nvidia dependent on others’ ability to increase production at times of high demand.
Big companies that spend billions of dollars on AI chips will need to generate profits from them to justify further investments. And given that AI investments for the most part have yet to translate into bumper sales, some buyers of Nvidia’s chips have started to raise concerns about costs.
Meta Platforms, the owner of Facebook, plans to spend big on building up its AI computing capabilities into next year, but executives said last month that it wasn’t yet clear how strongly users would adopt AI-driven features.
Nvidia also faces geopolitical risks, especially from China, which accounts for 20% to 25% of Nvidia’s sales in its data center division. The U.S. government has imposed export restrictions on some of its advanced chips and may tighten them further in the future.
Nvidia’s CEO Jensen Huang, however, remains optimistic about the company’s prospects, saying that demand is still far from being met and that AI is indispensable for growth in many industries.
AI BYTE # 3 📢 : AI Junk: The Dark Side of the Internet
⭐ The rise of AI technology has brought with it many benefits, but it has also introduced a new problem: AI Junk.
This refers to the low-quality content generated by AI systems, such as spam, fake news, and misinformation, that is polluting the internet.
Large language models are trained on data sets built by scraping the internet for text, including all the toxic, silly, false, and malicious things humans have written online. The finished AI models regurgitate these falsehoods as fact, and their output is spread everywhere online.
This creates a vicious cycle where tech companies scrape the internet again, scooping up AI-written text that they use to train bigger, more convincing models, which humans can use to generate even more nonsense before it is scraped again and again.
Some examples of how AI junk is affecting the internet:
Content spinning: Spun and rewritten content has been polluting the internet for over 15 years, but nobody really noticed. It pains me to think that AI is now being trained on barely coherent junk that was generated to farm AdSense revenue.
ChatGPT: ChatGPT is being used to generate whole spam sites. This can distort the information we consume and even our sense of reality. It could be particularly worrying around elections, for example.
AI-generated listings: Etsy is flooded with AI-generated junk. This can make it difficult for users to find high-quality products and can harm the platform’s reputation.
AI editors: A job posting looking for an “AI editor” expects an output of 200 to 250 articles per week, a pace that all but guarantees a flood of low-quality content drowning out human writing.
Should the internet increasingly fill with AI-generated content, it might become a problem for the AI companies themselves.
That is because their large language models, the software that forms the basis of chatbots such as ChatGPT, train themselves on public data sets.
As these data sets become increasingly filled with AI-generated content, researchers worry that the language models will become less useful, a phenomenon known as “model collapse.”
Just as repeatedly scanning and printing the same photo will eventually reduce its detail, model collapse happens when large language models become less useful as they digest the data they themselves have created.
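The scanning-and-printing analogy can be made concrete with a toy simulation. This sketch is illustrative only: it stands in a one-dimensional Gaussian for a language model and refits it on its own samples each “generation”; all the numbers are invented, not from the article:

```python
# Toy illustration of "model collapse": repeatedly fit a simple "model"
# (here just a Gaussian) to samples drawn from the previous generation's
# fitted model, like rescanning and reprinting the same photo.
import math
import random

random.seed(0)                 # deterministic toy run
mean, std = 0.0, 1.0           # generation 0: the "real" data distribution
n = 50                         # small training set drawn each generation

for generation in range(500):
    # draw training data from the previous generation's model...
    sample = [random.gauss(mean, std) for _ in range(n)]
    # ...then refit the model on its own output
    mean = sum(sample) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in sample) / n)

print(f"std after 500 generations: {std:.6f}")  # spread collapses toward 0
```

Each refit loses a little of the distribution’s spread to sampling noise and the small-sample bias of the variance estimate, so the “model” degenerates over generations — the same dynamic researchers worry about when language models train on language-model output.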
I believe it’s important to be aware of this issue and take steps to address it. We need to be vigilant in our consumption of information online and be able to distinguish between high-quality content and AI junk.
AI BYTE # 4 📢 : Tailor GPT-3.5 Turbo to Your Business Needs with OpenAI’s Fine-Tuning Support
⭐ OpenAI has recently announced that it is offering new built-in support for users to fine-tune its GPT-3.5 Turbo large language model (LLM) on their own data.
This means that you can customize the model to handle your specific use cases and create unique and differentiated experiences for your customers.
GPT-3.5 Turbo is one of the most capable and cost-effective models in the GPT-3.5 family, optimized for chat and traditional completion tasks. It has been pre-trained on public data up to September 2021, but you can now bring your proprietary data for training the model and run it at scale.
According to OpenAI, fine-tuning GPT-3.5 Turbo on your data will give you several benefits, such as better instruction-following, shorter prompts, faster API calls, and lower costs.
You can also fine-tune the model to respond in a specific language, format, or tone that suits your brand voice.
Fine-tuning GPT-3.5 Turbo is easy and safe. You just need to prepare your data, upload the files, and create a fine-tuning job. Once the fine-tuning is finished, the model is available to be used in production with the same shared rate limits as the underlying model.
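Those three steps can be sketched in a few lines of Python. This is a minimal sketch assuming the `openai` SDK that was current at the time of the announcement; the training-file format is OpenAI’s documented chat-style JSONL, while the support-bot persona and example records are invented for illustration:

```python
import json
import os

# 1. Prepare your data: each training example is one JSON line holding a
#    full chat exchange in the same "messages" format the Chat API uses.
examples = [
    {"messages": [
        {"role": "system", "content": "You are a terse support assistant."},
        {"role": "user", "content": "How do I reset my password?"},
        {"role": "assistant", "content": "Settings > Account > Reset password."},
    ]},
    {"messages": [
        {"role": "system", "content": "You are a terse support assistant."},
        {"role": "user", "content": "Where can I download invoices?"},
        {"role": "assistant", "content": "Billing > History > Download."},
    ]},
]

with open("training_data.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# 2.-3. Upload the file and create the fine-tuning job. Guarded behind the
#       API key so the data-prep step above still runs offline.
if os.environ.get("OPENAI_API_KEY"):
    import openai  # pip install openai (SDK current at announcement time)
    upload = openai.File.create(file=open("training_data.jsonl", "rb"),
                                purpose="fine-tune")
    job = openai.FineTuningJob.create(training_file=upload.id,
                                      model="gpt-3.5-turbo")
    print("fine-tune job:", job.id)
```

When the job finishes, the resulting model gets an identifier prefixed with `ft:gpt-3.5-turbo`, which you pass as the `model` parameter in subsequent chat completion calls.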
OpenAI also ensures that your data is owned by you and is not used for training any other model besides your own.
If you are interested in fine-tuning GPT-3.5 Turbo for your business needs, you can check out OpenAI’s blog post for more details and instructions. You can also look forward to fine-tuning GPT-4, OpenAI’s flagship generative model that can even understand images, later this fall.
AI will entertain us all.
It will make our reality disappear and take us on a ride into an imaginary world.