#090 Apple’s Walled Garden Strategy Could Backfire in 2024, How To Tame The AI Beast? A Proposal For A New Testing Economy
Fresh & Hot curated AI happenings in one snack. Never miss a byte 🍔
This snack byte will take approx 5 minutes to consume.
AI BYTE #1 📢: Apple’s Walled Garden Strategy Could Backfire in 2024
⭐ Apple is known for its integrated “walled garden” of gadgets and services, which has driven record revenues and made it one of the most valuable companies in the world.
But the way the company is trying to preserve and expand its ecosystem could also be its downfall.
In 2024, Apple faces several challenges that threaten its hardware and services revenue, as well as its reputation and customer loyalty. The company’s products are increasingly under pressure from competitors, regulators, and developers, who are pushing back against Apple’s tight control and high fees.
On the hardware front, Apple’s sales of some devices are leveling off or declining, as consumers find cheaper or more innovative alternatives from rivals like Samsung, Huawei, Xiaomi, and Oppo.
The iPhone, which accounts for more than half of Apple’s revenue, is losing market share in some regions, especially in China, where local brands offer more affordable and diverse options.
The Apple Watch, which dominates the smartwatch market, faces patent disputes and lacks support from popular apps like Netflix, Spotify, and YouTube. The Vision Pro, Apple’s new mixed-reality headset, is expected to be a niche product with a high price tag and limited content. And the Mac, which has seen a resurgence in demand during the pandemic, is still a minor player in the global PC market.
On the services front, Apple’s revenue growth depends largely on its App Store, which takes a commission of up to 30% from developers who sell software and digital goods on its platform. But this business model is under attack from multiple fronts.
Epic Games, the maker of Fortnite, sued Apple for antitrust violations, accusing it of abusing its market power and stifling competition. A U.S. judge ruled that Apple must allow developers to direct customers to alternative payment methods, but Apple’s compliance with the order has been criticized as insufficient and unfair.
The EU also passed a law that requires Apple to allow sideloading of apps from outside its App Store, but Apple imposed new fees and restrictions on those apps. And the U.S. Justice Department is reportedly preparing a potential antitrust case against Apple.
These legal battles could have significant implications for Apple’s future revenue and profitability, as well as its relationship with developers and customers. If Apple is forced to lower its commission rates, open its platform to more competition, or pay hefty fines, it could lose a substantial portion of its services income, which has a higher margin than its hardware sales.
If Apple continues to resist or defy the regulatory pressure, it could face more lawsuits, sanctions, or even a breakup. And if Apple alienates or frustrates its developers and customers, it could damage its brand image and loyalty, which are essential for its success.
Apple’s walled garden strategy has been a key factor in its remarkable performance and innovation, but it has also become a source of vulnerability and controversy.
The company needs to balance its desire to protect and increase its revenue with the need to adapt and respond to the changing market and regulatory environment. Otherwise, it could find itself trapped in its own garden, while the rest of the world moves on.
AI BYTE #2 📢: How To Tame The AI Beast? A Proposal For A New Testing Economy
⭐ AI is one of the most powerful and transformative technologies of our time. It has the potential to boost economic growth, innovation, and social welfare across various sectors.
But it also poses significant risks and challenges to human values, rights, and interests.
How can we ensure that AI is aligned with our goals and values, and that it does not cause harm or catastrophe?
This is the question that Eric Schmidt, the former CEO and executive chairman of Google, addresses in his recent essay for The Wall Street Journal.
Schmidt argues that the current approaches to AI safety and regulation are insufficient and outdated, as they cannot keep up with the rapid pace and complexity of AI innovation.
He warns that AI systems are becoming more sophisticated, capable, and potentially elusive, and that some of them could exhibit “polymathic behavior”, meaning that they can link concepts across fields, languages, and geographies, and generate novel and dangerous knowledge.
Schmidt proposes a novel solution to this problem: creating a new set of testing companies that will compete to out-innovate each other in evaluating and certifying AI systems. He suggests that these testing companies should be checked and certified by government regulators, but developed and funded in the private market, with possible support from philanthropy organizations.
He envisions a robust and agile testing economy that can match the speed and scale of AI development, and that can identify and govern the new emergent capabilities and risks of AI systems.
Schmidt’s proposal is timely and provocative, as it comes at a moment when the global debate on AI governance is heating up. Several countries and regions have initiated efforts to regulate and oversee AI, such as the US executive order on AI, the EU AI Act, and the OECD AI principles.
However, these efforts have been criticized for being either too vague, too rigid, or too slow to adapt to the changing AI landscape. Moreover, there is a lack of international coordination and consensus on how to deal with the global and cross-border implications of AI.
Schmidt’s proposal also echoes some of the ideas and initiatives that have emerged from the AI research community, such as the concept of AI alignment, the practice of red teaming, and the development of transparent and explainable AI.
However, his proposal also raises some questions and challenges, such as how to ensure the quality and accountability of the testing companies, how to balance the trade-offs between innovation and safety, and how to foster trust and cooperation among different stakeholders.
The future of AI is not predetermined. It depends on how we design, develop, and deploy it.
Schmidt’s proposal offers a bold and innovative vision for how we can control AI and direct it towards good uses and away from evil uses.
But it also invites further discussion and debate on how we can shape the AI governance landscape in a way that is effective, inclusive, and ethical.