#004 Dark Side of ChatGPT, American Express Adopts AI, Rise of AI By The Police, Softbank Makes a Comeback with AI
Fresh & Hot curated AI happenings in one snack. Never miss a byte 🍔
This snack byte will take approx 7 minutes to consume.
AI BYTE # 1 📢- How American Express Plans to Use AI Services Without Building Its Own Language Model
⭐ Artificial intelligence (AI) is transforming the financial industry, and American Express (Amex) is one of the leading players in this field.
However, unlike some of its competitors, Amex does not intend to develop its own large language model (LLM) like OpenAI’s ChatGPT or Google’s Bard. Instead, it plans to leverage AI services through partnerships with established technology companies.
Why Amex Does Not Need Its Own LLM
An LLM is a type of AI model that can generate natural language text based on a given input. Examples of LLMs include OpenAI’s ChatGPT, which can write realistic stories and conversations, and Google’s Bard, which can answer questions and draft text conversationally.
However, creating an LLM from scratch requires a lot of time, money, and expertise. LLMs can also pose ethical and social risks, such as generating biased or harmful content.
According to Luke Gebb, Senior Vice President of American Express Digital Labs, Amex believes that it would be more beneficial to utilize LLMs through partnerships rather than creating its own. He said:
“Our hypothesis at the moment is that we would be better suited using LLMs through partnerships. I don’t see us spinning up our own LLM from scratch.”
Although he did not mention specific partners, he hinted at Amex’s past collaboration with Microsoft in developing cloud-based AI technologies. Microsoft is one of the major investors in OpenAI and has access to its LLMs.
What Kinds of AI Services Amex Plans to Use
Gebb shared some of the activities and services where Amex plans to incorporate AI in the future. These include:
Using AI to expedite transaction approval by analyzing various factors such as fraud risk, customer behavior, and merchant profile.
Employing LLMs to analyze customer interaction data and sentiment by extracting insights from voice, text, and image inputs.
Utilizing AI for credit approval based on historical trends by applying machine learning algorithms to credit scores, income levels, and spending patterns.
These AI services are expected to enhance Amex’s customer experience, operational efficiency, and risk management. For example, using AI for credit approval can help Amex make faster and more accurate credit decisions while also reducing its exposure to bad debt.
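To make the credit-approval example concrete, here is a minimal sketch of what such a model might look like. Everything in it is an illustrative assumption: the features, the toy data, and the approval threshold are hypothetical, not Amex’s actual system.

```python
# Hypothetical sketch of an ML credit approver: score an applicant from
# credit score, income, and spending pattern, as described above.
# Features, data, and threshold are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Toy historical data: [credit_score, annual_income, avg_monthly_spend]
X = np.array([
    [720, 85_000, 2_400],
    [640, 42_000, 1_900],
    [780, 120_000, 3_100],
    [590, 30_000, 2_800],
])
y = np.array([1, 0, 1, 0])  # 1 = approved, 0 = declined (toy labels)

# Scale the features, then fit a simple logistic-regression approver.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

# Score a new applicant; approve only above a confidence threshold.
applicant = np.array([[705, 78_000, 2_200]])
prob = model.predict_proba(applicant)[0, 1]
decision = "approve" if prob > 0.5 else "refer to manual review"
print(f"approval probability {prob:.2f} -> {decision}")
```

A production system would of course be trained on vastly more data and audited for fairness, but the shape is the same: historical trends in, a risk score out, and a decision rule on top.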
How Amex Balances Innovation and Caution in Fintech
Amex’s approach to AI reflects its overall strategy in fintech in recent years. While some of its competitors have embraced disruptive technologies such as cryptocurrency payments, Amex has taken a more measured approach by offering a crypto rewards card without enabling direct cryptocurrency payments.
This shows that Amex is not afraid of innovation but is cautious about the potential pitfalls of new technologies. By partnering with industry leaders in AI development, Amex can tap into their expertise and resources without investing in building its own LLM from scratch.
Amex’s strategy may prove wise in the long run, as it lets the company combine the best of both worlds: the cutting-edge capabilities of AI and the trustworthiness of its brand.
AI BYTE # 2 📢- The Rise Of AI In Traffic Enforcement: How Police Are Using Smart Cameras And Drones To Catch Offenders
⭐ I am always fascinated by the latest innovations and how they impact our lives.
One of the areas that has seen a rapid development of AI-powered solutions is traffic enforcement.
UK police are deploying AI-enabled cameras and drones to catch multiple traffic offenses, a shift that could spell the end of the traditional speed camera.
What are the new technologies and how do they work?
Police forces across the UK are embracing high-tech solutions to catch speeding drivers.
These include:
Stealth speed cameras that can detect motorists in both directions and are installed on routes with a history of collisions and speeding incidents.
Drones that can capture dangerous driving and speeding in hotspot locations and can be operated remotely.
AI-enabled cameras that can detect a range of offenses, such as using mobile phones, not wearing seat belts, or having loud music in the car.
These devices use AI software to analyze the photos taken and forward any evidence to the police for manual confirmation.
The police can also check the car’s MOT, tax, and insurance status, allowing stolen or dangerous vehicles to be identified quickly and their drivers stopped.
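As a rough illustration of this flag-then-confirm workflow, here is a hypothetical sketch. The offense labels, confidence scores, and threshold are assumptions for illustration, not taken from the software the police actually run.

```python
# Hypothetical sketch of the flag-then-confirm workflow described above:
# an AI model scores each roadside photo, and only confident detections
# are queued for a human officer to verify before any action is taken.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.85  # assumed cutoff; below this, discard


@dataclass
class Detection:
    photo_id: str
    offense: str       # e.g. "mobile_phone_use", "no_seat_belt"
    confidence: float  # model's confidence in the detection


def triage(detections: list[Detection]) -> list[Detection]:
    """Keep only detections confident enough to merit human review."""
    return [d for d in detections if d.confidence >= CONFIDENCE_THRESHOLD]


# Simulated model output for three photos.
raw = [
    Detection("IMG_001", "mobile_phone_use", 0.93),
    Detection("IMG_002", "no_seat_belt", 0.41),
    Detection("IMG_003", "mobile_phone_use", 0.88),
]

for d in triage(raw):
    # In the described workflow a human confirms every flag, so the
    # AI narrows the queue but never issues a penalty on its own.
    print(f"{d.photo_id}: possible {d.offense} ({d.confidence:.0%}) "
          "-> forward to officer for confirmation")
```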
What are the benefits and challenges of these technologies?
These technologies are intended to reduce collisions and injuries on the road, as they deter drivers from committing offenses by increasing the risk of being caught.
However, this also raises concerns about privacy and the potential misuse of these technologies. It is unclear whether motorists are aware that they are being monitored by drones or AI-enabled cameras, and whether they have any right to challenge or appeal the evidence.
Human oversight is still required to confirm offenses, as AI software may not be accurate or reliable enough.
What are the implications and future prospects of these technologies?
These technological advancements are a step toward a new era of high-tech traffic enforcement but also ignite debate on how best to balance safety and privacy.
As these technologies evolve, they may become even more capable, potentially detecting subtler in-car distractions and other nuanced driving offenses.
As a technology expert, I think this topic is very relevant and important for anyone who drives or uses public roads.
I believe that AI can be a powerful tool for improving road safety and saving lives, but it also needs to be regulated and controlled to ensure that it does not infringe on our civil liberties or personal data.
AI BYTE # 3 📢- How SoftBank is Betting on AI After a Year of Losses
⭐ SoftBank Group, the Japanese technology investor, has posted its first investment gains in 18 months and announced its plans to make new bets in artificial intelligence-related fields.
The company’s flagship Vision Fund unit reported $1 billion in investment gains in the April-June quarter, thanks to the recovery of the technology sector. This was the first time in six quarters that the unit was profitable, after suffering losses from the global tech selloff and higher interest rates.
SoftBank’s CEO Masayoshi Son said he had devoted himself to AI and was making his own inventions, often by talking to the AI chatbot ChatGPT. He said he wanted to create a “singularity” where AI surpasses human intelligence.
He also said he had dedicated 97% of his time to AI research and development.
The company said it would select new investments with a stricter eye on their AI technology, as some observers doubt whether SoftBank’s existing investments are truly AI-centric.
One recent Vision Fund investment is a deal with Tractable, a U.K. startup that helps insurers process claims using AI. The company also invested in AutoX, a Chinese startup that develops self-driving technology using AI.
SoftBank also said it was preparing for the listing of chip designer Arm in the U.S., which is expected to be a major deal in the semiconductor industry. Arm’s chips are widely used in smartphones and other devices, and are seen as crucial for AI applications.
SoftBank bought Arm for $32 billion in 2016 and agreed to sell it to Nvidia for $40 billion in 2020, but that deal collapsed in 2022 in the face of regulatory opposition.
SoftBank’s AI strategy reflects its ambition to become a leader in the field and to turn around its tech investments after a year of losses. The company hopes that its bets on AI will pay off in the long run and create value for its shareholders and society.
AI BYTE # 4 📢- The Dark Side of ChatGPT: How AI Moderators Suffer from Trauma
⭐ ChatGPT is one of the most advanced AI chatbots in the world, capable of generating strikingly realistic text. But behind its success there is a hidden cost: the human moderators who have to filter violent and sexual content out of its training data.
It’s appalling how the contractors in Kenya have been traumatized by this work, and it raises the question of what OpenAI and other AI companies should do to protect them.
ChatGPT is powered by a massive neural network that learns from billions of words scraped from the internet. To make sure that the chatbot does not produce harmful or offensive outputs, OpenAI hired a team of moderators in Nairobi to review and remove texts describing rape, murder, abuse, and other disturbing scenarios.
The moderators had to work for 12 hours a day, six days a week, for $3 an hour.
These moderators have suffered from psychological stress and trauma as a result of their work. Richard Mathenge, who led the team that moderated sexual content, said he had nightmares and flashbacks of the texts he read.
He also developed erectile dysfunction and lost interest in sex. Another moderator, Mary Wanjiku, said she felt depressed and suicidal after reading texts that involved children being sexually abused. She also had trouble sleeping and eating.
OpenAI clearly provided little support or care to these moderators. The company did not offer them any counseling or mental health services. Nor did it inform them about the nature and purpose of their work, or the potential risks involved.
The moderators were not given any contracts or legal protections. They were also threatened with termination if they spoke to anyone about their work.
It’s high time for OpenAI and other AI companies to take responsibility for the well-being of their human workers, who are essential for ensuring the safety and quality of their products.
It’s important that they adopt ethical standards and best practices for AI moderation, such as providing adequate training, compensation, counseling, and transparency to their contractors. They should also foster public awareness and education about the challenges and opportunities of AI development.
Without proper regulation and accountability, AI could pose serious threats to human rights, dignity, and security. AI would also benefit from the human values, emotions, and creativity that are so often missing or distorted in its outputs.