#009 Your Next Interview With An AI ChatBot, Ex-Meta Researchers Raise $40M For A New AI Biotech Startup, How to Spot and Report Deceptive AI Political Ads?
Fresh & Hot curated AI happenings in one snack. Never miss a byte 🍔
This snack byte will take approx 6 minutes to consume.
AI BYTE # 1 📢 : Your Next Interview Is Going To Be With An AI ChatBot. Are You Ready?
⭐ AI chatbots are increasingly being used by companies to interview and screen job applicants, often for blue-collar jobs.
But, as with other algorithmic hiring tools before them, experts and job applicants worry these chatbots could be biased.
AI chatbots are designed to filter out unqualified applicants and schedule interviews with the ones who might be right for the job.
They ask straightforward questions like “Do you know how to use a forklift?” or “Are you able to work weekends?” and analyze the responses using natural language processing and machine learning.
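To make the mechanics concrete, here is a toy sketch of how such a screener might work. This is a hypothetical rule-based example, not any vendor's actual system; real products use far more sophisticated NLP, and every name below is invented for illustration.

```python
# Hypothetical sketch of a rule-based applicant screener.
# Real chatbot screeners use trained NLP models, not keyword lists.
import re

# Map each screening question to keywords that count as a qualifying answer.
SCREENING_QUESTIONS = {
    "Do you know how to use a forklift?": {"yes", "certified", "experienced"},
    "Are you able to work weekends?": {"yes", "sure", "available"},
}

def passes_screen(question: str, answer: str) -> bool:
    """Return True if the free-text answer contains any qualifying keyword."""
    keywords = SCREENING_QUESTIONS[question]
    tokens = set(re.findall(r"[a-z]+", answer.lower()))
    return bool(tokens & keywords)

print(passes_screen("Are you able to work weekends?", "Yes, I'm available."))  # True
print(passes_screen("Do you know how to use a forklift?", "I have never driven one."))  # False
```

Even this toy version hints at the bias problem discussed below: an applicant who answers affirmatively but in unexpected wording would be rejected by the keyword match.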
Some of the benefits of using AI chatbots are that they can save time and money for recruiters, reduce human errors and biases, and reach a wider pool of candidates.
However, there are also some drawbacks and challenges that need to be addressed.
One of the main concerns is that AI chatbots may not provide enough flexibility and human touch for job applicants, especially those who have disabilities, medical conditions, language barriers, or different cultural backgrounds.
For example, a chatbot may not give an opportunity for an applicant to request a reasonable accommodation or explain a special circumstance.
Another concern is that AI chatbots may inherit or amplify bias and discrimination from the data they are trained on or the criteria they are programmed with.
For example, a chatbot may favor applicants who use correct grammar and complex sentences, which could disadvantage some groups of people.
To prevent these issues, some experts suggest that companies should audit their AI chatbots for gender and racial bias, notify applicants and obtain consent before using them, and ensure transparency and accountability for their decisions.
Moreover, they argue that AI chatbots should not replace human judgment, but rather complement it.
AI chatbots are a new trend in recruiting that has both advantages and disadvantages.
As technology evolves, so should the ethical standards and best practices for using it.
AI BYTE # 2 📢 : Meet The Ex-Meta Researchers Who Raised $40M For A New AI Biotech Startup
⭐ AI is not only revolutionizing the fields of natural language processing and computer vision but also biology and biotechnology.
In this post, I will introduce you to a new startup that is applying AI to solve some of the most challenging problems in biology, such as predicting protein structures, designing new drugs, and engineering novel organisms.
The startup is called EvolutionaryScale, and it was founded by former Meta researchers who developed an AI language model for biology.
Their model, which is based on transformers, can generate realistic and accurate 3D structures of proteins from their amino acid sequences.
Proteins are the building blocks of life, and their shapes determine their functions and interactions. By predicting the shapes of millions of proteins, the model can help scientists discover new targets for drugs, design new molecules, and understand biological processes.
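A minimal illustration of the "language model for biology" idea is tokenization: treating each amino-acid residue as a token, the way a text model treats words. This sketch is purely illustrative and is not EvolutionaryScale's or Meta's actual code; the vocabulary ordering is an assumption made here.

```python
# Toy sketch: turning an amino-acid sequence into integer tokens,
# the first step before a transformer can process it.
# Not the actual preprocessing used by any real protein model.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues
VOCAB = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def tokenize(sequence: str) -> list[int]:
    """Map each residue in a protein sequence to an integer token id."""
    return [VOCAB[aa] for aa in sequence]

print(tokenize("MKT"))  # [10, 8, 16]
```

From token ids like these, a transformer learns embeddings and attention patterns over the sequence, which is how the model can relate distant residues that end up close together in the folded 3D structure.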
EvolutionaryScale has raised at least $40 million from Lux Capital and other prominent AI investors, such as Nat Friedman and Daniel Gross.
The startup is led by Alexander Rives, who ran Meta AI’s protein-folding team until the tech giant shut down the project in April.
Rives and his team of eight left Meta to pursue their vision of creating a general-purpose AI model for biology that can integrate various types of biological data and enable a wide range of applications.
The startup faces some formidable challenges, such as competing with Google’s DeepMind, whose AlphaFold is the most advanced protein-folding AI system to date.
EvolutionaryScale also needs to scale up its AI model significantly, which will require massive amounts of computing power and data.
The startup projects that it will spend $38 million in its first year, with $16 million going to computing power.
Moreover, the startup acknowledges that it may take ten years for biology AI models to help design products and therapies that can benefit society.
However, the startup also has some unique advantages, such as its speed and generality.
EvolutionaryScale claims that its model can make predictions 60 times faster than AlphaFold, though less accurately.
The startup also aims to expand beyond protein-folding to other domains of biology, such as DNA sequencing, gene expression, and epigenetics.
The startup’s long-term vision is to create a biological design lab that can use AI to create programmable cells, molecular machines, and synthetic organisms.
EvolutionaryScale is one of the most exciting and ambitious startups in the field of AI for biology. It represents the potential of AI to transform biology and biotechnology in ways that we can hardly imagine.
AI BYTE # 3 📢 : How to Spot and Report Deceptive AI Political Ads?
⭐ As the political season heats up, there is a growing concern about the use of generative AI to create deceptive and manipulative political ads.
Generative AI is a type of artificial intelligence that can produce realistic text, images, audio and video based on data and algorithms. It can be used to create fake or misleading content that can influence voters and undermine democracy.
News organizations need to learn how to report on these ads and educate the public about their potential risks and harms.
Generative AI can destroy the authenticity of a person’s likeness by disassembling and reassembling different elements of their appearance, voice, gestures and expressions.
This can make it hard to tell if a person actually said or did something in an ad, or if it was generated by AI.
News outlets can do more than just fact-check the claims in these ads. They can also expose the digital mechanics behind them and help the public understand how generative AI works and how it can be used for deception and manipulation.
Here are some questions reporters and editors can ask when reviewing generative AI political ads:
Who released the ad? Is it from the candidate’s side or an opponent’s side? Is it satire or serious?
Is generative AI used in the ad? If so, in what elements (video, audio, text, etc.)? Did the producer disclose that?
Did the person in the ad actually say or do what is shown or implied? If not, where did the content come from (text posts, speeches, etc.)?
Is the claim in the ad true or false? If false, what is the evidence?
Is the ad using generative AI to create a likeness of a person for a factual or rhetorical message? Did the person authorize that?
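The questions above can be captured as a simple review checklist that flags what a newsroom still needs to answer before publishing a story on an ad. This is a hypothetical sketch with invented names, not an existing newsroom tool.

```python
# Hypothetical sketch of a newsroom checklist for reviewing AI political ads.
AD_REVIEW_CHECKLIST = [
    "Who released the ad, and is it satire or serious?",
    "Which elements (video, audio, text) use generative AI, and was that disclosed?",
    "Did the person actually say or do what is shown or implied?",
    "Is the central claim true, and what is the evidence?",
    "Did the person depicted authorize the AI-generated likeness?",
]

def unanswered(answers: dict[str, str]) -> list[str]:
    """Return the checklist items that still lack a recorded answer."""
    return [q for q in AD_REVIEW_CHECKLIST if not answers.get(q)]

# A review where only the first question has been answered so far:
remaining = unanswered({AD_REVIEW_CHECKLIST[0]: "Released by the candidate's PAC."})
print(len(remaining))  # 4
```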
By asking these questions, news outlets can provide more context and transparency to their audiences and help them become more critical and informed consumers of political information.
They can also use generative AI ads as an opportunity to educate the public about the ethical challenges and implications of this technology for democracy and society.