# 100 How Starbucks, Delta, Walmart, Chevron Are Using AI To Monitor Employee Messages & Internal Engagements
Fresh & Hot curated AI happenings in one snack. Never miss a byte 🍔
This snack byte will take approx 4 minutes to consume.
AI BYTE #1 📢: How Starbucks, Delta, Walmart, Chevron Are Using AI To Monitor Employee Messages & Internal Engagements
⭐ If you use Slack, Microsoft Teams, Zoom or other popular apps for work, chances are that AI is reading your messages and analyzing your behavior.
A growing number of companies, including Walmart, Delta, Chevron and Starbucks, are using AI-powered surveillance tools to monitor their employees.
These tools, developed by a startup called Aware, claim to help companies “understand the risk within their communications” by tracking employee sentiment, toxicity, performance and compliance.
Aware’s AI models can read text and process images, and flag behaviors such as bullying, harassment, discrimination, noncompliance, pornography, nudity and more.
Aware says its analytics tool, which monitors employee sentiment and toxicity, does not have the ability to identify individual employees. However, its eDiscovery tool, which is used for governance, risk and compliance, can do so in the event of extreme threats or other risk behaviors that are predetermined by the client.
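The two-tier design described above can be illustrated with a toy sketch: an analytics pass that reports only aggregate, anonymized counts, and a separate eDiscovery pass that surfaces author identity only for client-predefined risk terms. Simple keyword matching stands in here for the proprietary AI models, and all names, term lists, and messages are hypothetical.

```python
from collections import Counter

# Hypothetical client-defined lists; Aware's actual models are proprietary.
EXTREME_RISK_TERMS = {"threat", "violence"}
NEGATIVE_TERMS = {"hate", "unfair", "angry"}

def analytics_pass(messages):
    """Aggregate sentiment counts, with no author identifiers in the output."""
    counts = Counter()
    for _author, text in messages:
        words = set(text.lower().split())
        counts["negative" if words & NEGATIVE_TERMS else "neutral"] += 1
    return dict(counts)

def ediscovery_pass(messages):
    """Return (author, message) pairs only for predefined extreme-risk terms."""
    return [(author, text) for author, text in messages
            if set(text.lower().split()) & EXTREME_RISK_TERMS]

msgs = [("alice", "this policy is unfair"),
        ("bob", "lunch at noon?"),
        ("carol", "this is a threat")]

print(analytics_pass(msgs))   # aggregate only: {'negative': 1, 'neutral': 2}
print(ediscovery_pass(msgs))  # identity surfaced only when a risk term matches
```

The key design point is the separation: routine monitoring never touches identity, while the identifying path is gated behind a predefined trigger list.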
Aware’s CEO Jeff Schumann says the AI helps companies get real-time feedback from their employees, rather than relying on annual or biannual surveys. He also says the AI does not make decisions or recommendations regarding employee discipline, but simply provides information to the investigation teams.
However, not everyone is convinced by the benefits of AI surveillance in the workplace. Privacy experts, workers’ rights advocates and AI researchers have raised serious concerns about the ethical, legal and social implications of these tools.
Some of the main issues are:
Privacy: AI surveillance may violate employees’ privacy rights, especially if they are not informed of, or have not consented to, the monitoring. Even if the data is anonymized or aggregated, it may be easily de-anonymized or inferred by the AI models, which can make accurate guesses based on language, context, slang and more. Employees may also have no recourse or transparency if their data is misused, leaked or hacked.
Freedom of expression: AI surveillance may create a chilling effect on what employees can say or do in the workplace, limiting their creativity, innovation and collaboration. Employees may also self-censor or avoid expressing their opinions or concerns, for fear of being flagged or penalized by the AI. This may undermine the trust and morale within the organization, and reduce the diversity of perspectives and ideas.
Discrimination: AI surveillance may introduce or amplify biases and discrimination in the workplace, especially if the AI models are not trained or tested on diverse and representative data sets. Employees from marginalized or minority groups may be disproportionately targeted or harmed by the AI, due to their language, culture, identity or expression. This may create a hostile or unfair work environment, and violate anti-discrimination laws and policies.
Accountability: AI surveillance may pose challenges for accountability and oversight, especially if the AI models are not transparent or explainable. Employees may not know how or why the AI made certain judgments or decisions, or how to challenge or appeal them. Employers may also rely too much on the AI, without verifying or validating its outputs, or considering the human and social factors involved.
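The de-anonymization worry raised in the privacy point above is easy to demonstrate with made-up numbers: “anonymous” per-team sentiment averages stop being anonymous the moment a team has only one member. Everything below (names, teams, scores) is hypothetical.

```python
from collections import defaultdict

# Hypothetical records: (employee, team, sentiment score in [-1, 1]).
records = [
    ("alice", "payments", -0.8),
    ("bob", "payments", 0.2),
    ("carol", "night-shift", -0.9),  # sole member of her team
]

teams = defaultdict(list)
for name, team, score in records:
    teams[team].append((name, score))

for team, members in sorted(teams.items()):
    avg = sum(score for _, score in members) / len(members)
    # An "aggregate" over a group of one is just that person's score.
    exposed = members[0][0] if len(members) == 1 else None
    print(team, round(avg, 2), "re-identifies:", exposed)
```

Here the night-shift “aggregate” of -0.9 is exactly carol’s individual score, which is why privacy researchers insist on minimum group sizes before releasing any aggregated statistic.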
AI surveillance in the workplace is not a new phenomenon, but it is rapidly expanding and evolving, thanks to advances in AI technology and pandemic-driven changes in work patterns.
While some companies may see it as a way to improve productivity and efficiency, others may use it as a tool to control and exploit their workers.
Remember “Big Brother” from the George Orwell classic? Well, we are just getting started.