#005 Midweek Special - AI In War Decision-Making, Meta Challenges OpenAI with LLaMA 2, and It's Free.
Fresh & Hot curated AI happenings in one snack. Never miss a byte 🍔
This snack byte will take approx 5 minutes to consume.
AI BYTE # 1 📢 : How AI Is Changing the Way We Make Decisions in War
⭐AI is increasingly being used to assist human decision-making in warfare, from gunsights that detect and identify targets to chatbots that suggest artillery strikes, to algorithms that analyze drone footage and propose kill chains.
These tools are not fully autonomous weapons, but they still raise novel ethical and legal questions about the role and responsibility of humans in the loop, especially when the machines are prone to errors, glitches, or biases.
Some of the challenges include how to ensure human judgment is not unduly influenced or overridden by the machine, how to design the tools to be transparent and trustworthy, and how to assign accountability when things go wrong.
The use of AI decision support tools also poses a risk of losing something essential about the human act of war, such as the moral responsibility and the qualitative judgment that come from making life-and-death decisions in an open and uncertain world.
As militaries plan to integrate AI into more aspects of warfare, creating a seamless network of human-machine teams, it is important to consider where to draw the line between human and machine agency, and how to preserve the ethical and legal norms that govern war.
The Rise Of Machine Learning
The rise of machine learning has set off a paradigm shift in how militaries use computers to help shape the crucial decisions of warfare—up to, and including, the ultimate decision.
With machine-learning-based decision tools, “you have more apparent competency, more breadth” than earlier tools afforded.
A soldier on the lookout for enemy snipers might do so through an AI-powered gunsight capable of “human target detection” at a range of more than 600 yards.
Decision support tools that sit further removed from the battlefield can be just as decisive. The Pentagon appears to have used AI in the sequence of intelligence analyses and decisions leading up to a potential strike.
Other countries are more openly experimenting with such automation. The Ukrainian army uses a program that pairs each known Russian target on the battlefield with the artillery unit that is best placed to shoot at it.
Russia claims to have its own command-and-control system driven by AI.
The Challenges Of Accountability
AI-based tools risk glitching in unusual and unpredictable ways, and it's not clear that the human involved will always be able to tell when the answers on the screen are right or wrong.
In their relentless efficiency, these tools may also not leave enough time and space for humans to determine if what they’re doing is legal.
Eventually, militaries plan to use machine intelligence to stitch many of these individual instruments into a single automated network that links every weapon, commander, and soldier to every other.
In these webs, it's not clear whether the human's decision amounts to much of a decision at all.
Finding the right people to blame when things go wrong is going to become an even more labyrinthine task as AI tools become larger and more complex.
Unfortunately, those who write these tools, and the companies they work for, aren't likely to take the fall. Building AI software is a lengthy process that stands far removed from the actual material facts of metal piercing flesh.
AI BYTE # 2 📢 : Meta Challenges OpenAI with LLaMA 2, Its First Open-Source AI Language Model.
⭐ Meta has released LLaMA 2, its first large language model that anyone can use for free.
LLaMA 2 is a suite of AI language models that can generate text and be built into chatbots, similar to OpenAI's ChatGPT and GPT-4.
Meta hopes that by making LLaMA 2 open-source, it will gain an edge over its rivals and benefit the AI community.
How LLaMA 2 differs from OpenAI’s models
Unlike OpenAI's models, which are proprietary and accessible only through OpenAI's website or API, LLaMA 2 can be downloaded from Meta's launch partners Microsoft Azure, Amazon Web Services, and Hugging Face (a minimal loading sketch follows at the end of this section).
LLaMA 2 is also more customizable and transparent than OpenAI’s models, allowing developers and researchers to tinker with it and study its biases, ethics, and efficiency.
It is trained on more data than its predecessor, LLaMA, and uses human feedback to fine-tune its behavior.
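To make the download route concrete, here is a minimal sketch of loading the chat-tuned 7B LLaMA 2 checkpoint from Hugging Face with the transformers library. It assumes you have accepted Meta's license for the gated meta-llama/Llama-2-7b-chat-hf repository and authenticated locally; the model ID, prompt, and generation settings shown are illustrative, not the only way to run it.

```python
# Minimal sketch: loading LLaMA 2 from Hugging Face with the transformers library.
# Assumes access to the gated "meta-llama/Llama-2-7b-chat-hf" repo has been granted,
# and that `pip install transformers accelerate` and `huggingface-cli login` are done.
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "meta-llama/Llama-2-7b-chat-hf"  # 13B and 70B variants are also published

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # spread weights across available devices

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(generator("Explain in one sentence what an open large language model is.",
                max_new_tokens=80)[0]["generated_text"])
```

Because the weights themselves are downloadable, the same setup is what lets researchers do the tinkering, fine-tuning, and bias studies mentioned above, rather than querying a closed API.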
The challenges and risks of LLaMA 2
Meta admits that there is still a performance gap between LLaMA 2 and GPT-4.
Meta also does not disclose the data set that it used to train LLaMA 2, and cannot guarantee that it did not include copyrighted works or personal data.
LLaMA 2 still suffers from the same problems as other large language models: it can produce false or offensive language that can harm users or society.
How Meta plans to improve LLaMA 2
Meta says that by releasing LLaMA 2 into the wild, it will learn important lessons about how to make its models safer, less biased, and more efficient.
Meta also applied a mix of machine learning techniques to reduce the risk of repeating its past mistakes with Galactica, which was taken offline after public criticism, and LLaMA, whose weights leaked online after a limited research release.
Meta says that it welcomes external researchers and developers to probe LLaMA 2 for security flaws, arguing that this scrutiny will make it safer than proprietary models.
The potential impact of LLaMA 2
LLaMA 2 poses a considerable threat to OpenAI, as it offers a powerful alternative for many use cases that do not need GPT-4.
It might also help companies build products and services more quickly than they could with a big, sophisticated proprietary model.
LLaMA 2 could be a huge win for Meta, as it shows its commitment to openness and innovation in AI.
Innovation cannot be stopped