#053 - Sparta Disrupting Commodity Trading Industry With A $17.5M Fundraise, The Opportunities and Challenges of Generative AI for Data Leaders, How Generative AI is Changing the Future of Robotics.
Fresh & Hot curated AI happenings in one snack. Never miss a byte 🍔
This snack byte will take approx 8 minutes to consume.
AI BYTE # 1 📢 : Meet Sparta, The AI Startup That Is Disrupting The Commodity Trading Industry With A $17.5M Fundraise.
⭐ Commodity trading is a complex and competitive industry that requires traders to constantly monitor and analyze vast amounts of data from multiple sources.
However, most of the data is scattered, unstructured, and outdated, making it difficult for traders to gain accurate and timely insights.
That’s where Sparta comes in. Sparta is a startup that provides live market intelligence and forecasting insight for commodity traders.
The company has built a platform that uses AI, Machine Learning (ML) and Data Science to capture non-liquid prices, such as physical premiums, OTC swaps, and freight, from brokers and pricing analysts around the world.
It then processes them into forward-looking insights and predictive analytics that enable traders to spot trading opportunities before their competition.
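To make the idea concrete, here is a toy, hypothetical sketch of how scattered broker quotes for a non-liquid price might be consolidated into a forward-looking signal. This is not Sparta's actual methodology; the quotes, field names and smoothing choice are invented for illustration.

```python
# Toy illustration only: consolidate scattered broker quotes for a non-liquid
# price (e.g., a physical premium) into a smoothed, forward-looking signal.
# NOT Sparta's methodology; all data and choices here are invented.
from datetime import date
from statistics import mean

# Hypothetical broker quotes: (quote_date, source, premium in $/bbl)
quotes = [
    (date(2023, 10, 2), "broker_a", 1.20),
    (date(2023, 10, 2), "broker_b", 1.35),
    (date(2023, 10, 3), "broker_a", 1.40),
    (date(2023, 10, 4), "broker_c", 1.55),
    (date(2023, 10, 5), "broker_b", 1.50),
]

def daily_consensus(quotes):
    """Average all quotes per day to reduce single-source noise."""
    by_day = {}
    for quote_date, _source, premium in quotes:
        by_day.setdefault(quote_date, []).append(premium)
    return {d: mean(vals) for d, vals in sorted(by_day.items())}

def ewma_signal(values, alpha=0.5):
    """Exponentially weighted average: recent quotes matter more."""
    signal = values[0]
    for value in values[1:]:
        signal = alpha * value + (1 - alpha) * signal
    return signal

consensus = daily_consensus(quotes)
values = list(consensus.values())
latest_signal = ewma_signal(values)
trend = latest_signal - values[0]
print(f"Smoothed premium signal: {latest_signal:.2f} $/bbl (trend {trend:+.2f})")
```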
Sparta has recently announced a series A funding round of $17.5 million, led by technology venture capital firm FirstMark, alongside existing shareholder Singular.
The company plans to use the new funding to expand its product offerings beyond its current focus on oil and gas products such as gasoline, diesel, jet fuel and naphtha.
The company intends to cover every product in the oil and gas sector by the end of next year, and then plans to enter other commodity markets, such as agriculture and metals. It also plans to develop premium insights, streamline workflow processes and build AI tools that can provide forward-looking predictions and reports.
Sparta also wants to grow its global presence. Currently located in Geneva, London, Houston, Singapore and Madrid, Sparta plans to expand its presence within these existing territories, as well as establish a foothold in new regions. The company has more than 70 customers globally, including Phillips 66, Chevron, Trafigura, Equinor and more.
Sparta’s predictive pricing engine and market opinion layer aim to give traders a competitive edge by providing accurate and timely information. The company’s co-founder and CEO Felipe Elink Schuurman predicts that the speed and accuracy of the information Sparta provides will be so vital to trading that not having it would put traders at a competitive disadvantage.
As Sparta continues to revolutionize the commodity trading industry using AI, it has its sights set on connecting predictive pricing, market opinion and news.
As Schuurman puts it - “Imagine that all of these things are linked, and you can immediately see how future news will impact prices. It’s going to be fascinating what will happen over the next five years, in terms of that enablement, that co-pilot side of assisting people in making those trading decisions.”
Sparta’s ambition to transform the commodity trading industry using AI signifies a major shift in how businesses approach decision-making.
As AI continues to evolve and become more integrated into various industries, companies such as Sparta remain at the forefront, pushing the boundaries of what’s possible and setting new standards for the future of trading.
AI BYTE # 2 📢 : The Opportunities and Challenges of Generative AI for Data Leaders
⭐ Generative AI is changing how software works, creating opportunities to increase productivity, find new solutions and produce unique and relevant information at scale.
However, as Gen AI becomes more widespread, there will be new and growing concerns around data privacy and ethical quandaries.
It’s important to explore the potential compliance and privacy risks of unchecked Gen AI use, how the legal landscape is evolving, and the best practices that limit risk while maximizing the opportunities of this powerful technology.
What are the risks of Gen AI?
Gen AI and Large Language Models (LLMs) can consolidate information and generate new ideas, but these capabilities also come with inherent risks. If not carefully managed, Gen AI can inadvertently lead to issues such as:
Disclosing Proprietary Information: Companies risk exposing sensitive proprietary data when they feed it into public AI models. That data can be used to provide answers for a future query by a third party or by the model owner itself.
Violating IP protections: Companies may unwittingly find themselves infringing on the intellectual property rights of third parties through improper use of AI-generated content, leading to potential legal issues.
Exposing Personal Data: Data privacy breaches can occur if AI systems mishandle personal information, especially sensitive or special category personal data.
Violating Customer Contracts: Using customer data in AI may violate contractual agreements — and this can lead to legal ramifications.
Risk of Deceiving Customers: Current and potential future regulations are often focused on proper disclosure for AI technology. For example, if a customer is interacting with a chatbot on a support website, the company needs to make it clear when an AI is powering the interaction, and when an actual human is drafting the responses.
How is the legal landscape evolving?
The legal guidelines surrounding AI are evolving rapidly, but not as fast as AI vendors launch new capabilities. A company that tries to minimize every potential risk and waits for the dust to settle could lose market share and customer confidence as faster-moving rivals capture attention.
It behooves companies to move forward ASAP — but they should use time-tested risk reduction strategies based on current regulations and legal precedents to minimize potential issues.
So far, AI giants have been the primary targets of several lawsuits revolving around their use of copyrighted data to create and train their models. Recent class action lawsuits filed in the Northern District of California allege copyright infringement and violations of consumer protection and data protection laws.
These filings highlight the importance of responsible data handling, and may point to the need to disclose training data sources in the future.
However, AI creators like OpenAI aren’t the only companies dealing with the risk presented by implementing Gen AI models. When an application relies heavily on a model, a model trained on unlawfully obtained data can taint the entire product.
For example, when the FTC charged Everalbum, maker of the photo app Ever, with deceiving consumers about its use of facial recognition technology and its retention of the photos and videos of users who deactivated their accounts, the company was required to delete the improperly collected data and any AI models or algorithms it developed using that data. This essentially erased the company’s entire business; the Ever app shut down in 2020.
At the same time, states like New York have introduced, or are introducing, laws and proposals that regulate AI use in areas such as hiring and chatbot disclosure. The EU AI Act, which is expected to be passed by the end of the year, would require companies to transparently disclose AI-generated content, ensure the content is not illegal, publish summaries of the copyrighted data used for training, and meet additional requirements for high-risk use cases.
What are some best practices for using Gen AI?
It is clear that CEOs feel pressure to embrace Gen AI tools to augment productivity across their organizations. However, many companies lack a sense of organizational readiness to implement them. Uncertainty abounds while regulations are hammered out and the first cases head toward litigation.
But companies can use existing laws and frameworks as a guide to establish best practices and to prepare for future regulations. Existing data protection laws have provisions that can be applied to AI systems, including requirements for transparency, notice and adherence to personal privacy rights.
That said, much of the existing regulation centers on the ability to opt out of automated decision-making, the right to be forgotten, and the right to have inaccurate information deleted.
This may prove challenging to deploy given the current state of LLMs. But for now, best practices for companies grappling with responsibly implementing Gen AI include:
Transparency and documentation: Clearly communicate the use of AI in data processing, document AI logic, intended uses and potential impacts on data subjects.
Localizing AI models: Running AI models internally and training them on proprietary data can greatly reduce the data security risk of leaks compared with using tools like third-party chatbots. This approach can also yield meaningful productivity gains because the model is trained on highly relevant information specific to the organization (a minimal local-model sketch with audit logging follows this list).
Starting small and experimenting: Use internal AI models to experiment before moving to live business data from a secure cloud or on-premises environment.
Focusing on discovering and connecting: Use Gen AI to discover new insights and make unexpected connections across departments or information silos.
Preserving the human element: Gen AI should augment human performance, not remove it entirely. Human oversight, review of critical decisions and verification of AI-created content helps mitigate risk posed by model biases or data inaccuracy.
Maintaining transparency and logs: Capturing data movement transactions and saving detailed logs of personal data processed can help determine how and why data was used if a company needs to demonstrate proper governance and data security.
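To ground a couple of these practices, here is a minimal, hypothetical sketch of running a small open model locally (so prompts never leave your environment) while keeping an audit log of each request. The model choice (gpt2 via the Hugging Face transformers library) and the log fields are illustrative assumptions, not a production recipe.

```python
# A minimal sketch, not a production setup: run a small open model locally
# and keep an audit log of each generation request. The model name (gpt2)
# and the log fields are illustrative choices only.
import logging
from transformers import pipeline  # pip install transformers torch

logging.basicConfig(
    filename="genai_audit.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

generator = pipeline("text-generation", model="gpt2")  # runs entirely locally

def generate_with_audit(prompt: str, user_id: str) -> str:
    # Log who asked, when, and how long the prompt was -- enough to
    # demonstrate governance without storing raw personal data in the log.
    logging.info("user=%s prompt_chars=%d model=gpt2", user_id, len(prompt))
    result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
    text = result[0]["generated_text"]
    logging.info("user=%s output_chars=%d", user_id, len(text))
    return text

print(generate_with_audit("Summarize our Q3 sales notes:", user_id="analyst_42"))
```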
Between Anthropic’s Claude, OpenAI’s ChatGPT, Google’s Bard and Meta’s Llama, we’re going to see amazing new ways to capitalize on the data that businesses have been collecting and storing for years, and to uncover new ideas and connections that can change the way a company operates.
Change always comes with risk, and lawyers are charged with reducing risk.
But the transformative potential of AI is so close that even the most cautious privacy professional needs to prepare for this wave. By starting with robust data governance, clear notification and detailed documentation, privacy and compliance teams can best react to new regulations and maximize the tremendous business opportunity of AI.
If you enjoyed this article, please share it with your network and leave a comment below. I would love to hear your thoughts on Gen AI and its implications for data leaders.
AI BYTE # 3 📢 : How Generative AI is Changing the Future of Robotics
⭐ One of the challenges of robotics is to teach robots how to perform complex tasks that require coordination, planning, and reasoning.
Traditionally, this involves programming robots with explicit rules and algorithms, which can be time-consuming and error-prone. Gen AI offers a new way to train robots by using human demonstrations, data-driven models, and physics-based simulations.
For example, Formant, a startup that develops cloud-based software for managing and monitoring robots, uses Gen AI to enable human operators to teach robots new skills from just a few examples. The company uses a technique called Diffusion Policy, which applies diffusion-based generative models to learning from human demonstrations, to generate fluid and human-like motions for robots.
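For intuition, here is a deliberately simplified sketch of the underlying idea of learning from demonstrations: plain behavior cloning with a small neural network that imitates recorded state-action pairs. It is not Diffusion Policy (which generates action sequences with a diffusion model) and not Formant's system; the dimensions and data below are synthetic.

```python
# Deliberately simplified: behavior cloning with a small MLP that maps robot
# states to actions. Synthetic data stands in for recorded human demonstrations.
import torch
import torch.nn as nn

state_dim, action_dim = 8, 2                          # e.g., joint angles -> gripper velocity
demo_states = torch.randn(256, state_dim)             # stand-in demonstration states
demo_actions = torch.tanh(demo_states[:, :action_dim])  # stand-in "expert" actions

policy = nn.Sequential(
    nn.Linear(state_dim, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, action_dim),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

for epoch in range(200):
    pred_actions = policy(demo_states)
    loss = nn.functional.mse_loss(pred_actions, demo_actions)  # imitate the demos
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final imitation loss: {loss.item():.4f}")
new_state = torch.randn(1, state_dim)
print("action for an unseen state:", policy(new_state).detach().numpy())
```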
Another challenge of robotics is to design robots that are suitable for different tasks and environments. This requires considering various factors, such as shape, size, materials, sensors, actuators, and energy consumption. Generative AI can help automate and optimize the robot design process by using data-driven models and evolutionary algorithms.
For instance, MIT CSAIL researchers have used Gen AI to design soft robots that can walk across land. The researchers used a technique called Instant Evolution, which leverages Generative Adversarial Networks (GANs) to create novel robot designs that meet certain constraints.
The resulting robots look nothing like any animal that has ever walked the earth, but they are efficient and robust.
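As a rough illustration of design search by evolution (a generic toy, not the MIT CSAIL method), the sketch below evolves a handful of made-up robot design parameters against a stand-in fitness function that plays the role of a physics simulation.

```python
# Generic toy of evolutionary design search, not the MIT CSAIL system:
# evolve a few robot "design parameters" against a made-up fitness function
# that stands in for a walking simulation.
import random

random.seed(0)

def fitness(design):
    # Hypothetical proxy for simulated walking distance: rewards long limbs
    # relative to body mass, with diminishing returns on extra legs.
    limb, mass, legs = design["limb"], design["mass"], design["legs"]
    return (limb / mass) * min(legs, 6) - 0.1 * legs

def random_design():
    return {"limb": random.uniform(0.1, 1.0),
            "mass": random.uniform(0.5, 3.0),
            "legs": random.randint(2, 10)}

def mutate(design):
    child = dict(design)
    child["limb"] = min(1.0, max(0.1, child["limb"] + random.gauss(0, 0.05)))
    child["mass"] = min(3.0, max(0.5, child["mass"] + random.gauss(0, 0.1)))
    if random.random() < 0.2:
        child["legs"] = max(2, min(10, child["legs"] + random.choice([-1, 1])))
    return child

population = [random_design() for _ in range(30)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    elites = population[:10]                      # keep the best designs
    population = elites + [mutate(random.choice(elites)) for _ in range(20)]

best = max(population, key=fitness)
print("best design:", best, "fitness:", round(fitness(best), 3))
```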
Generative AI is not just a fad or hype. It is a powerful tool that can transform electrical and electronics engineering, and it can revolutionize the robotics industry by enabling faster, cheaper, and more creative robot development.
As Generative AI becomes more accessible and advanced, we can expect to see more innovative and diverse applications of robotics in various domains.
I hope you enjoyed this post and found it informative. If you have any questions or comments, please feel free to share them below. Thank you for reading!