Me, Myself and AI? The Role of Trust in the AI Ecosystem
A closer look at current trends on public trust and AI, expectations on leaders and interesting cases to watch. Keep reading to learn more!
Issue 9 // ⏱️ Read Time: 8 min
Hello
Last week, Team Lumiera mentioned that the need for responsible AI is gaining traction, as we see public trust in AI begin to waver. Our conclusion is that trust is crucial for the sustainable integration of AI into our businesses, organisations, and communities. This week we take a look at current trends in trust and AI.
In this week’s newsletter
What are we talking about? Trust, AI, and society.
How is it relevant? The intersection of trust and AI navigates the ethical, social, and practical implications of increasingly pervasive artificial intelligence technologies in our daily lives.
Why does it matter? Building trust in AI systems is crucial for fostering innovation, ensuring responsible development and use of impactful technologies, and enhancing societal well-being.
Trust is a word we use every day but may have difficulty defining clearly. What does trust mean to you? Recent reports show that many of us are currently thinking about trust and AI. Of course, this is a broad topic since the AI ecosystem includes so many actors - the companies building the AI systems, businesses using them to make their day-to-day operations more efficient, and employees using two or three AI tools that they didn’t know existed (or maybe didn’t exist!) a year ago.
How can leaders increase the AI literacy of their employees to ensure that everyone is confident as they collaborate with AI systems?
One of the most interesting findings in these reports is that 62% of respondents expected CEOs to manage societal changes, not just those occurring in their business. Another conclusion is that restoring faith in innovation is crucial: if used and managed responsibly, innovation can be an effective tool in tackling significant societal issues. Leaders in all sectors have a very important role to play here.
Another trend to note is that 59% of respondents felt that governments lack the competence to regulate emerging innovations, and when institutions mismanage innovation, there is a risk of more rejection of and less enthusiasm for emerging technologies. In conversations with legislators, business executives, and government officials, Team Lumiera observes that leaders feel overwhelmed by the task of regulating and introducing new, fast-developing technologies such as generative AI.
Is this risk of innovation rejection avoidable? As with most things - it depends. We believe that trust is something you earn. If leaders make intentional decisions to manage AI responsibly and set an example for others, it’s possible to establish a strong foundation to build on. Only then can we look towards a future where AI is worthy of our trust.
🪷 Here are a few examples that we are watching with interest:
AI literacy as a way to move away from hype and irrational fears, and toward understanding. We cannot trust something we do not understand. Singapore is one of the first countries to have introduced a comprehensive National AI Strategy. As part of this, a junior high school partnered with AI Singapore to organise an event in which students learnt about AI and engaged in a dialogue with industry leaders.
Human-centered and multidisciplinary AI initiatives. Trailblazers within this field include Dr. Joy Buolamwini, Fei-Fei Li, and Timnit Gebru. Amplifying these nuanced and well-informed voices in the public discussion is key to building trust in AI. We have also seen that in order to restore trust, the public wants to see a rigorous examination of AI’s social impact by scientists and ethicists alike. Or, put more simply: not only tech bros on the panel at the next AI and tech conference …
Emphasizing the importance of explainable and transparent AI systems. Having access to more information about how an AI system is built can make it easier to approach new technology and draw relevant conclusions about how it is useful. Explainable AI aims to make AI’s decision-making process clear and easier to understand for the user, a prerequisite for any type of trust building.
What we are excited about:
🤯 The Future Today Institute Trend Report for 2024 is packed with mind-blowing facts and insights. Are you an AI doomer? “Amid the discourse surrounding AI, a contingent of pessimistic voices, often referred to as ‘AI doomers’ … For business leaders, navigating this landscape proves challenging, as they are presented with polarizing narratives of either utopian ideals or dystopian anxieties, resulting in a nuanced yet unsettling reality. While it’s crucial to remain vigilant against potential risks and mitigate them effectively, the prevalence of doomerism tends to overshadow constructive dialogue and proactive measures.”
⚠️ A startup called Cognition Labs just dropped the world's first AI software engineer: it's called Devin, and it can write complete apps by itself. In a demo, Devin was able to complete real jobs posted on Upwork. It also correctly resolved almost 14% of GitHub issues found in real-world open-source projects (better than many developers). Perplexity's CEO called it the first AI agent "that seems to cross the threshold of what is human level and works reliably." Even if Devin is getting completely slammed on Reddit - where the hype is considered to be a bigger deal than the product itself - the launch of Devin indicates that we are one step closer to Nvidia CEO Jensen Huang’s statement that we wouldn’t need to know how to code in the future.
Big tech news of the week:
Inflection AI (founded in 2022 by DeepMind cofounder Mustafa Suleyman) unveiled Inflection-2.5, an upgraded in-house model that approaches GPT-4’s performance while using only 40% of the training compute. Inflection-2.5 is now available for all users of Pi - a chatbot that Inflection AI released in May of last year and “designed to be empathetic, helpful, and safe.”
🥷 A former Google engineer has been charged with stealing trade secrets related to the company’s AI technology and secretly working with two Chinese firms. The information he is accused of taking relates to the infrastructure of Google's supercomputing data centers, which are used to host and train large AI models.
🇪🇺 The EU AI Act has officially been approved by European Union lawmakers. This means that the region has taken a big step closer to demanding transparency from providers and prohibiting certain uses of the technology. However, it will still take years for the rules to be implemented and enforced.
Lumiera has gathered the brightest people from the technology and policy sectors to give you top-quality advice so you can navigate the new AI Era.
Until next time.
Follow the carefully curated Lumiera podcast playlist to stay informed and challenged on all things AI.
How would you rate this week's newsletter?