🔆 Sweet Talking AI: On Attitudes to AI Integration in Organisations
A closer look at the different approaches to using generative AI tools in organisations, why shared learning matters, and one more question: how rude are you to your AI?
🗞️ Issue 12 // ⏱️ Read Time: 7 min
Hello 👋
In this week's newsletter
What are we talking about? Different attitudes and approaches to using generative AI in organisations.
Why is it relevant? We see some risk-averse actors banning generative AI tools in the workplace and others using the technology irresponsibly in the name of innovation. A more balanced approach is necessary, focusing on shared learning and responsibility.
How is it impacting society? How organisations in the public and private sectors actively or passively implement generative AI will directly influence whether the technology positively or negatively impacts the broader ecosystem.
Meanwhile in Big Tech 👀
⚔️ Google DeepMind CEO Demis Hassabis gets UK knighthood for ‘services to artificial intelligence’.
🔥 AI-generated text is hot in academia! “According to my latest knowledge update” was the giveaway phrase. Read about the growing concern of AI-generated research papers here.
🤝 Yahoo announced that it has acquired Artifact, the AI-driven news aggregation and discovery platform created by Instagram cofounders Kevin Systrom and Mike Krieger.
🌵 Experts warn AI is running out of training data. Sophisticated AI programs are consuming more data than ever before. This may lead them to run out of high-quality, natural data sources by 2026, with low-quality text and image data potentially depleted between 2030 and 2060.
🧞‍♀️ OpenAI has unveiled a new voice cloning technology called “Voice Engine”. The company shared preliminary insights and results from a small-scale preview of the model, which uses text input and a single 15-second audio sample to generate natural-sounding speech that closely resembles the original speaker.
Attitudes to Artificial Intelligence
Between Banning and Laissez-Faire
Last week, the US House of Representatives announced that congressional staff were no longer authorised to use Microsoft’s AI Copilot. The House’s Office of Cybersecurity deemed the tool a risk to users due to the threat of leaking House data to non-House-approved cloud services. Congress is not the only organisation with concerns about data leakage, or with questions about whether the technology can be used while adhering to the strict security and privacy regulations that bind government and state entities.
This question of privacy is one of the first topics Lumiera sees come up in discussions with leaders in both private and public sectors. We also see a spectrum of approaches to AI, with fear and banning on one side and laissez-faire attitudes that sacrifice responsible usage on the other.
[Image prompt: Robot standing on a busy street with flowers growing from its head.]
These contrasting extremes lack a foundation in knowledge, and both come with significant drawbacks. At one end, an overly cautious approach to AI prevents organisations from benefiting from technological advancements: they see the hurdles of drafting usage guidelines, forming a strategy, and staying informed as insurmountable, and sacrifice their potential as a result. At the other end are those who recognise the risks of implementing AI quickly and broadly but prioritise immediate gains, often overlooking ethical considerations and opening the door to irresponsible use. Both extremes can end in incidents, backpedalled work, and failed launches.
A Balanced Approach
An organisation’s approach to privacy concerns depends on many factors: its sector, its risk appetite, the types of data it handles, the level of generative AI knowledge within the organisation (especially among those leading it), and the latest hype.
At Lumiera, we believe there is a more balanced path to take. At its root is ambitious leadership that embraces knowledge, shared learning, curiosity, and responsibility. Instead of banning or punishing internal AI usage, leaders should try to understand what problem it is solving for the people in their organisations. Learning how AI is used across teams will uncover where guidelines are necessary and where impactful investments can be made. To promote literacy and encourage responsible usage, leaders should rally early AI adopters and leverage their excitement to facilitate knowledge sharing. In this conversation between Russell Johnson, Denisse Groenendaal-Lopez, and Mark Stern, all three agree that “creating a culture of continuous learning isn't just beneficial, it's essential.”
We agree! Ignorance is not bliss. Maintaining an open attitude of continuous learning is key, especially for leaders.
How can leaders have a proactive and encouraging attitude towards AI while being clear on mitigating the risks?