
🔆 The Current State of AI, according to Stanford Human AI Institute

Team Lumiera's TL;DR on one of the most comprehensive AI reports out there

🗞️ Issue 15 // ⏱️ Read Time: 5 min

Hello 👋

This week we are turning our attention to one thing only: Stanford’s Artificial Intelligence Index Report. This annual report is comprehensive and covers some of the most relevant insights on the current state of AI. It’s also 500 pages long. We read it all and have gathered some of the most relevant insights so you don’t have to.

In this week's newsletter

What we’re talking about: The seventh edition of the Artificial Intelligence report by Stanford University - a report that tracks, collates, distills, and visualises data relating to artificial intelligence.

How it’s relevant: The report provides a snapshot of where AI is at this very moment in time.

Why it matters: It gives decision-makers an understanding of the current AI landscape, and these data-based insights allow for better strategies.

Big tech news of the week…

🌍 Microsoft released VASA-1, which can essentially create a deepfake from only one photo and a speech audio clip. For example, someone created a rapping Mona Lisa. We’re not certain Da Vinci would approve.

🤏 Microsoft also introduced Phi-3, a new family of small language models (SLMs) that offer impressive performance in a compact size, making them suitable for resource-constrained environments like smartphones.

💾 Samsung announced that it has developed the industry’s first 10.7Gbps LPDDR5X DRAM, a type of high-speed memory designed for mobile devices. This is Samsung’s solution for the on-device AI era, which requires high-performance, high-capacity, and low-power memory.

🔍️ Perplexity AI reaches unicorn status with its latest funding round valuing the company at $1B. Perplexity offers an AI chatbot that summarises search results, lists citations for its answers, and helps users refine their queries to get the best responses. A personal favourite tool of the Lumiera team!

🎤 Drake released a new diss track titled "Taylor Made Freestyle," which features AI-generated verses that mimic the voices of the late Tupac Shakur and of Snoop Dogg. This unprecedented use of AI within the hip-hop industry has sparked varying reactions in the community.

Organisational Adoption

  • 55% of organisations are now using AI, including Generative AI, in at least one business unit or function.

  • Several labour reports released throughout 2023 indicated that worker productivity increases with the help of AI: The quality of the output is higher, and tasks are finished more quickly.

  • Implemented well, AI can also help bridge the gap between low- and high-skill workers. Without good oversight functions, however, it can lead to a decrease in performance.

  • Do you want to set up a clear roadmap to successfully adopt AI in your organisation? Talk to team Lumiera!


Team Lumiera is giving you the most relevant insights from a very long report in a lighter format.

Responsible AI

  • AI incidents (ethical misuse of AI) have increased rapidly: over the past 10 years, their number has grown more than twentyfold. A common type of AI incident is the deepfake.

  • Race-based medical biases in healthcare AI are evident and problematic. So are the social biases in image generation: stereotypical images concerning gender, generation, and race are common.

  • The lack of transparency and explainability in AI models poses big challenges for research. Part of this relates to the fact that the systems are so complex that even experts have a hard time explaining how they work. Additionally, there is a trade-off between explainability and performance: A more complex AI model gives us more sophisticated answers, but makes it harder to explain how the model reaches a specific conclusion.

  • Explainability, a key dimension of responsible AI, is a priority for many organizations looking to adopt AI. To make well-informed choices, it’s important to understand the rationale behind AI decisions.

Higher transparency allows for higher explainability in AI models.

❝

How can you integrate the insights from this report into your strategic decision making?

The Lumiera Question of the Week

Open Source

  • The rise of open-source foundation models is a major trend. In 2023, 66% of the 149 new foundation models released were open-source, up from just 33% in 2021.

  • However, closed-source commercial models are outperforming the open-source alternatives with a median 24% performance advantage across benchmarks.

  • Want to know more about open source? Read our open source issue here. 

And, last but not least: If you wish to read the full report, you can download it here.

Until next time.

Emma - CEO and Business Strategist
Sarah - Chief Policy Officer and Senior Advisor
Allegra - CTO and Data Specialist

Lumiera has gathered the brightest people from the technology and policy sectors to give you top-quality advice so you can navigate the new AI Era.

Follow the carefully curated Lumiera podcast playlist to stay informed and challenged on all things AI.