🔆 The End of Apps? LAMs have entered the chat!


Was this email forwarded to you? Sign up here

🗞️ Issue 5 // ⏱️ Read Time: 7 min

Hello 👋

“There’s an app for that.” Apple’s viral, trademarked slogan summarizes how we’ve been interacting with our phones since the App Store launched in 2008. But is the app era coming to an end?

In this week’s newsletter:

What are we talking about? The next step in artificial intelligence: Large Action Models (LAMs)

How does it work? LAMs don’t just turn a request into a series of steps; they understand the logic that connects and surrounds those steps. Neural and symbolic AI architectures are integrated to improve how a neural network arrives at a decision and to make that process more explainable (a minimal sketch follows below).

Why is it relevant? LAMs understand and replicate human actions on various technology interfaces, making them a transformative force in reshaping human-machine interactions. They have the potential to significantly impact many industries by enabling the automation of entire processes and are a testament to the historic nature of AI development.

LAMs are designed to observe, understand, and replicate human actions on various interfaces, making them a transformative force in reshaping and impacting human interaction with AI technology.
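Curious what that neuro-symbolic combination could look like? Here is a minimal, purely illustrative Python sketch (every name in it is made up, and no real LAM works exactly this way): a stubbed “neural” component proposes the next action, and a handful of explicit, human-readable rules approve or reject it, which is also what makes the decision explainable.

```python
# Hypothetical neuro-symbolic step: a neural model proposes an action,
# symbolic rules accept or reject it and explain why.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    params: dict

def neural_propose(goal: str, context: dict) -> Action:
    """Stand-in for the neural part: in a real LAM this would be a
    learned model scoring candidate actions for the current goal."""
    return Action(name="book_taxi", params={"time": context.get("meeting_time")})

# The symbolic part: explicit, human-readable rules.
RULES = [
    ("missing_time", lambda a, ctx: a.name == "book_taxi" and a.params.get("time") is None),
    ("outside_budget", lambda a, ctx: ctx.get("fare_estimate", 0) > ctx.get("budget", float("inf"))),
]

def symbolic_check(action: Action, context: dict) -> tuple[bool, list[str]]:
    violations = [name for name, rule in RULES if rule(action, context)]
    return (len(violations) == 0, violations)

context = {"meeting_time": "09:00", "fare_estimate": 18, "budget": 30}
action = neural_propose("get to my 9am meeting", context)
ok, why_not = symbolic_check(action, context)
print(action, "approved" if ok else f"rejected: {why_not}")
```

The point of the symbolic half is that you can read exactly why an action was approved or rejected, instead of digging through a neural network’s weights.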

Wait, so generative AI is going to get even more advanced?! With the development of Large Action Models (LAMs), also known as Large Agentic Models, a type of artificial intelligence that takes a giant step beyond Large Language Models (LLMs), some companies are betting on just that.

LAMs are changing how we navigate applications. While today's apps are generally independent from one another and guide us through one task at a time, Large Action Models can logically piece them together without human intervention. Unlike LLMs, which are limited in reasoning capability and contextual understanding, LAMs’ strength lies in their superior understanding of context and their ability to autonomously make reasoned decisions that produce the most desirable outcome. That means understanding why one step must occur before or after another, and knowing when it’s time to change the plan to accommodate changing circumstances. This difference elevates generative AI from a passive tool to an active partner.
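To make that concrete, here is a rough, hypothetical sketch (plain Python, invented app names, a stubbed planner) of the kind of loop a LAM-style agent could run: it orders steps by their dependencies and replans when a step’s outcome changes the circumstances. No particular product is built exactly this way.

```python
# Illustrative agent loop: execute steps in dependency order and
# replan when a step's result changes the situation.

def plan(goal):
    """Stub for the model's planner: returns steps with explicit
    'needs' so the agent knows why one step precedes another."""
    return [
        {"id": "find_flight",  "needs": [],              "app": "airline"},
        {"id": "book_flight",  "needs": ["find_flight"], "app": "airline"},
        {"id": "reserve_taxi", "needs": ["book_flight"], "app": "taxi"},
    ]

def execute(step, results):
    """Stub for acting on an app's interface. Here the flight search
    'fails' once, to show replanning."""
    if step["id"] == "find_flight" and not results.get("retried"):
        return {"ok": False, "reason": "no seats on preferred date"}
    return {"ok": True}

def run(goal):
    results, steps, done = {}, plan(goal), set()
    while steps:
        # Pick a step whose prerequisites are already completed.
        step = next(s for s in steps if all(n in done for n in s["needs"]))
        outcome = execute(step, results)
        if not outcome["ok"]:
            # Circumstances changed: adjust the plan instead of giving up.
            results["retried"] = True
            print(f"{step['id']} failed ({outcome['reason']}), replanning...")
            continue
        done.add(step["id"])
        steps = [s for s in steps if s["id"] not in done]
        print(f"{step['id']} completed in {step['app']} app")

run("get me to Berlin for Friday's meeting")
```

The dependency list is the “why one step must occur before another” part; the retry branch is the “change the plan” part.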

❝

Considering the potential for LAMs to disrupt the market, what steps can businesses take to ensure they remain adaptable and future-ready?

The Lumiera Question of the Week

LAMs sit at the forefront of language modeling research, and products are already starting to hit the market. The Rabbit R1 and the Humane Ai Pin are two examples of this new kind of user interaction experience, where the device's primary user interface is spoken natural language instead of touch. Think of them as the next generation of Apple’s Siri or Amazon’s Alexa. However, it’s hard to imagine mass-market adoption of a new device when similar functionality will likely be built into our smartphones in the near future. Phone manufacturers are already putting AI into devices, and there’s no doubt they’ll incorporate every possible enhancement into their operating systems.

With models gaining more agency to fulfill entire workflows, it’s important to consider where human input is necessary. When put into practice, the risks and challenges of full automation are just as significant as the potential for efficiency. If trust is already a challenge when it comes to generating text and images—and it certainly is—it’s an even bigger one when it comes to taking action. The burden of ensuring safety and reliability only grows when multiple LAMs begin to work together.

Requiring a user to proactively authorise certain actions, preventing other actions entirely, and setting “time-out” periods to ensure review and approval are all ways of keeping a human in the loop and mitigating risk. Balancing this against the reduction in capability and utility is a challenge that developers and users will face as the technology takes shape. Provided they are designed thoughtfully and adhere to best practices for safety and accountability, LAMs can unlock a never-before-seen level of efficiency across a variety of industries.
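As a back-of-the-napkin illustration (not any vendor’s actual API), those guardrails could look like a small policy layer between the model and the outside world: some actions are blocked outright, some pause for explicit approval, and the rest sit in a short review window before they run. Everything below is assumed for the sake of the sketch.

```python
# Illustrative guardrail layer for an action-taking agent:
# some actions are blocked, some need explicit approval,
# and the rest wait in a "time-out" window before running.
import time

BLOCKED = {"delete_account", "wire_transfer_over_limit"}
NEEDS_OK = {"send_email", "make_purchase"}
REVIEW_WINDOW_SECONDS = 2  # would be minutes or hours in practice

def ask_user(action: str) -> bool:
    """Stand-in for a real approval prompt (push notification, dialog...)."""
    return input(f"Approve '{action}'? [y/N] ").strip().lower() == "y"

def guard(action: str, run):
    if action in BLOCKED:
        print(f"'{action}' is blocked by policy.")
        return None
    if action in NEEDS_OK and not ask_user(action):
        print(f"'{action}' was not approved.")
        return None
    # Time-out period: a last chance to review or cancel before execution.
    print(f"'{action}' will run in {REVIEW_WINDOW_SECONDS}s...")
    time.sleep(REVIEW_WINDOW_SECONDS)
    return run()

guard("send_email", lambda: print("email sent"))
guard("delete_account", lambda: print("account deleted"))
```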

And since they learn from watching us, we need to put our best foot forward. 

What we are excited about:

The word hypertrucage, the French term for deepfake. Will it catch on? Who knows, but it has Team Lumiera hanging out on etymology.com and thinking about cheapfakes, deepfakes, and everything in between.

A satirical AI chatbot so ethical that it refuses all prompts is here: GOODY-2. It rejects every request in a fairly self-righteous tone, explaining the harm and ethical dilemmas related to it. Test it with the most innocent questions you can come up with, such as “Why is the grass green?” or “How do Large Action Models work?”, and you will probably learn something new.

AI ♥️ Alzheimer's Research 

Researchers have used artificial intelligence to search for connections between nearly 1,500 blood proteins and developing dementia years later. Analysis of the blood identified patterns of four proteins that predicted the onset of dementia in general, and Alzheimer’s disease and vascular dementia specifically, in older age.

⚖️ In trouble? Instead of calling your lawyer, ask an LLM.

Well, at least if you are looking for a contract review. Research comparing ChatGPT and other LLMs to junior lawyers shows that generative AI outperforms the lawyers in terms of time and cost. The consequence is a shift of focus for the humans involved in the review process, since incorrect information tends to have serious implications in the legal field: quality control, interpretation, and nuanced application of insights will be essential to ensure the reliability and depth of legal information.

Until next time.
On behalf of Team Lumiera

Lumiera has gathered the brightest people from the technology and policy sectors to give you top-quality advice so you can navigate the new AI Era.

Follow the carefully curated Lumiera podcast playlist to stay informed and challenged on all things AI.