The diagnostic potential of AI will continue to develop. Credit – Loic Venance—AFP/Getty Images
At Dreamforce 2024, taking place from September 17 to 19, five speakers and leaders from the artificial intelligence industry will share their thoughts on the most important priorities for the near future.
Edward Norton, Co-Founder and Chief Strategy Officer of Zeck
At a high level, we need something akin to medicine's Hippocratic Oath, which binds physicians to do no harm. Whether that takes the form of regulation or something else is for others to decide, but we need a framework commitment.
I often approach things from a narrative angle, and I’ve always been struck by writer Isaac Asimov’s Robot series, in which he weaves meditations on how societal principles and protections are built into the laws of robotics almost by design. In the same way, we need someone to affirm a fundamental principle for all of us: that AI must do no harm.
On balance, I see far more upsides than any negatives that are actually realized in the phase that we’re in. I think what’s happening in medicine alone should give people a lot of enthusiasm for the positive potential of AI. That’s the area where I’ve seen things that I find truly astonishing and that are going to lead to real revolutions in human health and quality of life for many people.
Take AI in radiology alone: machine learning can already do a much, much better job than human interpretation of cancer screenings. And instead of turning to treatments with little efficacy because we’re throwing darts at a wall, we’re starting to see AI draw tailored, curated, data-driven conclusions about what will benefit an individual rather than a population.
The diagnostic potential of AI, or the interface between diagnosis and treatments that will be effective, combined with genetics, really brings me into a world that I think is very positive.
But we need an ethical foundation for doing no harm. How that is actually structured and expressed, both at a designed, technological level and at a societal, governmental level, is going to be one of the really big questions and challenges of the coming decades.
Jack Hidary, CEO of SandboxAQ
Over the past 20 months, generative AI and large language models (LLMs) have dominated the mindshare of leaders and driven countless innovations. However, C-suite executives and AI experts must look beyond the capabilities and limitations of LLMs and explore the larger, more profound impact that large quantitative models (LQMs) will have on their organizations and industries.
While LLMs focus on our digital world – creating content or extracting insights from textual or visual data – LQMs impact the physical world and the financial services industry. LQMs use physics-based first principles to design new products in industries such as biopharma, chemicals, energy, automotive, and aerospace. They can also analyze large volumes of complex numerical data to optimize investment portfolios and manage risk exposure for financial firms.
LQMs are enabling breakthroughs that seemed impossible 24 months ago, but are now bearing fruit. They are transforming industries and pushing the boundaries of what is possible with AI.
Companies are realizing that they must implement both LQMs and LLMs to realize maximum benefits. If CEOs focus solely on LLM-powered AI solutions for customer service, marketing, document creation, digital assistants, and the like, they will likely fall behind competitors who are using LQMs to transform processes, create innovative new products, or solve computationally complex problems.
Cristóbal Valenzuela, Co-founder and CEO of Runway
Over the next year, our industry will need to reshape the way we talk about AI, both to manage expectations of what progress looks like and to bring smart, creative minds along.
This requires a concerted effort to communicate our vision clearly and remain transparent about our developments. It is important to do this in a way that does not stoke fear or portray these products as anything more than what they are: products.
At Runway, we are building significantly more advanced, accessible, and intuitive technologies and tools for our millions of creative users around the world. Our success and future growth is fueled by the strong community we have built through our work with artists and creators. Understanding their needs and how they approach their craft will always be our priority.
You see this reflected in initiatives such as our annual AI Film Festival, our Gen:48 short film competition and our new Research and Art (RNA) community Q&A sessions.
All these initiatives have provided a platform for artists, which in turn has fueled our growth and mission to support these artists.
Sasha Luccioni, AI and Climate Lead at Hugging Face
I think we need to focus on transparency and accountability, and communicating about the impact of AI on the planet, so that both customers and community members can make more informed choices.
We don’t really have good ways to measure the sustainability or the labor impact of AI. What would be useful is developing new ways to assess how switching from one type of AI tool or approach to another changes the environmental impact.
For example, Google has gone from old-school AI to generative AI summaries for web searches. I think that’s where customers really want more information. They want to know: what do these AI summaries represent in terms of societal and planetary impact? In my research, we found that going from extractive AI to generative AI actually increases energy consumption by 10 to 20 times for the same query.
We can’t ignore new technology. And yet we don’t know how many more computers are needed, how much more energy or water is needed, how many more data centers need to be built so that people can get these AI summaries that they didn’t ask for in the first place.
There’s a lack of transparency there, because a lot of people are climate conscious. And so I think companies have a responsibility to their customers to say, “This is how much more energy you’re using.”
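Luccioni’s 10-to-20-times figure can be made concrete with a back-of-the-envelope calculation. Only the multiplier range comes from the research she describes; the per-query energy figure and the daily query volume below are hypothetical placeholders chosen purely for illustration.

```python
# Back-of-the-envelope estimate of the extra energy from switching
# web search from extractive AI to generative AI summaries.
# The 10x-20x multiplier range is from the research described above;
# every other number here is a hypothetical placeholder.

EXTRACTIVE_WH_PER_QUERY = 0.3          # hypothetical Wh per classic search
MULTIPLIER_LOW, MULTIPLIER_HIGH = 10, 20  # range reported in the text
QUERIES_PER_DAY = 1_000_000            # hypothetical daily query volume

def extra_energy_kwh(queries: int, base_wh: float, multiplier: float) -> float:
    """Additional daily energy (kWh) versus the extractive baseline."""
    extra_wh_per_query = base_wh * (multiplier - 1)
    return queries * extra_wh_per_query / 1000  # Wh -> kWh

low = extra_energy_kwh(QUERIES_PER_DAY, EXTRACTIVE_WH_PER_QUERY, MULTIPLIER_LOW)
high = extra_energy_kwh(QUERIES_PER_DAY, EXTRACTIVE_WH_PER_QUERY, MULTIPLIER_HIGH)
print(f"Extra energy: {low:,.0f} to {high:,.0f} kWh per day")
```

Even with modest placeholder numbers, the multiplier alone turns a rounding error into thousands of kilowatt-hours a day, which is the kind of figure Luccioni argues companies should disclose.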
Robert Wolfe, Co-founder of Zeck
AI has the potential to transform efficiency, saving people time and helping them create audience-focused content.
I see it firsthand in several businesses that I’ve been fortunate enough to work with. Think of a GoFundMe campaign, for example. If AI can help you generate your story in a way that makes your audience more passionate about your cause, that can be monumental for someone raising money for their neighbor.
The biggest fear our customers have at Zeck is creating infographics, charts, and graphs. What a hassle. There’s not a single person in the world who enjoys creating charts and graphs. But Zeck AI looks at your table or your data and suggests, “This might look good as a pie chart,” and creates that pie chart for you. You can choose to accept it, iterate on it, or reject it. And Zeck AI will come up with red flags as you build your story that you didn’t think of yourself. Imagine the time that saves someone who would normally spend hours and hours building everything from scratch. Now it takes minutes. Mind-blowing.
I’m certainly not saying that AI should replace humans, but AI will make everyone more efficient.
Contact us at letters@time.com.