AI for Climate Action: Hype, Harm, or Hope?
By 2040, AI-driven data centers are projected to account for up to 8% of global emissions. Could we use artificial intelligence (AI) to power climate solutions, or is AI fueling a deeper crisis?
On June 5, 2025, an interdisciplinary panel hosted by the HEC Lausanne Research Center on Grand Challenges offered answers to this hotly debated question. In front of a lively audience of UNIL students and faculty, three internationally renowned scientists from computer science, management studies, and environmental science examined whether AI in climate research was a tool for hope or harm. The panel was moderated by Bhargav Srinivasa Desikan, AI and Tech Lead at the Autonomy Institute in the UK and an incoming PhD student at the University of Oxford.
Hope: From prediction to strategic action
AI has been hyped as a transformative tool for climate action. Across their disciplines, the panelists agreed that AI holds promise for advancing research. For instance, with the deluge of data available to train AI models, scholars can now make more informed decisions about everything from species classification to firms’ energy efficiency. “I think we’ve become pretty good over the last few decades at predictive models… to the point where we can do more interesting and useful things,” said Benjamin Rosman, Director of the Machine Intelligence and Neural Discovery (MIND) Institute at the University of the Witwatersrand in South Africa. AI models can integrate more sources of information and engage in complex deliberation, including running numerous counterfactual scenarios that would be impossible to test through real-world experiments.
In response to criticism of AI’s environmental impact, especially that of generative models such as ChatGPT, the panel also highlighted the diversity of models available, ranging from highly resource-intensive algorithms to smaller models known as frugal AI, such as BabyLLMs. “In reality, there are a lot of applications of AI that you can run on your laptop with an energy use that is effectively nothing,” Melissa Chapman, incoming Assistant Professor of Environmental Policy at ETH Zürich, explained. Recognizing that AI is not a monolithic technology may encourage scholars to explore incorporating it into their own projects.
Furthermore, Yash Raj Shrestha, Associate Professor and Group Head of the Applied AI Lab at HEC Lausanne, emphasized the potential of human-AI collaboration, noting that AI could amplify human creativity and enhance decision-making. “We can use AI development to automate some of the [routine tasks] that you are doing, and this enhancement can result in more efficient discovery.” He cited examples from supply chain and manufacturing systems research in which AI helped reduce waste and contributed to sustainability in the long run.
Harm: AI’s big ecological and social footprint
However, the panel cautioned that AI-informed decisions can come at a high price if not used carefully. “[Many of these AI] algorithms amplify bias and cultural norms that are already present in the data,” Shrestha said. “If we use them to augment human decisions, there is a chance that these decisions are not fair.”
Chapman concurred that how AI is trained and used is a political question, as AI-generated policies often repeat historical patterns of exclusion and marginalization. “Beyond the carbon emissions of these algorithms, we have to also think about how these algorithms potentially shift who actually gets the power to not only make decisions, but decide what questions we even ask in the first place. How do we make sure that this power is not consolidated?”
Rosman provided an example of data inequity from his own experience with Lelapa AI, a lab dedicated to building large language models for African languages. “There are over 2,000 languages spoken in Africa, and most of them are considered low-resource languages, which means there is not enough data for these models to be built. If the paradigms we’re using don’t support training these models with the data we have, then we are just excluding huge amounts of people that are some of the most disadvantaged.”
Solutions: Navigating critical trade-offs to minimize harm
The impact of AI on climate action is less a challenge for individual consumers of ChatGPT, Gemini, or Le Chat than for the companies that develop these AI assistants, including OpenAI, Google, and Mistral, and for their regulators. Even as AI models become more efficient, the climate problem is unlikely to disappear. The panel discussed the Jevons paradox: the phenomenon in which increased efficiency leads to increased consumption. Chapman explained, “Historically, improving efficiency tends to increase absolute resource use, not reduce it.” The panel stressed the need for broader systemic changes, including adopting sustainable or degrowth economic models alongside technological advancements, to ensure genuine progress.
Shrestha raised the need to shift from short-term thinking to long-term vision. “A lot of thinking [in AI development] right now is for the next month, or next year, and so on. We need to think long term when we think about the planet.” The panel emphasized the importance of shifting away from short-term competitiveness—both among companies and countries—to sustainable, long-term goals that consider the potential for dramatic adverse consequences that AI could cause, including job loss, geopolitical conflict, and climate disasters.
Several critical policy directions were proposed to increase corporate accountability for the climate impact of AI use and development. Suggestions included regulations requiring AI companies to internalize environmental costs. “If we can tax oil and gas, we can tax computers. Tech shouldn’t get a free pass because it’s shiny,” moderator Srinivasa Desikan noted. The panelists also called for greater transparency through clear, mandatory disclosures of AI’s environmental impacts, specifically energy and water usage, to ensure accountability.
During the Q&A, the keen-eyed audience echoed these concerns, stressing the need for stringent regulation of AI’s rapid expansion given its environmental harms. Together, scholars and students suggested leveraging unions and civil society to manage AI’s risks effectively. “There should be less of a gap between tech development and actual implementation,” Chapman said. “The systems are currently built by developers in a way that is disconnected from policymakers, decision makers, and stakeholders.” According to her, a user-centered way of developing AI would lead to more effective and grounded decisions.
The next step forward: Interdisciplinary collaboration
One of the defining features of grand challenges is that their causes, effects, and implications spill over from one discipline to another; thus, they cannot be solved without coordination across fields. As perhaps the grandest challenge of our time, climate change requires that scholars be open to ideas from other disciplines, including AI. “The way we’re trained typically is very much in silos and thinking from one perspective,” Shrestha said. If we want to harness AI’s potential while minimizing its harms, it is imperative that academics acknowledge and incorporate insights from other areas into their own research.
In the final minutes, Rosman put it succinctly: “This is a time when we need as many people as possible to be engaged in thinking about these things. That often involves looking outside of your own discipline and trying to find tools in other places.” The panel was part of a workshop on Climate Change & AI funded by the Canadian Institute for Advanced Research (CIFAR) and the Swiss National Science Foundation, organized by Benjamin Rosman (University of the Witwatersrand) together with Élise Devoie (Queen’s University), Guillaume Dumas (Université de Montréal), Christof Brandtner (emlyon business school), and Patrick Haack (HEC Lausanne). Quotes have been lightly edited for clarity.
Axelle Miel
Axelle Miel is a Predoctoral Research Fellow in Organizational Sociology and Social Innovation at emlyon business school. She is interested in how organizations shape culture and public policy, particularly in developing regions. She holds a bachelor’s degree in political science and music from Duke University.
Christof Brandtner
Christof Brandtner is an Associate Professor of Social Innovation at emlyon business school, a Fellow in CIFAR’s Innovation, Equity, and the Future of Prosperity program, and co-founder of the Civic Life of Cities Lab. His research examines the emergence, diffusion, and implementation of organizational practices and policies aimed at making communities more sustainable and prosperous. Christof holds a PhD in sociology from Stanford University and was previously a postdoctoral scholar at the University of Chicago’s Mansueto Institute for Urban Innovation.