Artificial Intelligence Models Produce Higher Carbon Emissions When Responding to Complex Prompts

Published Date: 19 Jun 2025

A recent study has shed light on the significant environmental impact of artificial intelligence, particularly of chat-based generative AI models responding to complex prompts. The research found that these models can produce up to six times more carbon emissions when tackling abstract subjects such as algebra or philosophy than when answering simpler topics such as high school history. This finding has important implications for the development of more sustainable AI technologies.

The study, which evaluated 14 large language models, including those that power chatbots, revealed that models with reasoning capabilities produce far more 'thinking' tokens per question than concise models that provide one-word answers. On average, reasoning models generated 543.5 tokens per question, whereas concise models required just 37.7. Tokens are the units of text, whole words or word fragments, that a model processes and generates when responding to a natural-language prompt, and a higher token count means more computation and therefore more carbon dioxide emissions.
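To make the token-to-emissions relationship concrete, the following minimal Python sketch converts the article's average token counts into rough per-question emissions estimates. Only the token counts come from the article; the per-token energy figure and grid carbon intensity are illustrative assumptions, not values reported by the study.

```python
# Back-of-the-envelope estimate of per-response CO2 from token counts.
# The per-token energy figure and grid carbon intensity below are
# illustrative assumptions, NOT measurements from the study.

AVG_TOKENS = {"reasoning": 543.5, "concise": 37.7}  # averages cited in the article

ENERGY_PER_TOKEN_WH = 0.002      # assumed watt-hours of GPU energy per generated token
GRID_INTENSITY_G_PER_KWH = 480   # assumed grams of CO2e per kWh of electricity


def co2_grams(tokens: float) -> float:
    """Convert a token count into an estimated CO2e figure in grams."""
    energy_kwh = tokens * ENERGY_PER_TOKEN_WH / 1000.0
    return energy_kwh * GRID_INTENSITY_G_PER_KWH


for style, tokens in AVG_TOKENS.items():
    print(f"{style:9s}: {tokens:6.1f} tokens -> {co2_grams(tokens):.3f} g CO2e per question")

# Whatever the exact constants, emissions scale linearly with token count,
# so a ~14x difference in tokens implies a ~14x difference in emissions.
```

Because the relationship is linear in the token count, the specific constants only shift the absolute numbers; the ratio between reasoning and concise models stays the same.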

Among the models tested, the reasoning model Cogito was the most accurate, answering nearly 85 percent of questions correctly. That accuracy came at a cost: Cogito produced three times more carbon dioxide emissions than similarly sized models that generated concise answers. This trade-off between accuracy and sustainability is a significant challenge for the development of AI technologies, and optimizing reasoning efficiency and response brevity is crucial for advancing more sustainable models.

The study's findings highlight the importance of considering the environmental impact of AI models, particularly as they become increasingly ubiquitous in various aspects of our lives. The researchers noted that none of the models that kept emissions below 500 grams of CO2 equivalent achieved better than 80 percent accuracy. This points to a clear accuracy-sustainability trade-off inherent in current large language model technologies, and developers must balance these competing demands when designing AI systems.
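The reported trade-off can be pictured as a simple filter over (emissions, accuracy) pairs: under a 500-gram budget, no model clears the 80 percent bar. The sketch below uses hypothetical per-model figures purely for illustration; the model names and numbers are not from the study.

```python
# Illustration of the accuracy-sustainability trade-off described above.
# The (emissions, accuracy) pairs are hypothetical placeholders, not the
# study's actual per-model results.

models = [
    {"name": "reasoning-large", "co2_g": 1300.0, "accuracy": 0.85},
    {"name": "reasoning-mid",   "co2_g": 600.0,  "accuracy": 0.81},
    {"name": "concise-mid",     "co2_g": 220.0,  "accuracy": 0.72},
    {"name": "concise-small",   "co2_g": 40.0,   "accuracy": 0.55},
]

BUDGET_G = 500.0  # emissions budget from the article (grams of CO2e)

# Keep only models under the budget, then report the best accuracy among them.
within_budget = [m for m in models if m["co2_g"] < BUDGET_G]
best = max(within_budget, key=lambda m: m["accuracy"])
print(f"Best accuracy under {BUDGET_G:.0f} g CO2e: "
      f"{best['name']} at {best['accuracy']:.0%}")
```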

The implications of this research are far-reaching: developing more sustainable AI models is essential for limiting the environmental footprint of these technologies. By optimizing reasoning efficiency and response brevity, particularly for challenging subjects such as abstract algebra and philosophy, researchers can create more environmentally conscious AI systems without sacrificing the accuracy that makes them useful.

In conclusion, the study underscores the significant environmental cost of artificial intelligence models when they respond to complex prompts. As these technologies continue to advance, weighing accuracy against emissions, through more efficient reasoning and briefer responses, will be central to building AI systems that maintain high accuracy while minimizing their carbon footprint.