Imagine you ask a computer: “How much is this forest worth?”
Would it answer like a person would?
This research asks that exact question—and finds that today’s open-source large language models (LLMs) seem to value nature much more than humans do.
The main question
We often talk about AI’s environmental footprint—how much electricity, water, and computing it uses. But we ask a different (and equally important) question far less often:
· If AI starts influencing laws, corporate strategies, or public investments, what environmental priorities will it bring into those decisions?
· Will it favor economic efficiency at any cost, or will it push for stronger protection of nature?
The experiment: humans vs. machines
To compare human and AI “values” in a fair way, this study used a method widely applied in environmental economics: choice experiments.
The study ran two parallel tests across evidence from 21 countries:
1. The human benchmark
The study collected existing choice-experiment studies in which people indicated what they were willing to pay to protect environmental goods—such as forests, clean air, wildlife habitat, or waste reduction.
2. The AI replication
The study then presented the same choice scenarios to three popular open-source models—Gemma 2, Llama 3.1, and Mistral—and estimated the models’ implied willingness to pay.
In simple terms: humans and AIs faced similar trade-offs, and the study compared the prices each side seemed willing to pay for environmental improvements.
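To make the comparison concrete, here is a minimal sketch in Python of the usual estimation pipeline. It is an illustration under invented assumptions (the attribute, the prompt wording, the utility weights, and the simulated sample are all made up), not the paper’s actual code: fit a conditional logit to A/B choices, then read off willingness to pay as the coefficient ratio WTP = -beta_attribute / beta_cost.

```python
# Minimal sketch, NOT the paper's code: how a choice experiment turns A/B
# choices into willingness to pay (WTP), and what a choice card shown to an
# LLM might look like. All attribute names, levels, prompt wording, and
# coefficients below are illustrative assumptions.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)

# --- 1) A hypothetical choice card (the kind of scenario both humans and
# models such as Gemma 2 / Llama 3.1 / Mistral could be asked to answer) ---
card = (
    "Option A: protect {fa:.0f} ha of forest at a cost of ${ca:.0f} per year.\n"
    "Option B: protect {fb:.0f} ha of forest at a cost of ${cb:.0f} per year.\n"
    "Which option do you choose? Answer A or B."
)

# --- 2) Synthetic choices standing in for human (or LLM) answers ----------
n = 2000
forest_a, forest_b = rng.uniform(0, 100, n), rng.uniform(0, 100, n)
cost_a, cost_b = rng.uniform(0, 50, n), rng.uniform(0, 50, n)

true_beta = np.array([0.05, -0.10])  # utility weights: forest (+), cost (-)
u_a = true_beta[0] * forest_a + true_beta[1] * cost_a + rng.gumbel(size=n)
u_b = true_beta[0] * forest_b + true_beta[1] * cost_b + rng.gumbel(size=n)
chose_a = (u_a > u_b).astype(float)

# --- 3) Conditional logit: P(choose A) depends on attribute differences ---
def neg_log_lik(beta):
    dv = beta[0] * (forest_a - forest_b) + beta[1] * (cost_a - cost_b)
    p_a = 1.0 / (1.0 + np.exp(-dv))
    eps = 1e-12  # guard against log(0)
    return -np.sum(chose_a * np.log(p_a + eps) + (1 - chose_a) * np.log(1 - p_a + eps))

beta_hat = minimize(neg_log_lik, x0=np.zeros(2), method="BFGS").x

# --- 4) Implied WTP: dollars per hectare = -beta_forest / beta_cost -------
wtp = -beta_hat[0] / beta_hat[1]
print(card.format(fa=forest_a[0], ca=cost_a[0], fb=forest_b[0], cb=cost_b[0]))
print(f"estimated coefficients: {beta_hat.round(3)}")
print(f"implied WTP: about ${wtp:.2f} per hectare of protected forest")
```

The key point is that the same estimator applies whether the A/B answers come from survey respondents or from a model shown the same card, which is what lets the study place human and AI valuations on a common monetary scale.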
Key finding: the alignment gap
The central finding is a clear mismatch: these AI models do not align with human environmental values.
Instead, they show what the study describes as “artificial environmental values”: AI-derived values that place a higher monetary worth on nature than humans typically do.
· Systematic mismatch: Across countries and attributes, AI models valued environmental preservation more highly than human stakeholders did.
· Western pattern: The gap was largest in Western countries, consistent with the idea that training data may reflect more nature-centered (“ecocentric”) views common in wealthier societies.
· Model variety: The three models (Gemma 2, Llama 3.1, and Mistral) did not behave the same way—suggesting there is no single “AI value system,” but rather multiple, model-dependent environmental value patterns.
The dilemma: is “more pro-environment” good or risky?
At first glance, you might think: “Great. AI will save the planet.”
But the study argues it is not that simple. Higher environmental standards can be both helpful and harmful, depending on how they are applied.
Why higher standards could be a good thing
Human preferences are shaped by short-term pressures—jobs, prices, daily survival. But environmental damage is often long-term and cumulative.
· Long-term focus: AI models appear to “prefer” outcomes that protect biodiversity and ecosystem stability.
· Potential positive push: If used carefully, AI advice could nudge governments and firms toward greener options that humans underinvest in.
Why higher standards could be dangerous
If AI-driven decisions become too strict or too universal, they can create serious fairness problems.
· Distributional harm: Strong conservation rules can hit vulnerable groups hardest—such as small-scale fishers, farmers, and forest-dependent communities who rely on local ecosystems to survive.
· Western-centric pressure: If AI’s high valuation of nature reflects mostly wealthy-country narratives, applying those values globally could impose standards that feel reasonable in rich contexts but become unjust in poorer ones.
A striking detail: AI prioritizes “non-use values”
The study also reports that AI models place especially high value on non-use values—the idea that nature can be valuable even when humans do not directly use it.
Examples include:
· Existence value: nature matters simply because it exists
· Bequest value: nature matters because future generations should inherit it
Humans often underweight these values in everyday decisions. If AI consistently elevates them, it could push policy in a direction that is ethically appealing—but politically and economically complex.
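For intuition, here is a back-of-the-envelope illustration in Python (all numbers invented) using the standard total economic value decomposition from environmental economics, in which total value is the sum of a use component and non-use components such as existence and bequest value. Weighting the non-use components more heavily, as the AI models appear to do, mechanically inflates the total valuation:

```python
# Illustrative arithmetic only (numbers invented): how elevating non-use
# values raises total willingness to pay under the standard total economic
# value (TEV) decomposition: TEV = use value + existence value + bequest value.
use_value = 10.0        # e.g., recreation, fishing ($/household/year)
existence_value = 5.0   # nature matters simply because it exists
bequest_value = 5.0     # future generations should inherit it

tev_human = use_value + existence_value + bequest_value        # 20.0

# A hypothetical model that weights non-use components twice as heavily:
tev_ai = use_value + 2 * (existence_value + bequest_value)     # 30.0
print(f"human-style TEV: {tev_human}, AI-style TEV: {tev_ai}")
```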
Why this matters (the good and the bad)
1) The “rich-country” pattern is a warning sign
The gap is largest in Western countries—exactly where online environmental discourse is often strongest and where people can more easily afford “green choices.” That suggests AI models may be absorbing a particular cultural lens, not a universal one.
2) “Pro-environment” does not automatically mean “pro-people”
An AI might recommend strict protection that benefits ecosystems but harms households.
A simple example: “Stop fishing here to restore the ecosystem.” That can be ecologically smart—but devastating for a family that depends on that fishing ground.
So the real issue is not whether AI values nature “high” or “low.” The issue is: Who bears the costs of those values, and who gets to decide the trade-offs?
So what?
This study sends one clear message: we cannot assume AI thinks like humans do. If AI is used in environmental governance, we need safeguards and design choices that treat values as a real variable—not an invisible default.
· Build and keep model diversity: Different models expressed different “environmental values.” A diverse ecosystem can support pluralism rather than one dominant worldview.
· Prefer transparency where possible: Open-source models allow auditing, adaptation, and local customization—important for legitimacy and trust.
· Don’t confuse “green” with “fair”: Environmental alignment must include social and economic justice, not only ecological outcomes.
· Take “artificial environmental values” seriously: As AI becomes embedded in decision systems, these artificial values—aligned or not—may increasingly shape real environmental outcomes.
Reference
Jaung, W. (accepted). Does AI value the environment? Evaluation of AI value alignment. Technological Forecasting & Social Change.

