The Family Debate

It started, as many philosophical tangents do, in the middle of an ordinary lunch.

Out of nowhere, my kids launched into a debate about whether dolphins or octopuses were more intelligent — a question that, somehow, had never surfaced before but immediately took on the gravity of a courtroom trial.

Trying to break the tie, I reached for Bing and asked its built-in AI copilot to weigh in.

That’s when I was told I was being unethical.

Apparently, by using a large language model, I was complicit in environmental harm. These systems, my kids explained, consume enormous amounts of electricity and water for cooling — and anyone using them, even to answer a question about marine intelligence, was acting unethically.

I didn’t have a quick rebuttal. Partly because I was caught off guard, and partly because they had a point. Yes, AI systems are resource-hungry. They do consume water, electricity, and rare minerals. But as the conversation unfolded, I realized we weren’t really talking about AI. We were talking about what it means to live ethically in a technological age.

What struck me most was that my kids seemed to see this technology as uniquely immoral — as though humanity had suddenly crossed a new, unforgivable line. They were judging AI through the lens of today’s environmental awareness, without recognizing the long arc of technological history. Every generation has built tools that reshaped the environment, often with devastating consequences: the plow scarred the soil, the steam engine blackened the sky, the microchip mined the earth. Yet somehow, those past disruptions are now invisible — forgiven, even forgotten — while AI bears the full weight of ethical outrage.

They were treating this invention as the one, as though all previous revolutions had been necessary steps toward civilization, while this one was a moral regression.

But if we take that argument to its logical conclusion — that using any technology that consumes energy is unethical — then nearly everything that defines modern life would have to go. We’d have to give up smartphones, refrigeration, transportation, and even the lightbulb above the table. To be truly “ethical,” we’d have to retreat to the Stone Age — and perhaps even that wouldn’t suffice, since fire itself changed the climate long before the combustion engine.

We cannot exist without impact. The question, then, is not whether our tools change the world — they always have — but whether we change the world well. Whether we act as stewards or spoilers, creators or consumers.

Technology has always been an amplifier of the human spirit — for good or ill. The printing press spread both scripture and propaganda. Splitting the atom gave us both energy and annihilation. And now, AI offers both possibility and peril. The ethical question is not, “Should we use it?” but “How should we use it?”

“Every generation condemns its tools before it learns how to wield them.”

It’s easy to call the new thing dangerous. Harder to see how it might help us become better caretakers of the world we already endanger.

The Pattern of Progress: Every Revolution Has Its Cost

Let’s dig deeper into the history.

This conversation about AI isn’t new — it’s just the latest chapter in an old story. Every age believes its inventions are unprecedented, its moral dilemmas unique, and its environmental costs intolerable. Yet each generation forgets that technological revolutions have always carried both promise and peril.

The Agrarian Revolution transformed the wilderness into farmland. It fed civilizations, but at the cost of deforestation, soil erosion, and the first extinction events caused by human hands. We learned to bend nature to our will, and in doing so, began to break it.

The Industrial Revolution harnessed coal and steam to lift humanity out of poverty and into cities, but it also filled the skies with smog, choked rivers with waste, and ignited our dependence on fossil fuels. It was the first time human ambition visibly scarred the planet — and yet it was also the age that gave us sanitation, medicine, and mass literacy.

The Digital Revolution promised to dematerialize progress — a world of information instead of industry. But even that ethereal dream runs on metal and silicon. Behind every smartphone lies a trail of rare-earth mining, toxic e-waste, and endless upgrades feeding a global appetite for connection.

And now, the AI Revolution — the Fourth Industrial Revolution — stands at our doorstep. Data centers hum like modern factories, their servers drinking water and burning electricity to feed the cloud. The new pollution is not smoke but heat; not soot but invisible energy draw.

Still, it is the same moral rhythm, the same old debate in a new dialect.

As Science Wars reminds us, every advance in knowledge forces us to confront the question of whether our new powers serve truth or hubris, wisdom or control. The conflict between discovery and restraint has shadowed human progress from the time of Galileo to the birth of the internet — and now to AI.

We’ve always stood at this crossroad: one path toward stewardship, the other toward domination. And yet, progress itself is not the villain. The plow, the press, the pixel — none of these are inherently moral or immoral. They reflect the hands that wield them.

“Man is a tool-using animal. Without tools he is nothing, with tools he is all.” — Thomas Carlyle

Technological progress is never ethically neutral — but it is also never wholly unethical. It is, rather, a mirror. What we see in that reflection depends on how willing we are to act as caretakers, not conquerors.

What the Data Say: AI’s Energy Appetite

When someone claims “AI is unethical because it consumes energy,” they’re not wrong — but they’re also not telling the whole story. To understand the moral weight of that claim, we need to see how AI’s energy demands compare, and why those demands matter in the broader context.

Recent Studies & Key Metrics

In the 2025 paper How Hungry Is AI?, the authors analyze inference-level energy consumption across 30 commercial large language models. They find that a short query on a model like GPT-4o consumes about 0.42 Wh, while more energy-intensive models (e.g. “o3” or “DeepSeek-R1”) can reach 33 Wh for a long prompt.

Extrapolating to scale: at 700 million GPT-4o queries per day, annual electricity demand would be comparable to the annual consumption of 35,000 U.S. homes.

The same study estimates water evaporation (for cooling) equivalent to the annual drinking-water needs of 1.2 million people.
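
These scale-ups are easy to sanity-check. Here is a minimal back-of-the-envelope sketch in Python; the per-query energy, query volume, and household figures are assumptions drawn from the paper’s short-query estimate and a rough U.S. average, so treat the output as illustrative rather than authoritative. (The short-query figure alone works out to roughly 10,000 homes; the study’s larger 35,000-home figure presumably reflects a mix that includes longer, costlier prompts.)

```python
# Back-of-the-envelope scale-up for LLM inference energy.
# All constants are assumptions for illustration, not measured values.

QUERIES_PER_DAY = 700_000_000  # assumed daily query volume (the paper's scenario)
WH_PER_QUERY = 0.42            # assumed energy per short query, in watt-hours
HOME_KWH_PER_YEAR = 10_500     # rough U.S. average household consumption

daily_kwh = QUERIES_PER_DAY * WH_PER_QUERY / 1_000  # Wh -> kWh
annual_kwh = daily_kwh * 365
homes_equivalent = annual_kwh / HOME_KWH_PER_YEAR

print(f"Daily inference energy:  {daily_kwh:,.0f} kWh")
print(f"Annual inference energy: {annual_kwh / 1e6:,.1f} GWh")
print(f"Equivalent U.S. homes:   {homes_equivalent:,.0f}")
```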

On infrastructure: a recent study of U.S. data centers reports that they now account for more than 4% of total U.S. electricity use, with a carbon intensity ~48% higher than the national average because of reliance on fossil sources. (arXiv)

These are large numbers — but large numbers have a context, and context shapes ethics.

Training vs. Inference: The “When” Matters

One of the frequent blind spots in debates over AI’s environmental cost is conflating training consumption with inference consumption.

Training a large model (e.g. GPT-4) requires massive compute clusters, weeks or months of continuous operation, and thus constitutes a major upfront energy investment.

Inference (i.e. answering a user’s question) is far lighter — though when multiplied by millions or billions of queries, it becomes significant.

In ethical terms, the “cost per use” is often what people care about — and in many models, that cost is modest compared to the training overhead, once the model is deployed.
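
To make that amortization concrete, here is a small sketch with purely hypothetical numbers (neither the training figure nor the lifetime query count is a published measurement for any real model):

```python
# Amortizing a one-time training cost over a model's lifetime of queries.
# All figures are hypothetical placeholders, not published measurements.

TRAINING_MWH = 50_000          # hypothetical one-time training energy, in MWh
LIFETIME_QUERIES = 500e9       # hypothetical queries served over the model's lifetime
INFERENCE_WH_PER_QUERY = 0.42  # assumed per-query inference energy, in Wh

training_wh_per_query = TRAINING_MWH * 1e6 / LIFETIME_QUERIES  # MWh -> Wh
total_wh_per_query = training_wh_per_query + INFERENCE_WH_PER_QUERY

print(f"Amortized training energy per query: {training_wh_per_query:.2f} Wh")
print(f"Inference energy per query:          {INFERENCE_WH_PER_QUERY:.2f} Wh")
print(f"Total per-query energy:              {total_wh_per_query:.2f} Wh")
```

With assumptions in this ballpark, the one-time training cost shrinks to a fraction of a watt-hour per query, and the cumulative inference load, not training, ends up dominating the ledger.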

LLMs Are Resource-Heavy — But So Are Many Other Technologies

Let’s sharpen the comparison: yes, LLMs draw energy and water. But they don’t stand alone. Other digital services and technologies carry similar (or even greater) burdens:

Cloud computing & data centers in general are already a huge load on power grids.

Streaming video, gaming, social media, IoT (Internet of Things) — all these cumulatively demand bandwidth, servers, cooling, infrastructure.

Cryptocurrency and blockchain often come up in these debates because of their notoriously high energy use.

So the question becomes: is AI’s marginal burden disproportionately high? And does its benefit (or its ability to reduce other burdens) justify that margin?
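
One way to frame that question about marginal burden is to line up rough per-use figures side by side, as in the sketch below. The streaming number is an assumption (published estimates vary widely depending on methodology); the LLM figures come from the study cited above.

```python
# Rough per-use energy comparison across digital activities.
# All values are illustrative assumptions, not measurements.

ACTIVITIES_WH = {
    "Short LLM query (assumed ~0.42 Wh)": 0.42,
    "Long-prompt LLM query (assumed ~33 Wh)": 33.0,
    "One hour of HD video streaming (assumed ~80 Wh)": 80.0,
}

SHORT_QUERY_WH = 0.42

for activity, wh in ACTIVITIES_WH.items():
    ratio = wh / SHORT_QUERY_WH
    print(f"{activity}: equivalent to ~{ratio:,.0f} short queries")
```

By these admittedly rough numbers, one short query costs far less than an hour of streaming; the moral weight comes from multiplying small costs by billions of uses.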

The Rise of the Personal Computer and Electricity Use

A fair question: how much did electricity usage change when the personal computer became widespread? That’s hard to pin down as a clean “X% increase attributable to PCs” figure, because many factors were at play (economic growth, the electrification of households, and so on). But we can glean a few clues:

In the U.S., as early as the 1990s, desktop and laptop computers were estimated to contribute about 2% of national electricity consumption in the residential sector, and about 3% in the commercial sector. (American Scientist)

Earlier estimates made more speculative claims (e.g. that “internet + PC infrastructure” would soon consume 8% of electricity), but later, more grounded measurements revised that figure down to ~2%. (WIRED)

The broader trend is that from 1980 onward, energy use from small electronics and “miscellaneous loads” in homes more than doubled — though this includes many devices, not just PCs. (ENERGY STAR)

So we can say: the PC era corresponded to a perceptible bump in electricity consumption (driven particularly by the commercial/office sector) — but it didn’t immediately or singularly overwhelm the grid. Instead, its impact was absorbed over decades, alongside broader growth in electrification, appliances, air-conditioning, and digital services.

Thus: AI’s energy draw is not a unique break in history, but it might be a sharper slope. The difference now is scale, speed, and cumulative effect.

The Overlooked Outrage: Wireless Technology and Wildlife

Here’s where selective outrage becomes revealing.

If we were truly consistent in our environmental concern, the ecological effects of wireless communication would top the list of moral debates.

Some research over the past decade suggests that electromagnetic radiation (EMR) from cell towers, Wi-Fi, and 5G networks can disrupt bees and birds — species vital to the planet’s balance.

Bees use Earth’s magnetic field for navigation; radio-frequency radiation can interfere with this sense, causing disorientation and colony collapse.

A 2014 study on migratory robins found that ambient EMR disrupted their magnetic compass, and shielding the birds restored proper navigation.

Behavioral studies report increased aggression, altered enzyme activity, and reduced return-to-hive rates in bee populations near active cell towers.

With the rollout of 5G, the density of transmitters has increased dramatically — expanding both the reach and intensity of this invisible interference.

Yet, where is the outrage?

Where are the protests demanding we switch off the Wi-Fi, dismantle 5G towers, or trade our smartphones for smoke signals?

There are none — because that would inconvenience us.

This is the uncomfortable truth: selective outrage is not ethical purity; it’s virtue signaling. It’s easier to condemn a new technology — especially one others are excited about — than to question the comforts of our own daily habits.

If we ignore the magnetic disorientation of bees but rage against the water-cooling of data centers, our concern isn’t really about the planet. It’s about moral performance — a desire to appear righteous without genuine sacrifice.

That kind of moral inconsistency tells us something about our cultural psychology: outrage is cheap; stewardship is costly.

“Hypocrisy is the homage that vice pays to virtue.” — François de La Rochefoucauld

AI, like every major tool before it, deserves scrutiny. But let’s make sure our scrutiny is guided by reason, not posturing. Because if we care about the planet, we must care about all the ways we harm it — not only the trendy ones.

The Counterbalance: AI as a Tool for Ecological Repair

If every technology bears a cost, the ethical question must always be: does it give back more than it takes?

AI, despite its heavy footprint, is already proving to be one of the most powerful tools ever developed for environmental stewardship. The same systems accused of draining power grids are also helping us understand — and perhaps reverse — the damage caused by every previous revolution.

Consider how AI is being used today:

  • Climate modeling: Machine learning models process terabytes of atmospheric data to refine weather predictions and simulate climate futures, helping communities prepare for floods, droughts, and wildfires.
  • Energy optimization: Neural networks manage smart grids, balancing renewable inputs with demand, minimizing waste, and forecasting consumption more precisely than any human system ever could.
  • Agriculture: AI-driven irrigation systems detect soil moisture in real time, reducing water use by up to 40%. Drones and sensors use computer vision to detect crop disease early, limiting pesticide use and increasing yield with less land.
  • Wildlife conservation: AI-powered drones track endangered species, analyze poaching patterns, and monitor deforestation from orbit. Some systems can even identify the sounds of illegal chainsaws deep in rainforests.

This is what The Science of Information called the “transformability” of information — its capacity to take on new forms and new purposes without losing its essence. The bits that once powered advertising algorithms can just as easily drive conservation analytics. The medium isn’t inherently moral or immoral — it’s a vessel. The ethics depend on how we fill it.

AI’s real promise lies not in replacing human judgment, but in augmenting human stewardship. It can help us see complex systems too vast or too subtle for the unaided mind — climate patterns, ocean currents, energy flows, and ecological feedback loops. In that sense, AI is not a threat to our moral agency but an invitation to use it more wisely.

Of course, this doesn’t excuse recklessness. If anything, the greater our technological reach, the greater our responsibility to direct it well. Stewardship demands awareness of both consequence and potential — to account for our harms while harnessing our tools for healing.

We are, in a sense, the first species capable of modeling our own destruction in real time — and perhaps preventing it.

“The measure of a civilization is not how much power it wields, but how wisely it uses the power it has.” — Adapted from Lord Acton

Ethical reflection must evolve beyond rejection to redemption — using our intelligence, human and artificial, to mend the world we’ve already altered.

That’s the moral test of our century: not whether we can stop technology from changing the world, but whether we can make sure it changes the world for the better.

The Ethical Tension: Stewardship vs. Purity

Every discussion about technology eventually runs aground on a moral reef — the belief that to be ethical is to be pure. That to act responsibly, we must leave no trace, no cost, no scar. It sounds noble. But it’s an illusion.

We cannot live in the world and be untouched by it.

Every breath, every meal, every keystroke alters the planet in some small way. Even the most careful hermit leaves footprints. The ethical question, therefore, isn’t whether we cause impact — but whether our impact is guided by love, wisdom, and restraint.

This obsession with purity often masquerades as virtue. It’s the impulse to denounce rather than discern, to distance oneself from guilt instead of participating in repair. It feels moral because it’s clean — it requires no responsibility once you’ve declared the thing “bad.”

But stewardship, by contrast, is messy. It means staying in the tension — accepting that progress and imperfection coexist, and that moral growth happens not through withdrawal, but through engagement.

As Steve Hassan’s Influence Continuum reminds us, healthy systems empower people to think critically and make informed choices, while unhealthy ones demand obedience, guilt, and purity. The same applies to our moral reasoning about technology. When we frame ethics as an all-or-nothing purity test, we edge toward authoritarian thinking — a kind of ecological fundamentalism that condemns inquiry rather than guides it.

The true ethical posture is stewardship: acknowledging our influence and choosing to wield it with humility. That means questioning, measuring, correcting — but not retreating. It means using technology consciously, designing it thoughtfully, and accounting for its costs with honesty, not denial.

“There is no such thing as immaculate perception.” — Marshall McLuhan

The call of stewardship is not to remain unstained, but to keep our hands busy in the work of repair.

Because purity demands distance; stewardship demands presence.

And the world doesn’t need more distant critics — it needs present caretakers.

The Blind Spots: What Critics Miss

To be fair, not all criticism of AI’s environmental footprint is misplaced. Some of it is well-grounded — and necessary. Every revolution blinds its creators to its externalities. We should never be so enamored with innovation that we stop counting the cost.

The problem isn’t criticism itself. The problem is selective criticism — outrage aimed at symbols rather than systems. If we want an honest ethical reckoning, we need to face all the facts, not just the fashionable ones.

So, in fairness, let’s look at what AI’s critics are rightly warning us about.

Scale and Speed

AI’s expansion is not gradual — it’s exponential.

Data centers are multiplying at unprecedented rates. Tech giants are pouring billions into infrastructure, adding hundreds of megawatts of power demand each month. This growth far outpaces the sustainable energy infrastructure needed to support it.

What took the Industrial Revolution a century to achieve, AI could replicate in a decade. That velocity magnifies risk.

Carbon Intensity

Most AI data centers still rely on grids powered largely by fossil fuels.

A Harvard-led study found that U.S. data centers produce over 100 million tons of CO₂e annually, with a carbon intensity 48% higher than the national average. So even if AI is more efficient per task, its aggregate footprint grows faster than the rate of decarbonization.

Water Use and Local Strain

Cooling those massive server farms requires staggering volumes of water — sometimes drawn from regions already suffering drought.

In Arizona, for instance, one hyperscale data center consumes hundreds of thousands of gallons a day. In places like Wisconsin and Iowa, new AI facilities are projected to draw more electricity than some nuclear plants can generate.

These aren’t abstract ethical puzzles — they’re ecological ones. Local ecosystems and communities bear the brunt.

Noise, Land Use, and Human Impact

The ethical footprint of AI extends beyond electrons and molecules.

Data centers generate constant low-frequency noise, altering local soundscapes. They occupy land once used for farms or forests. And they often arrive in small towns with little transparency or public input. For those communities, the question isn’t philosophical — it’s personal.

Social Blind Spots

There’s another layer often missing from the conversation: the human cost embedded in the digital one.

Behind the sleek interfaces and intelligent chatbots are supply chains for lithium, cobalt, and rare earth minerals — mined under exploitative conditions in some of the world’s poorest regions.

Add to that the human “ghost work” of labeling datasets, moderating toxic content, and cleaning up the byproducts of machine learning. These workers absorb the ethical residue of our convenience.

The Real Lesson

Being fair to the criticism doesn’t weaken the case for technology — it strengthens it.

To dismiss concerns outright would be as dishonest as claiming purity. The ethical path, as always, lies between denial and despair.

The goal isn’t to condemn AI or to canonize it.

It’s to humanize it — to ensure that our use of intelligence, artificial or otherwise, reflects empathy, transparency, and foresight.

If stewardship means anything, it means facing the uncomfortable truth that every act of creation carries a shadow. But shadows only exist where there is light.

“We do not inherit the Earth from our ancestors; we borrow it from our children.” — Native American proverb

Fair criticism is not our enemy — it’s our compass. And if we listen carefully, it doesn’t tell us to stop building. It tells us to build better.

The Stewardship Principle

If there’s a single thread that weaves through every technological revolution — from the plow to the microchip — it’s this: progress without stewardship becomes destruction.

The question isn’t whether we should build, but whether we can be trusted with what we build.

Ethics, in this light, is not a list of prohibitions but a discipline of responsibility. It’s the recognition that power — whether physical, digital, or artificial — must be guided by conscience. And conscience, unlike outrage, asks us to stay involved.

Stewardship means more than environmental care; it’s a worldview. It sees creation — both natural and technological — as something entrusted to us, not owned by us. It doesn’t demand purity, but accountability. It calls us to measure, mitigate, and manage impact with intention and humility.

In the Christian tradition, stewardship is rooted in the idea that humanity was placed in the garden to cultivate and keep it — to work with creation, not against it. That command hasn’t changed; only the tools in our hands have. The same moral logic applies to the digital garden we’re cultivating now.

Science itself, as James Hannam reminds us, was born in that soil — not in defiance of faith, but because of it. Medieval Christians studied the natural world because they believed it was intelligible, ordered, and good. The universe was not chaos but cosmos, and to understand it was an act of reverence.

That same belief in intelligibility — that the world can be known, modeled, improved — underlies AI today. We are, in an odd way, continuing that lineage of stewardship through code. The danger is not in the knowledge itself, but in forgetting what the knowledge is for.

AI can model our climate, optimize our grids, and reveal hidden inefficiencies — but it can’t teach us to care. That’s still our work. The machine can predict; it cannot love.

“Science without religion is lame, religion without science is blind.” — Albert Einstein

To be ethical in the age of AI is not to recoil from technology but to redeem it — to bend intelligence toward compassion, efficiency toward justice, and innovation toward sustainability.

Stewardship, at its heart, is the art of living in tension: between progress and preservation, between creation and consequence. It doesn’t seek to end the story of technology, but to ensure that the story continues — not as tragedy, but as testament.

We don’t need fewer tools; we need wiser hands.

Reframing the Debate

We often frame the wrong question.

When people say, “Is AI ethical?”, they imagine ethics as a verdict — a binary judgment handed down by some invisible moral authority. But that question, by itself, ends the conversation before it begins.

The better question is, “How can we make AI ethical?”

That shift changes everything. It moves us from spectators to stewards, from critics to participants. It reminds us that ethics is not an external rulebook but an evolving relationship between our values and our inventions.

We have walked this path before. When the printing press spread misinformation as easily as scripture, we didn’t abandon books — we built literacy and accountability. When electricity brought both light and danger, we didn’t renounce it — we learned safety, regulation, and public trust.

The same logic applies now.

Instead of demonizing AI, we should be asking:

  • How do we power data centers sustainably — through renewables, geothermal, and water recycling?
  • How do we encourage transparency in model training and lifecycle emissions?
  • How can we build AI for ecological benefit, not just profit — optimizing energy grids, agriculture, and transportation?
  • How do we protect the dignity of workers who sustain the digital economy — from miners to moderators?

These are questions of governance, not guilt.

AI doesn’t need a moral banishment; it needs moral architecture — systems of accountability that scale as quickly as the technology itself.

To get there, we need more than engineers; we need ethicists, theologians, educators, and citizens. The future cannot be left solely to coders and corporations. The conversation about AI ethics belongs at kitchen tables, in classrooms, and in city councils.

In Science Wars, Steven Goldman noted that the struggle over knowledge has always been a struggle over power — over who gets to decide what counts as truth and who benefits from it. The same is true here.

AI is not simply a technical question; it’s a human one. Its ethical shape will reflect the moral maturity of the societies that wield it.

“The greatest danger is not that machines will begin to think like men, but that men will begin to think like machines.” — Sydney J. Harris

Reframing the debate means shifting from fear to formation — cultivating citizens who understand both the risks and the responsibilities of intelligent systems.

Because ethics is not a fence that keeps progress out. It’s a framework that helps us build it well.

Progress with a Conscience

That lunchtime debate with my kids has stayed with me.

Not because they were wrong, but because they reminded me of something essential — that every generation inherits both the benefits and the burdens of the ones before it. Their unease wasn’t really about AI; it was about trust. Could we be trusted to learn from history, or would we repeat it again in digital form?

Their challenge made me grateful — and uneasy. They were holding me, and our generation, accountable for the world we’ve built. And they were right to.

But what I wish they could see — what I hope they’ll come to understand — is that progress and conscience are not enemies. We don’t have to choose between innovation and integrity, between intelligence and empathy. The challenge is to hold both.

If humanity has a defining trait, it isn’t strength or speed or even intelligence. It’s creativity — the capacity to imagine better futures and build toward them. Every tool we’ve made, from the stone chisel to the silicon chip, reflects that restless spirit. AI is simply the next instrument in that ancient symphony.

The question isn’t whether the music is dangerous. Of course it is. Fire burns; flight crashes; code corrupts.

The question is whether we will use our tools to destroy or to heal, to exploit or to elevate, to dominate or to serve.

“We are as gods, and might as well get good at it.” — Stewart Brand

We cannot live without impact, but we can live with intention.

We can choose to build machines that restore as much as they consume.

We can teach our children that ethics isn’t abstinence — it’s alignment: the effort to bring our power under the discipline of compassion.

So, no — using AI isn’t unethical.

Ignoring its consequences is.

And so is refusing to see its potential for good.

Because the moral challenge of our age is not to stop creating, but to create consciously.

“Do not be overcome by evil, but overcome evil with good.” — Romans 12:21

If we can do that — if we can make conscience the companion of progress — then perhaps, one day, my kids will look back and say: They didn’t get everything right, but at least they tried to build wisely. And maybe that’s enough.

Excerpt

Artificial intelligence isn’t inherently unethical — its impact depends on how we use it. From the Industrial Revolution to AI’s data centers, every tool reshapes the world. The challenge isn’t purity but stewardship: learning to guide progress with conscience, compassion, and responsibility.
