I want to start posting about AI. I’ve been working with generative AI tools—ChatGPT, Anthropic’s Claude, and Midjourney for image generation—and I want to write about what I’ve learned. But before I do that, I need to be absolutely clear about my position on AI itself.

If you disagree with what I’m about to say so fundamentally that you can’t stomach reading about my actual use of these tools, then my suggestion is simple: keep those thoughts to yourself. I have zero interest in debating whether or not to use AI. I’ve already put considerable thought into this decision, and this post exists to explain my reasoning—not to invite arguments about it.

So let’s begin.

The Climate Problem (And Yes, I See It)

We are in the middle of a global warming crisis. Full stop. We as a species are producing chemicals, pollution, and atmospheric damage that are changing our climate in observable, undeniable ways. Whether our government acknowledges this or not, the empirical evidence is everywhere. You can see it with the naked eye—the warming trend in our yearly weather cycles is real, it’s not natural, and it’s caused by humans.

There’s serious debate about whether we can even stop it or if we’re already past the point of no return. And I’ve said many times that we need to do everything we can to reverse the climate change we’ve caused. This belief drives how I vote, because I don’t think anyone we’ve put in office understands the ecological damage we’re causing.

The AI Paradox

Here’s where it gets complicated: the technology that supports AI—the hardware, the electricity, the sheer manufacturing and data-center infrastructure—is having a considerable and undeniable negative effect on global warming. AI is actively making climate change worse.

This is directly at odds with my decision to use AI, and I acknowledge that openly. I’m choosing to use AI for specific tasks while knowing it contradicts my political and scientific views on climate.

But here’s what I believe on top of that.

We’re in the early stages of AI. We don’t really know what it’s good for yet, but we know we’re onto something. If you take a rational look at what’s available and at how fast this technology is advancing, there will come a point where we stop applying it to dumb stuff, because doing so stops being economically useful.

We’re going to get better at figuring out how to apply this tool. Over time, we’ll not only use it for things that actually make sense—we should be able to use it to help solve problems we haven’t been able to solve yet, including climate change itself.

I’m not making some science fiction argument that artificial general intelligence will save us all. What I’m saying is that we can get these tools to a point where we can apply them to help save what’s left of what we can save.

Because frankly? Nothing else is working.

I’m not saying AI should solve all our problems, but any tool we have at our disposal to try to fix this crisis is something we should explore—even if in the early stages it’s detrimental. At this point, we are so thoroughly screwed that I don’t know what else to try. We certainly don’t have viable options through traditional avenues.

So I can hold these two thoughts at once: AI is making climate change worse today, and AI will hopefully be a net positive for humanity in the near future, rather than just a complete drain on resources used to produce advertising and make money for the 1%. The sum will be positive, not negative.

The Other Arguments (And Why They Don’t Move Me)

I know there are other arguments against AI. Let me address the main ones, bearing in mind that these don’t factor into my moral calculation the way climate does.

Copyright. This notion that AI scrapes the web and reproduces published works in new ways? That’s what humans do. That’s what humans have always done. Every bit of human knowledge comes from regurgitating something we saw somewhere else in a new way. Generative AI does the same thing, just faster, because it’s a computer and that’s what computers do.

Are they “stealing” works I’ve published online? Sure. Do I feel bad about that? Yes. Do I think it’s wrong? Absolutely not. This isn’t a valid argument for me to avoid using AI.

Job displacement. This one hits closer to home. The worry goes like this: we replace junior software developers with AI—using it to crank out boilerplate code instead of having juniors practice and learn—and then suddenly we have no experienced engineers, because no one grew from junior to senior.

You’re right. That’s going to happen. But here’s the thing: the tool existing isn’t the problem. How we choose to apply it is the problem.

It’s like saying hammers make carpenters too efficient, so they should go back to using rocks. The solution isn’t to ban the hammer.

I see uninformed leaders saying “we can replace people with AI” without thinking it through. But my preference—and why I choose to use AI—is not to replace people entirely, but to augment what people can already do. It’s a completely different mindset, and I think we can make a better moral argument for that kind of application.

The Community Problem

Here’s what really gets me: in the tech community, people are vocal about AI, but it’s mostly negative. Like Amazon reviews—you only see the complaints. You don’t see people threading the needle of practical, moral AI use. If anyone is, they’re doing it quietly or ambiguously.

I see things that make me sad too. Call centers replacing humans with AI systems that don’t even work well. Companies firing employees to replace them with robots that satisfy no one’s needs. That’s terrible—it’s not serving customers or treating employees well.

But here’s what disturbs me: there’s such an outcry against the technology itself instead of an outcry for good applications of AI.

If the people screaming “shut it down, AI is bad, anyone who uses AI is the devil” could flip their argument to say “use AI for altruistic purposes that advance humanity”—if they could advocate for moral applications instead of just condemning the tool—maybe people in tech who are randomly using this stuff might think twice about how they’re applying it.

Because right now all they hear is “shut it down.” They’re not seeing the applications that could solve problems that were out of reach five years ago.

I follow respected technologists online who either won’t touch the topic or only point out how bad everything is. Maybe there’s value in shaming companies for poor AI application, but this general “let’s shut down all AI” position? That’s not where thoughtful people should land.

It’s upsetting to see technologists I respect argue we shouldn’t use a tool simply because the tool can be (and sadly too often is) wielded incorrectly.

Where I Stand

Look, you can make your own moral arguments. Maybe you care deeply about copyright impacts—I respect that opinion even if I disagree. Maybe you think the ecological cost outweighs any benefit—I can respect that calculation too, since I struggle deeply with that argument myself.

But I’ve made these specific choices. I’ve rationalized them—use that word however you want. And I’m asking not to be questioned on it.

This is my opinion on AI as it stands today. I’m open to changing my mind when I find evidence that warrants it, but I’m not asking for that evidence right now. This is what I’ve decided, how I’m going to behave, how I’m justifying it to myself.

If you agree, great. If you can at least accept it for what it is even if you don’t agree, I’d appreciate that. If you don’t agree, then leave me alone. I don’t want to hear about it.

I’m probably going to set up social media filters to avoid AI discourse entirely. That’s hard, because there’s value in those conversations, but I need to stop hearing opinions that won’t actually affect how AI gets used or benefit anything I care about—like leaving this planet habitable for future generations.

What Comes Next

From here, I’m going to write about how I’m learning to apply AI. I’m sure people will say it sounds like a waste of a city’s worth of water and electrical resources for my “little text-based game toys.”

It’s how I learn. It’s how people learn.

Hopefully those interested can put aside questions about whether this is morally correct and come along to explore what can be done, where we might apply this technology better, and how we can make it good for humanity.

Because here’s the thing: AI is a tool that many engineers are choosing not to learn today because of moral objections. But if we’re going to apply this technology in ways that make it genuinely good for humanity—the future these engineers seem to want—then where are those engineers in helping drive that future?

You can’t shape what you refuse to understand.