Navigating the AI Dilemma: How Responsible Technologists Can Build a Greener Digital World 🚀💡
In the fast-paced world of technology, AI tools are no longer a novelty; they are a mainstream reality. But what happens when the drive for innovation clashes with the urgent need for environmental sustainability? This is the core dilemma facing a growing number of software engineers, a challenge expertly unpacked by Gaël Duez of Green.io and Natasha Ann Lum, co-founder of Rack Tech podcast and a software engineer for the Singaporean government.
The Ubiquitous Rise of AI in Software Engineering 🤖📈
AI’s integration into software development is undeniably widespread. A recent survey by the Pragmatic Engineer newsletter revealed that a staggering 95% of software engineers use AI tools at least weekly, with 75% leveraging AI for at least half of their software work. This trend includes sustainability-aware engineers, who, despite their awareness of AI’s environmental footprint, find themselves increasingly reliant on these powerful tools.
The stakes are high. The International Energy Agency projects that data center electricity consumption will roughly double by 2030. Beyond energy, generative AI tools add significantly to data centers' overall environmental footprint through carbon emissions, water consumption, and heat generation. This creates a daily tension for engineers: how do we harness AI’s immense potential without compromising our values?
Navigating the Personal Dilemma: Natasha’s Approach 👩💻✨
Natasha Ann Lum, with nearly a decade of coding experience spanning academia (digital humanities) and public service, embodies this dilemma. She codes daily, using AI tools extensively. She sees AI not just as a productivity booster—making her a “10x productive person”—but also as a valuable aid for brainstorming, organizing disheveled ideas, and acting as a sounding board in a technical niche.
Natasha utilizes AI tools across various levels of autonomy:
- Level 1 (Chatbot Style): Bouncing ideas off tools like ChatGPT or Claude, using them to structure scattered thoughts while keeping the conversation history as a record.
- Level 2-3 (Coding Agent): Deploying AI agents for specific tasks, and even for goal-oriented autonomy where a broad objective is given, and the AI determines the necessary steps to achieve it.
She is tool-agnostic, optimizing for token efficiency and effectiveness on the task at hand. She uses Claude for work and experiments with editors such as Windsurf and Cursor, which allow toggling between different models.
The Elephant in the Room: Environmental Costs & The Dream 🌍💔
Natasha dreams of using AI coding tools “guilt-free, with no environmental cost from irresponsibly run data centers.” This ideal world would allow engineers to boost productivity without the heavy burden of environmental harm.
What stands between us and this dream? Natasha identifies two critical factors:
- Regulation: A larger systemic issue that often feels beyond individual control.
- Tech Literacy: The ability to understand how technology works, critically assess its impact, and advocate for change.
She stresses that people, residents and technologists alike, hold real power and a voice. Communities have successfully halted data center projects, proving the point. The challenge lies in overcoming learned helplessness and ensuring that governments and the public fully understand the hidden land, water, and energy costs of data centers.
Beyond AI Autonomy: The Power of Intentional AI & Automation 🛠️⚙️
As software practitioners, we can actively push back against irresponsibly built technology. Natasha advocates for a “constructive skeptic” mindset, questioning whether AI is truly the most effective and optimized solution for every problem.
Her core philosophy, which she calls Intentional AI, prioritizes automation over agent autonomy.
- Problem: AI models are probabilistic. Relying on them for tasks like secret scanning on every commit consumes tokens each time, multiplying compute usage and environmental impact.
- Solution: Use AI once to generate a mechanical automation tool (e.g., a regex-based secret scanner). This tool then runs automatically with every commit without consuming additional AI tokens or compute. This approach shifts from AI being the system to AI building the system.
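As a concrete sketch of this "AI builds the system" pattern, the snippet below shows what a one-time, AI-drafted secret scanner might look like. The patterns and the `scan` helper are hypothetical illustrations, not a production scanner, but once written they run mechanically on every commit with zero token spend.

```python
import re

# Patterns an AI assistant might draft once (illustrative, not exhaustive).
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID shape
    re.compile(r"-----BEGIN (?:RSA|EC|OPENSSH) PRIVATE KEY-----"),
    re.compile(r"(?i)\b(?:api[_-]?key|secret|token)\b\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
]

def scan(text: str) -> list[str]:
    """Return secret-like substrings found in text. Purely mechanical: no model calls."""
    hits: list[str] = []
    for pattern in SECRET_PATTERNS:
        hits.extend(match.group(0) for match in pattern.finditer(text))
    return hits
```

Wired into a pre-commit hook or CI step, `scan()` checks every staged file deterministically, whereas asking a model to do the same review would burn tokens on each commit.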
This concept extends beyond coding. Instead of repeatedly using a chatbot to clean up emails, use AI to create a system that automatically parses and corrects common patterns (e.g., replacing exclamation marks with full stops). This leverages AI’s power for initial setup, then relies on efficient, non-AI-driven automation for recurring tasks.
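The email example can be sketched the same way. Assuming a small, AI-drafted rule table (the rules below are illustrative stand-ins), a plain function then cleans recurring patterns without any model call:

```python
import re

# Rewrite rules an assistant could propose once; applied mechanically thereafter.
CLEANUP_RULES = [
    (re.compile(r"!+"), "."),          # exclamation marks -> full stops
    (re.compile(r"[ \t]{2,}"), " "),   # collapse runs of spaces/tabs
    (re.compile(r"\b(?i:asap)\b"), "as soon as possible"),
]

def clean_email(body: str) -> str:
    """Apply each rule in order; deterministic, token-free text cleanup."""
    for pattern, replacement in CLEANUP_RULES:
        body = pattern.sub(replacement, body)
    return body
```

The design choice mirrors the secret-scanner case: spend AI effort once on drafting the rules, then amortize it over every future email at zero marginal compute.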
Natasha also highlights the crucial role of CI/CD pipelines (Continuous Integration/Continuous Deployment). While she notes that mass adoption of these best practices is still ongoing, especially in organizations with legacy systems, they are more vital than ever. CI/CD’s mechanical testing capabilities provide the necessary guardrails against the probabilistic nature of AI-generated code, ensuring quality, security, and performance.
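As a minimal sketch of such a mechanical guardrail, the test below pins a fixed contract onto a hypothetical `slugify` function (both the function and its contract are invented for illustration). Whether a human or an AI agent wrote the implementation, the CI pipeline rejects any version that breaks the contract:

```python
import re

def slugify(title: str) -> str:
    """Function under test; in practice this body might be AI-generated."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def test_slugify_contract():
    # The contract is deterministic, regardless of how the code was produced.
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  Green IO  ") == "green-io"
    assert slugify("") == ""
```

Run under a CI test step, assertions like these give probabilistic code generation a hard, repeatable pass/fail boundary.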
Rethinking Compute: Decentralization & Localization 🌐🏘️
Natasha strongly advocates for the decentralization of compute and localization of AI infrastructure.
- Challenge: The current model centralizes compute and AI infrastructure in a few major cloud providers (Microsoft Azure, AWS, Google Cloud), creating an oligopoly. This concentration lets these companies reap oligopoly profits without fully compensating society for negative externalities, as cross-border regulation remains a challenge.
- Solution: Relocate computing resources. If a model can run locally on a personal computer or an on-premise server, the energy cost transfers directly to the user, eliminating hidden costs. This echoes moves toward local production in agriculture and manufacturing.
- Benefits:
- Data Sovereignty: Countries can protect their citizens’ and government’s data from being technically owned by foreign companies.
- Cultural Relevance: Localizing AI models and training them on country-specific data can prevent Western-leaning biases in responses, ensuring AI aligns with local ethics, culture, and morality. This is a significant concern for countries like Singapore, which are heavily pushing for AI adoption but lack their own frontier models.
Embracing the Uncomfortable Truth: A Call to Action 🗣️❤️
Natasha’s final message is powerful and thought-provoking: allow yourself to sit in an uncomfortable space where two truths can exist at once.
- Truth 1: AI offers incredible productivity, cognitive enhancement, and access to learning.
- Truth 2: AI carries very real harms, including the potential to diminish critical thinking, deplete societal resources, and enable unethical data mining and exploitation of the global South.
It is not hypocritical to both use and criticize AI; it is brave. By embracing this duality, we become constructive skeptics, uniquely positioned to understand the technology and advocate for solutions. This call extends beyond tech—nurses, educators, and professionals from all walks of life must engage with AI, experiment with its benefits, and simultaneously apply a critical lens to its potential harms and ethical implications.
This nuanced approach is essential for building a truly greener, more responsible digital world, one mindful decision at a time.
To learn more about Natasha’s work, you can find her podcast, Rack Tech, on YouTube, Instagram, LinkedIn, and via email. Gaël Duez encourages everyone to continue building a greener digital world together, one byte at a time.