Just Because You Can Doesn’t Mean You Should: The Ethical Illusion of AI Power
The Rise of Capability Culture
We used to marvel at new tools. Now, we expect them. When a new model is released, the first response is excitement about what it can do, not whether it should exist. Companies compete to launch faster, smarter, and more autonomous systems. Capabilities requiring deep scientific knowledge now arrive as plug-and-play APIs and downloadable code.
Meta’s Voicebox, unveiled in 2023, demonstrated voice cloning from a three-second sample (Le et al., 2023). ElevenLabs built commercial offerings on similar technology. What once felt like magic now exists as a product tier. The shift is cultural. Power has become mundane.
Yet cultural normalization does not equal ethical legitimacy. When synthetic voices convince mothers that their children are in danger or deepfake images destroy reputations, we often respond with resignation. But whose decision was it to accept these risks? What frameworks were in place to assess them? Who authorized the public release of this capability?
No one. Because no one was required to ask.
The Fallacy of Justification
Supporters of deepfake and voice cloning technologies often point to emotionally compelling edge cases. They speak of restoring voices for people with ALS, recreating lost dialects, or protecting whistleblowers. They reference virtual assistants adapted for accessibility or localization tools that eliminate language barriers.
These use cases are real, and some are genuinely moving. But they are also rare, and they are used as moral cover for tools that have overwhelmingly been deployed for entertainment, commercial gain, or manipulation.
The existence of a handful of legitimate applications does not justify the global release of high-risk capabilities. In medicine, we do not argue that because insulin saves lives, fentanyl should be sold over the counter. We regulate risk, not potential. Yet in the AI world, we invert that logic: if a technology can do something good, we treat that as permission enough to release it to everyone.
This is not ethical innovation. It is deflection dressed up as progress.
The Problem of Access
The greatest danger with AI capability is not simply what it can do but how easily it can be accessed and misused. Open-source language models, voice synthesis libraries, and deepfake kits do not require expert knowledge. With access to a GPU and some curiosity, a teenager can recreate tools once reserved for the military.
When high capability meets low friction and no oversight, the result is not innovation; it is systemic risk. In cybersecurity, we understand this. A vulnerability is not dangerous in theory; it is dangerous in practice because of its exploitability.
AI today is designed for exploitability. Whether it is used to impersonate a colleague, defraud a bank, harass individuals, or destabilize public discourse, the technical barrier has collapsed. Large language models fine-tuned for malicious instruction are already circulating (Brundage et al., 2018). Voice cloning is being used in extortion scams. Deepfakes dominate non-consensual pornography (Crawford, 2021).
None of this is surprising. It is a direct consequence of releasing powerful tools without meaningful restraint.
Ethics by Optics
In this climate, ethics has become performance. Technology companies publish responsible AI guidelines, host safety workshops, and conduct red-team evaluations. But these practices rarely delay product releases or result in decisions not to deploy. As long as ethics is treated as a function of communication rather than development, it remains decorative.
As Greene, Hoffmann and Stark (2019) argue, the institutional embrace of AI ethics often lacks substance. It becomes a way to signal care without changing behavior. Ethics becomes the glossy brochure handed out at the launch event for something that might never have been built had genuine ethical reflection been present.
Momentum matters. The faster AI systems are normalized, the harder they are to challenge. Surveillance capitalism, as described by Zuboff (2019), did not emerge from a central plan. It evolved through millions of unchecked product choices. AI is following the same trajectory, only faster, and with fewer constraints.
Restraint as Security
What would it look like to say no?
In cybersecurity, restraint is foundational. We build firewalls, apply the principle of least privilege, isolate systems, and test for failure. We deny access, not because we are pessimistic but because we understand the risks of exposure. We build with limits in mind.
AI needs the same logic. A voice cloning model capable of replicating anyone from three seconds of audio is not neutral. It is an attack vector. A deepfake tool able to produce photorealistic impersonations is not art. It is an escalation in identity fraud. These are not hypothetical risks. They are documented realities.
We must stop conflating capability with legitimacy. Some systems should not be released, some tools should not be open source, and some features should never become normal.
This is not fear. It is foresight.
A Cultural Reset
True ethics begins not with permission but with refusal. Developers must be trained not just to build models but to walk away from them. Product teams must have the authority to cancel a feature, not just release one. And funders must support efforts to build safe infrastructure, not just novel capabilities.
We need a culture that values restraint, not as a delay to progress but as the foundation of trustworthy technology. In AI, the greatest act of innovation might not be the next breakthrough but the one we choose not to pursue.
AI is not fate. It is designed. And we make the decisions, whether we admit it or not.
The technologies we unleash into the world reflect not only what we can do, but what we are willing to accept. We can clone voices, simulate people, and deceive at scale. That power is here. But if we do not stop to ask whether we should use it, someone else will use it without asking at all.
The most dangerous systems are not malicious. They are indifferent. And the most dangerous culture is not cruel. It is casual.
We are surrounded by power. The only question left is whether we are brave enough not to use it.
References
Le, M., Vyas, A., Shi, B., Karrer, B., Sari, L., Moritz, R., Williamson, M., Manohar, V., Adi, Y., Mahadeokar, J. and Hsu, W.-N., 2023. Voicebox: Text-guided multilingual universal speech generation at scale. arXiv preprint arXiv:2306.15687. Available at: https://arxiv.org/abs/2306.15687 [Accessed 10 Apr. 2025].
Brundage, M. et al., 2018. The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. arXiv preprint arXiv:1802.07228. Available at: https://arxiv.org/abs/1802.07228 [Accessed 10 Apr. 2025].
Crawford, K., 2021. Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven: Yale University Press.
Greene, D., Hoffmann, A.L. and Stark, L., 2019. Better, nicer, clearer, fairer: A critical assessment of the movement for ethical artificial intelligence and machine learning. Proceedings of the 52nd Hawaii International Conference on System Sciences, pp.2122–2131.
Zuboff, S., 2019. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York: PublicAffairs.
