
Unleashing Pandora’s Box: When AI Shouldn’t Be Transparent

Funso Richard
3 min read · Jun 18, 2023


I am casting my vote against a position that has been buzzing around the AI community lately: disclosing to the general public what goes into developing AI models and systems. I am well aware of the value of transparent AI. Transparency in AI development, deployment, and adoption is a goal that must be pursued to ensure AI is designed for social good.

For those who find my position atypical, I’m not a lone wolf on this issue. OpenAI’s chief scientist and co-founder, Ilya Sutskever, has this to say about AI, “These models are very potent and they’re becoming more and more potent. At some point it will be quite easy, if one wanted, to cause a great deal of harm with those models. And as the capabilities get higher it makes sense that you don’t want to disclose them.”

Let’s face it: nobody is okay with handing a loaded gun to a toddler to play with. Even the most ardent supporters of AI transparency would agree that disclosing too much information about AI development is exactly that. It may be entertaining for a while, but it is unlikely to end well.

Transparency in AI is undoubtedly a core value that should be promoted. However, revealing too much detail about how AI systems are developed raises legitimate concerns. This conundrum is known…



Written by Funso Richard

AI Pragmatist, Ethicist & GRC Thought Leader. I write about governance, risk, cybersecurity, and strategy to help organizations minimize business risks.
