AI is Coming, But What Culture Will Influence It?
The questions surrounding AI development are heavy on the why, the how, and the should; so what about the culture that AI will be operating from? Naturally the designers will be reflected in it, but when something as complex as culture meets the brave new world of AI, there are questions to answer for both fields.
William Michael Carter writes an interesting piece in The Conversation, “How to Get Culture Right When Embedding It Into AI.” The whole thing is a great read, including this excerpt:
What happens when these very unique ecosystems begin to communicate with each other? How will norms and values be determined as the various AI entities begin to exchange information and negotiate realities within their newly formed cultures?
MIT’s Norman, an AI personality based on a fictional psychopath, produced a singular example of what we have long known in humans: With prolonged exposure to violence comes a fractured view of cultural norms and values. This represents a real danger to future exposure and transmission to other AI.
How so?
Envision Norman and Alexa hooking up. Both AIs are representative of the people who made them, the human data that they consume and a built-in need to learn. So whose cultural values and norms would be more persuasive?
Norman was built to see all data from the lens of a psychopath, while Alexa as a digital assistant is just looking to please. There are countless human examples of similar personalities going awry when brought together.

Social scientists argue that the debate over AI is set to explode and, as a result, that multiple versions of AI are bound to co-exist.
As philosophers, anthropologists and other social scientists begin to voice their concerns, the time is ripe for society to reflect on AI’s desired usefulness, to question the realities and our expectations, and to influence its development into a truly pan-global cultural environment.
Read the whole piece at The Conversation, and comment below.