With so much talk about Artificial Intelligence lately, it got me thinking about all of the consequences that come along with AI. AI is probably the biggest buzzword of the last few years in the business world. Every company has it, every product offers it, and everyone wants it. But who is responsible for it? Many will probably list the big tech companies like Microsoft, OpenAI, Amazon, or Google.
However, the question that I find most interesting to try and answer is: who is to blame? Who should be at fault when something goes wrong? What information are we using to help machines make those decisions?
These were some of the last questions that I asked my class during their fall course (which I know they were not too thrilled about, given it was the day of their final).
If you haven’t heard, years ago a website was created to answer just such questions.
Moral Machine – https://www.moralmachine.net/
Welcome to the Moral Machine! A platform for gathering a human perspective on moral decisions made by machine intelligence, such as self-driving cars. We show you moral dilemmas, where a driverless car must choose the lesser of two evils, such as killing two passengers or five pedestrians. As an outside observer, you judge which outcome you think is more acceptable. You can then see how your responses compare with those of other people.
It didn’t save her, it saved me… The robot’s brain is a difference engine. It was reading vital signs; it must have calculated that I was the logical choice. I had a 45% chance of survival. Sarah only had an 11% chance. That was somebody’s baby. 11% is more than enough. A human being would have known that…

— Detective Del Spooner, I, Robot (2004)