“Siri, suicide, and the ethics of intervention”

We may want Siri to stop people from searching for ways to hurt themselves or others, says Bosker, but there's an underlying ethical question: do we want her interfering with our right to access information, or with our ability to make personal decisions, such as legally buying a gun for target practice?

The issue then becomes one of free will and moral decision-making. "When Siri provides suicide-prevention numbers instead of bridge listings, the program's creators are making a value judgment on what is right," says Jason Bittel at Slate. Are we really okay with Siri making moral decisions for us, asks Bittel, especially when her "role as a guardian angel is rather inconsistent"? Siri, for instance, will still gladly direct you to the nearest escort service when you ask for a prostitute, and when asked for advice on the best place to hide a body, "she instantly starts navigating to the closest reservoir," Bittel adds.

While it's great that Siri may be saving people's lives, we may be heading down a slippery slope of deciding what we can and cannot search. "There are all sorts of arguments for why the internet must not have a guiding hand — freedom of speech, press, and protest chief among them," says Bittel. "If someone has to make decisions based on what's 'right,' who will we trust to be that arbiter?" Man or machine?

via The Week.

Seems to me a bit unreasonable to hold a machine accountable for how mentally ill people use it.