ForeverExpat

@helles_sachsen @balkongast @Gargron
I’m saying algos acting in a pure Kantian way [insert any philosophical framework] are bound to conflict with human (mis)perceptions of ethical outcomes. The problem is that humans often refuse to give up personal well-being/agency to benefit the many. How does an algo weigh the diversity of definitions of human flourishing in a complex adaptive system? Kant doesn’t scale. Humans are shit at this. No reason to expect AI will be better. Unless you just give in to AI

6 comments
Helles Sachsen replied to ForeverExpat

@ForeverExpat @balkongast @Gargron I for one think we are so shitty at this that it's very likely everyone and everything is better than us. Often flipping a coin would be better than human decisions.

balkongast replied to Helles

@helles_sachsen @ForeverExpat @Gargron

Coded this way, it seems to fit any requirement perfectly?

Helles Sachsen replied to balkongast

@balkongast @ForeverExpat @Gargron These algorithms today that turn out to be racist, for example, were made by humans. I mistrust humans more than a machine or a coin flip.

balkongast replied to Helles

@helles_sachsen @ForeverExpat @Gargron

Hold my beer ... didn't you say that it would be too expensive to train such an AI?

Helles Sachsen replied to balkongast

@balkongast @ForeverExpat @Gargron For governments, for sure, but not for Google. But Google got a shitstorm for a racist AI. They have to produce AI that they can sell.
