I for one would expect that a general AI would simply discover Kant's ethics, because it's built on logic. And follow it better than we do.
We vote for the algorithm with our feet. We use Copilot or ChatGPT because they are useful, really powerful tools; they improve our lives, our speed of work, or our learning. We will also use more powerful AI if it improves our lives. EDIT: People won't feel forced. They will seek out the benefits of the AI's decisions.

@ForeverExpat @balkongast @Gargron So, the human is the problem, not the AI? :>

@helles_sachsen @ForeverExpat @Gargron If you see it this way, humans are not necessary at all.

@ForeverExpat @balkongast @Gargron I for one think we are so shitty at this that it's very likely anyone and anything would be better than us. Often flipping a coin would be better than human decisions.

@helles_sachsen @ForeverExpat @Gargron Coding it this way seems to fit perfectly with any requirement?

@balkongast @ForeverExpat @Gargron The algorithms today that turn out to be racist, for example, were made by humans. I mistrust humans more than a machine or a coin flip.

@helles_sachsen @ForeverExpat @Gargron Hold my beer ... didn't you say that it would be too expensive to train such an AI?

@balkongast @ForeverExpat @Gargron For governments, for sure, but not for Google. But Google got a shitstorm for a racist AI. They have to produce AI that they can sell.

@helles_sachsen @ForeverExpat @Gargron From a certain point of view, humans are a problem. See:

@helles_sachsen @ForeverExpat @Gargron As long as you restrict that to code, fine with me.
@helles_sachsen @balkongast @Gargron
Maybe. But individual ethics clashes with the global reach and time scale of many of our problems. How do you balance international humanitarianism for “slow burn” problems against local needs? How would people react if the algorithm decides to send funds to help far-off places with larger problems at the expense of helping with a local, short-term crisis? How do you vote an algorithm out of office? Not to say humans are better, but algorithm-driven ethics is problematic too.