@drq Well, it depends on the objective.

If we want to just replace humans at the wheel - make the autopilot work just like a real pilot, only more efficiently - then yes.

If we want AI to make better decisions, we have to admit that some human decisions were inherently wrong simply because of our nature. So if an AI is capable of generating a different solution, and that solution works (in terms of logic), it should be preferred, just to see whether it is more successful (even if it seems unnatural and is therefore rarely tried by human operators). And the AI should learn from these practical results.

The possible downside of this approach is the image above. :)