What happens when a situation that the agent hasn’t been trained on occurs?
Will it have the critical thinking to understand the situation and make the right decision, or will it just hallucinate some impossible answer and get people killed?
I'm not saying human controllers don't make mistakes, but this should be one of the last areas to fully automate.