In a landmark victory for the program that will surely become Skynet, an AI has defeated a trained fighter pilot in an aerial dogfight. It's understandable to be somewhat concerned about things like Google Home listening to everything that goes on in your house, but those kinds of concerns are quaint compared to computers that can beat us humans at murder. Fortunately, this was only a contest and not a real act of coding-on-soldier violence.

As we march toward a future more greatly influenced by the advancement of artificial intelligence, we'll also continue to see more cause for concern. Self-driving cars are the elephant in the room when it comes to AI, given how their eventual release to the public will fundamentally change how people travel day-to-day. On a physically smaller, but no less impactful scale, AI has also begun to take a larger hand in our entertainment. Services continue to use AI for ad placement, TV show recommendations, and playlist building, all while developing profiles of our likes and dislikes whether or not we permit it.

Related: Here's An AI-Generated David Attenborough Reading Reddit Threads

AI is apparently even starting to get the better of our military, according to the result of a recent dogfighting competition. Defense One reports on a competition in which DARPA (which is real and not just a thing you know from Metal Gear Solid) arranged for several teams of defense contractors to compete in a series of trials with AI-piloted planes facing off in a simulation. The finale of the event pitted the leading AI fighter against a human pilot wearing a VR headset and using a flight stick designed for PC games. The result was a 5-0 win for team robots.

The US Military Is Pouring Lots of Work Into AI Pilots

Ender with the colonel in Ender's Game

The competition's finale makes for an amusing headline, but the story leading up to that moment reveals the herculean effort required to train a computer to fly. The report explains that teaching the AI to aim and shoot is easy, but piloting is a process that requires lots of judgment calls and risk management. An AI, for example, doesn't have the life experience that tells a human pilot crashing into the ground is not an acceptable way to avoid enemy fire.

One of the larger hurdles for the teams involved deciding whether the AI should be programmed with certain risks by default or should learn them on its own through trial and error. They eventually settled on deep learning and had the AI assign costs to different types of failure, so that it could ultimately make decisions using a sort of judgment-based priority system. And because computers can run those simulations thousands of times over, the AI racked up enormous amounts of data to build into its "experiences." In short, computers can learn faster than humans if we teach them how to learn.
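To make that idea concrete, here's a toy sketch (emphatically not DARPA's actual system) of cost-weighted trial and error: assign a penalty to each kind of failure, simulate an engagement thousands of times, and pick the action with the lowest average cost. All of the action names, failure types, costs, and probabilities below are invented for illustration.

```python
import random

# Hypothetical costs per failure type: crashing into the ground is
# catastrophic, taking enemy fire is serious, surviving costs nothing.
FAILURE_COSTS = {
    "crash": 100.0,
    "shot_down": 60.0,
    "none": 0.0,
}

# Made-up stand-in for a flight simulator: each evasive action leads
# to each outcome with some fixed probability.
OUTCOME_PROBS = {
    "dive": {"crash": 0.30, "shot_down": 0.10, "none": 0.60},
    "climb": {"crash": 0.00, "shot_down": 0.35, "none": 0.65},
}


def simulate(action, rng):
    """Run one simulated engagement and return the cost of its outcome."""
    roll = rng.random()
    cumulative = 0.0
    for outcome, prob in OUTCOME_PROBS[action].items():
        cumulative += prob
        if roll < cumulative:
            return FAILURE_COSTS[outcome]
    return 0.0


def learn_best_action(trials=10_000, seed=0):
    """Average each action's cost over many trials; prefer the cheapest."""
    rng = random.Random(seed)
    avg_cost = {}
    for action in OUTCOME_PROBS:
        total = sum(simulate(action, rng) for _ in range(trials))
        avg_cost[action] = total / trials
    best = min(avg_cost, key=avg_cost.get)
    return best, avg_cost


best, costs = learn_best_action()
print(best, costs)
```

In this contrived setup, diving risks the catastrophic crash while climbing only risks enemy fire, so after enough trials the agent "learns" to climb. The real systems in the competition were vastly more complex, but the underlying loop of costed failures repeated at machine speed is the same basic shape.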

Ultimately, a DARPA director says the goal is to find a way to put AI into a pilot's cockpit so that some of the more basic tasks that don't require critical judgment can be automated. It's a far less dystopian goal than all of this sounds, and it makes sense in the same ways that self-driving cars do. Both ideas will require new paradigms in quality assurance and bug testing, as well as an honest look at our own biases when it comes to making piloting or driving decisions in dangerous situations, but if successful, self-driving cars and AI-piloted planes could work well with humanity... in this timeline.

More: Why Sci-Fi Movies Starring AI Actors Is Actually A Good Idea

Source: Defense One