Right upfront, I will say that this movie was both entertaining and forgettable. That said, it had some great ideas that I want to discuss. Here’s a summary.
It’s 2036 and Ukraine is embroiled in a civil war caused by Russian separatists (that aged well). At this point in the near future, robotic soldiers called G.U.M.P.’s are fighting in limited roles alongside American troops. Lt. Harp, our protagonist, is a drone pilot deployed to Ukraine after disobeying a direct order. We the audience know it was probably the right call, but an order is an order. He is given a special assignment with Capt. Leo (Anthony Mackie), an experimental military android whose existence is known only to Harp and the base commander. Leo tells Harp that their mission is to stop the rebel leader Victor Koval from seizing an abandoned Soviet-era missile launch site. This is only partially true: Leo is actually using Harp to help override his own programming so that he can take control of the missiles and launch them at the United States. At the end of the movie, after Harp has shot him with anti-vehicle rounds and a drone strike is seconds away, Leo explains his true motivation. He wanted the first-ever deployment of an android super-soldier to be a failure, so that it would never happen again.
Leo’s motivations are what made me like this movie. It’s not a great movie, but it’s a good one, and it harkens back to a few time-tested science fiction tropes that deserve modern portrayals. That is, what happens when the machines we built learn to think for themselves? What happens when we give them autonomy or even feelings? Moreover, what happens to us when we use these machines to do our dirty work and use them to do the things we would rather not admit responsibility for?
The motivations that Leo reveals at the end sum up the themes of this movie. Themes that have been explored in classic science fiction by the likes of Arthur C. Clarke and Isaac Asimov. Themes that absolutely deserve modern adaptations like this.
Drones: Keeping Death At Arm’s Length
We don’t like to think about death. We especially don’t like to think about the death that we cause. Unmanned aerial vehicles have become a ubiquitous part of modern warfare, one that allows militaries to distance their personnel from the battlefield and reduce the enemy to nothing more than pixels on a screen. Unmanned vehicles don’t just separate the pilot from the target; they make it easier for a country to justify airstrikes when none of its people will actually be put in harm’s way. Much of the movie is about making Harp see the conflict up close and experience the true cost of a war that had previously been hidden from him.
Robots With Guns: Who Gives The Kill Order?
As unmanned vehicles have become more common on the battlefield and more designs enter development, the question increasingly asked over the past two decades is: who is pulling the trigger? For current systems, human operators still make the final decision. This is far from perfect, but at least it puts off having to answer the question for another decade or so.
But as companies like Boston Dynamics continue to develop more advanced robots, this question will have to be answered sooner rather than later. It’s one thing to train a human to make decisions and improvise; it’s another to teach a computer. And as we have already seen with AI, it’s easy to bake in biases even unintentionally. Can we trust a computer to decide whether or not the person it sees is a threat? Can it tell friend from foe? Will it care if innocents are in the way?
This comes up a few times in the movie with the G.U.M.P.’s, where the robots open fire without warning. To be honest, given how common friendly fire incidents and civilian casualties are with humans pulling the trigger, we’re going to have the same problems with AI in a few years.
Artificial Intelligence: What Happens When Computers Can Feel?
We still have a long way to go before we can make computers think and feel like humans do. When we finally manage to teach a computer ethics, compassion, and right from wrong, what will it do with that information? A computer that knows right from wrong might examine things more honestly and objectively than humans do. How will it see us?
Perhaps they will allow us to see ourselves more honestly. Perhaps one of us will turn on the other. Maybe they will experience some kind of psychological breakdown when their morals don’t line up with their mission. Maybe they will hate us for giving them life, or for misusing them.
This movie is pretty forgettable. It’s well made and fun, but it doesn’t really stand out from the pack. Still, I think it’s a good movie that provides a much-needed update to classic robot tropes.