Advancements in Technology During and After World War Two


The advancement of technology since World War II has reduced the need for manpower at the forefront of the military through the introduction of artificial intelligence. Today's military constantly seeks new ways to incorporate advanced technology in place of human soldiers, and becoming more automated, with the human element almost completely removed from the battlefield, is the ultimate goal of many military operations. However, the use of artificial intelligence (AI) to make life-and-death decisions completely autonomously should be prohibited due to the technology's lack of morals, ethics, and contextual knowledge. The safest route is to combine the technology's accuracy and precision with the morals and decision-making skills of humanity. Most military AI falls into one of two categories: "in the loop" and "out of the loop." In-the-loop machinery allows human oversight of operations some or all of the time, while out-of-the-loop machinery is completely autonomous and does not accept human intervention. In-the-loop technology is considered less dangerous and less ethically problematic because it preserves communication between human and machine. The combination of man and machine places accuracy and precision at the forefront of the battlefield while protecting the ethical decision-makers in the background.

The definition of artificial intelligence has evolved continually in accordance with the accomplishments of the technology. The varying definitions are a result of the 'AI Effect'.

According to this effect, when AI achieves something traditionally thought to require human intellect or intuition, the achievement is written off as a function of computational power rather than of a complex computer program (). People become accustomed to the "new technology," it is no longer considered AI, and newer technology is continuously produced (). According to Dr. Paul Marsden's adapted definition, "Artificial Intelligence is technology that behaves intelligently using skills associated with human intelligence, including the ability to perceive, learn, reason and act autonomously" (). AI was first introduced in the 1950s, only a few years after the events of World War II. The credited founder of the field is John McCarthy, a renowned computer scientist from Boston, Massachusetts (). He presented the concept of artificial intelligence to several other computer scientists at the Dartmouth Conference in 1956, proposing that machines could be designed to mimic the abstract-thinking and problem-solving capability of the human brain. His proposal began a new era of AI as the technology continued to evolve following the conference. While influencing future technology through his own inventions, McCarthy also succeeded in inspiring people to challenge the boundaries between man and machine. Before the advent of AI, the roles of man and machine in the military were far more separate and distinct.

Prior to World War II, militaries relied on human-operated tactics such as tanks and piloted aircraft. While proven to be semi-effective, the soldiers controlling these vehicles were prone to accidents as a result of human error, the inability of operators to make prompt decisions, and the overall danger of being sent into enemy territory. One of the more successful innovations requiring little human interference was the "bouncing bomb," conceived in 1941 (). Created by British engineer Barnes Wallis, it was intended to skip across water to avoid torpedo nets and explode upon coming into contact with a dam or battleship (). The bomb was a successful example of man and machine working together with the soldiers kept off the frontlines of the attack. The collaboration required people to determine the deployment location, fly to that location, and release the weapon, after which the bomb skipped over the water on its own to strike the dam. This process allowed the soldiers to destroy the dams from a safe distance by setting up the bombs to carry out the explosion themselves upon colliding with an object. The execution of the bombs' explosion can be compared to an algorithm in a coding sequence. The starting point of a bomb is the one set by the pilot of the carrier plane, which can be considered initializing, or assigning a data value to an object or variable. What happens next can be interpreted as either a conditional or an iteration sequence. Interpreted as a conditional, the bomb follows the algorithm: if the front is touching an object, explode; else, continue skipping. Interpreted as an iteration sequence, the algorithm states: repeat skipping on the water until touching an object, then explode. The simplicity of these algorithms allowed for a successful mission while proving a collaboration between man and machine to be efficient.
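The two readings of the bomb's behavior described above can be sketched in code. This is only a conceptual illustration, not an actual control program: the function names and the representation of the water's surface as a list of strings are hypothetical stand-ins chosen for clarity.

```python
def run_bomb_conditional(surface):
    """Conditional reading: at each point, if touching an object, explode;
    else, continue skipping."""
    for position in surface:
        if position == "object":   # the front is touching an object
            return "explode"
        # else: continue skipping across the water
    return "no contact"

def run_bomb_iteration(surface):
    """Iteration reading: repeat skipping until an object is touched,
    then explode."""
    i = 0
    while i < len(surface) and surface[i] != "object":
        i += 1                     # repeat skipping on the water
    return "explode" if i < len(surface) else "no contact"
```

Both readings describe the same behavior: for a surface like `["water", "water", "object"]`, each function skips past the water and explodes on the object, which is why the essay can interpret the bomb's logic either way.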
Despite the success of this mission, the military continued to rely heavily on technology completely controlled by humans, which produced victory during the war but at a higher risk of losing lives.

Firstly, keeping a human presence in the background of the technology's decision-making process increases soldier survivability and offers reliable context-based decisions. During World War II alone, over 415,000 U.S. soldiers lost their lives (), though the exact number of deaths is unknown. A majority of the deaths resulted from violence on the frontlines of the battlefield. Although AI weapons are unable to fulfill all tasks originally designed for combat soldiers, the technology may be used in place of humans for missions too dangerous or complicated for human operators. Biological or chemical detection, EOD (explosive ordnance disposal), high-value target recognition and covert tracking, and threat detection and neutralization are only some of the eligible missions and tasks. By removing soldiers from the frontlines of the tasks with the lowest success rates, death rates should steadily decrease. While humans will remain present, some AI critics argue that their position will be diminished and that the technology will render fighting "less uncertain and more controllable" () because machines are not affected by the emotions that cloud human judgment. However, critical ethical decisions are easier to make with peace of mind when working behind the scenes, out of imminent danger. The ability to make good decisions on the battlefield can be hindered by an inability to control emotions, but it can equally be hindered by a complete lack of empathy.

Lastly, weapons built with AI require human interaction to achieve a balance of morality and precision. The two categories of AI weapons are "in the loop" and "out of the loop." With in-the-loop autonomous weapons, human regulation of operations is maintained all or part of the time. These weapons are less controversial and presumably less risky and ethically fraught than out-of-the-loop weapons, because once deployed, out-of-the-loop autonomous weapons have no human involvement. They make their own choices, driven strictly by the code built into them. This is where the controversy over a master algorithm enters the picture: there are too many possible outcomes to anticipate and include in a program's code. Those who oppose autonomous weapons systems frequently express concern about delegating life-or-death decisions to nonhuman agents, most visibly in systems that can select their own targets. It will be difficult for autonomous weapons systems to distinguish between civilians and combatants, a task that is difficult even for humans, and allowing AI to make targeting decisions could result in high civilian casualties. The "Scientists' Call to Ban Autonomous Lethal Robots" was issued by a consortium of physicists, AI and robotics experts, and other scientists from thirty-seven countries. According to the statement, there is no scientific evidence that robots will ever have "the features needed for accurate target recognition, situational awareness, or decisions about the proportional use of force" (). This declaration acknowledged the original concerns raised by the creation of autonomous weapons of war. To limit casualties stemming from the complex nature of autonomous weapons, the military must retain human regulation of the technology.
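The structural difference between the two categories can be sketched as a simple decision gate: an in-the-loop system routes every proposed action through a human reviewer, while an out-of-the-loop system acts on its built-in code alone. This is a deliberately abstract illustration; the function names and the `human_approve` and `builtin_rule` callbacks are hypothetical, not part of any real weapons system.

```python
def decide_in_the_loop(proposal, human_approve):
    """In the loop: the system proposes an action, but a human
    must approve it before anything happens."""
    if human_approve(proposal):    # human oversight at the decision point
        return "act"
    return "hold"

def decide_out_of_loop(proposal, builtin_rule):
    """Out of the loop: once deployed, only the code built into
    the system decides; no human can intervene."""
    if builtin_rule(proposal):
        return "act"
    return "hold"
```

The gate makes the essay's point concrete: in the in-the-loop version a human can veto any proposal, whereas the out-of-the-loop version is limited to whatever outcomes its programmers anticipated in `builtin_rule`.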
