Protecting Genuine Humanity in an Artificial Future

Once merely an incredible speculation of science fiction, machines that learn without supervision are now real and rapidly growing in power and speed. Artificial intelligence (AI) is the general term for computers that perform tasks normally requiring human intelligence, such as holding a conversation or recognizing images. Since the field's founding at the 1956 Dartmouth workshop and the first perceptrons that followed soon after, simple machines capable of detecting patterns in data and improving their own performance over time, AI has evolved dramatically, and nearly all of us now interact with it several times each day. It has the power to make our lives more efficient on nearly every front. However, advances in the technology's capability raise serious concerns and open up disastrous possibilities. Because of these ethical, economic, and social concerns, we should approach our future with artificial intelligence cautiously.

After scrolling through your favorite websites for a few minutes, you may be surprised, even unsettled, to see an ad for something you were just thinking about. For that you can thank AI: specifically, a set of ever-improving algorithms that collect your browsing data (searches, website visits, physical location, demographic information, and more) and send it to third-party ad platforms. This technology can seriously harm vulnerable consumers; consider, for example, gambling ads served to a recovering gambling addict. To ensure that this data is collected and used ethically, government agencies like the FTC must establish and enforce strict policies that protect consumer data privacy.
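To make the mechanism concrete, the minimal Python sketch below shows how behavioral profiling can drive ad selection. The user IDs, category labels, and log format are invented for illustration; real ad platforms operate at vastly greater scale and sophistication than this toy.

```python
from collections import Counter

# Toy sketch of interest profiling: page visits are tagged with a
# category, tallied per user, and the top category drives ad choice.
BROWSING_LOG = [  # (user_id, page_category) -- illustrative data only
    ("u42", "sports-betting"),
    ("u42", "sports-betting"),
    ("u42", "recovery-support"),
    ("u42", "news"),
]

def build_profile(log, user_id):
    """Tally how often a user visits pages in each category."""
    return Counter(cat for uid, cat in log if uid == user_id)

def pick_ad(profile):
    """Serve an ad for the user's most-visited category."""
    top_category, _ = profile.most_common(1)[0]
    return f"ad:{top_category}"

profile = build_profile(BROWSING_LOG, "u42")
print(pick_ad(profile))  # -> "ad:sports-betting", even for a recovering addict
```

Note that nothing in this logic asks whether the most-clicked category is one the user is trying to escape, which is precisely the gap regulation would need to close.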

An additional concern with the increasing personalization of digital content via artificial intelligence is its potential to be used to build ideological influence. In early 2018, Cambridge Analytica, a British political consulting firm, was found to have harvested the private data of Facebook users to build psychological profiles, with the intent of using those profiles to aid the 2016 election efforts of conservative candidates. Kang and Frenkel report that “the data of up to 87 million users may have been improperly shared with a political consulting firm connected to President Trump during the 2016 election,” and that “Among Facebook’s acknowledgments was the disclosure of a vulnerability in its search and account recovery functions that it said could have exposed most of its 2 billion users to having their public profile information harvested” (Kang and Frenkel). This scandal demonstrates exactly why the role of digital technology in the future of democracy warrants concern. These worries become even more alarming as we begin to see the potential of automated propaganda accounts on social media. Our susceptibility to such artificial accounts is illustrated by an experiment Bruce Schneier describes: “the Harvard senior Max Weiss used a text-generation program to create 1,000 comments in response to a government call on a Medicaid issue. These comments were all unique, and sounded like real people advocating for a specific policy position. They fooled the Medicaid.gov administrators, who accepted them as genuine concerns from actual human beings. This being research, Weiss subsequently identified the comments and asked for them to be removed, so that no actual policy debate would be unfairly biased” (Schneier). Preventing these risks once again requires regulating the technologies we use and how we use them, alongside encouraging face-to-face interaction with people who hold diverse political viewpoints.
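How little effort such astroturfing requires is easy to demonstrate. Weiss used a neural text generator; the toy Python sketch below, using nothing but naive template-filling over invented fragments, already produces dozens of unique, human-sounding comments, and a modern language model produces far more varied and convincing output.

```python
import itertools

# Toy illustration: even naive template-filling yields many unique,
# plausible-sounding comments. The fragments below are invented.
OPENERS = ["As a lifelong resident,", "Speaking as a caregiver,", "Frankly,"]
STANCES = ["this Medicaid change would hurt families like mine.",
           "I urge you to reconsider this proposal.",
           "this policy ignores the people it claims to help."]
CLOSERS = ["Please listen to us.", "Thank you for your time.", "We deserve better."]

comments = [" ".join(parts)
            for parts in itertools.product(OPENERS, STANCES, CLOSERS)]
print(len(comments))   # 27 unique comments from just 9 fragments
print(comments[0])     # "As a lifelong resident, this Medicaid change ..."
```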

As AI technology continues to improve, growing interest in its applications to mass surveillance is becoming a horrifying trend among countries with authoritarian governments. In a Washington Post article, Drew Harwell and Eva Dou report that “The Chinese tech giant Huawei has tested facial recognition software that could send automated ‘Uighur alarms’ to government authorities when its camera systems identify members of the oppressed minority group, according to an internal document that provides further details about China’s artificial-intelligence surveillance regime” (Harwell and Dou). Less extreme methods of automated surveillance and policing are already in place around the world, and these technologies can be used for good: for instance, combining face detection with social media to track the spread of infectious diseases like COVID-19, or preventing attacks by identifying people who match entries in international criminal databases and barring them from public transportation and events. Within our own countries, however, we must ensure that policies and guidelines keep our governments in check, and we must all fulfill our civic and ethical duty to demand that our governments be transparent with our data.
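The core matching logic behind such automated alerts is disturbingly simple. In the hedged Python sketch below, random vectors stand in for the face embeddings that a trained neural network would actually produce, and the watchlist size and similarity threshold are invented for illustration.

```python
import numpy as np

# Toy sketch of watchlist matching: real systems extract face embeddings
# with a trained neural network; here random vectors stand in for them.
rng = np.random.default_rng(0)
WATCHLIST = {f"entry-{i}": rng.normal(size=128) for i in range(1000)}

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_face(embedding, threshold=0.9):
    """Flag a camera-frame embedding that matches any watchlist entry."""
    for name, ref in WATCHLIST.items():
        if cosine(embedding, ref) >= threshold:
            return name  # an automated alert would fire here
    return None

frame = rng.normal(size=128)  # stand-in for an embedding from a camera frame
print(check_face(frame))      # almost certainly None for random vectors
```

A few dozen lines suffice for the matching step itself; what separates a disease-tracking tool from a persecution machine is only the database it is pointed at, which is why oversight of that data matters so much.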

On top of the concerns about AI in advertising and governance, corporate adoption of AI also has the potential to widen the income inequality gap. As executives grow more interested in automating their employees' roles, automation may sooner or later expand beyond laborious manufacturing jobs to white-collar work. According to a Brookings report, the implications of AI for white-collar work may be far greater than previously expected: “AI could affect work in virtually every occupational group. However, whereas research on automation’s robotics and software continues to show that less-educated, lower-wage workers may be most exposed to displacement, the present analysis suggests that better-educated, better-paid workers (along with manufacturing and production workers) will be the most affected by the new AI technologies, with some exceptions” (Muro et al.). After speaking with several executives at the Davos 2019 forum, Kevin Roose observed exactly these intentions to automate roles at every level of a company. His examples include “Terry Gou, the chairman of the Taiwanese electronics manufacturer Foxconn, who has said the company plans to replace 80 percent of its workers with robots in the next five to 10 years,” and “Richard Liu, the founder of the Chinese e-commerce company JD.com, who said at a business conference in 2018 that ‘I hope my company would be 100 percent automated someday’” (Roose). Nor are these merely the distant, profit-driven dreams of executives: the National Bureau of Economic Research estimates that 50 to 70 percent of the change in U.S. wages since 1980 can be attributed to wage declines among blue-collar workers who were replaced or degraded by automation (Acemoglu and Restrepo). A common argument in favor of automating corporate processes is that it would create more opportunities in highly skilled or creative professions, and that fearful claims about automation have been made since the Industrial Revolution and have repeatedly been disproven. Even if that proves true, it remains crucial that companies and governments assist displaced workers with education and retraining at minimal or no cost. We must ultimately ensure that the benefits of automation reach all levels of society rather than merely feeding the self-serving desires of elites.

In conclusion, we must be thoughtful in our approach to artificial intelligence and the power we allow it to hold in our lives. We must scrutinize the policies, regulations, and ethical guidelines that govern the technology's use by governments and corporations. To secure a safe, prosperous, and just future for the generations that succeed us, we must carefully reconsider what we choose to leave them in terms of privacy, social and economic equity, and the framework on which our basic ethical principles rest, treating artificial intelligence as an integral factor in every one of those decisions.

Works Cited

Acemoglu, Daron, and Pascual Restrepo. “Robots and Jobs: Evidence from US Labor Markets.” NBER, 27 Mar. 2017, www.nber.org/papers/w23285.

Harwell, Drew, and Eva Dou. “Huawei Tested AI Software That Could Recognize Uighur Minorities and Alert Police, Report Says.” The Washington Post, 8 Dec. 2020, www.washingtonpost.com/technology/2020/12/08/huawei-tested-ai-software-that-could-recognize-uighur-minorities-alert-police-report-says.

Kang, Cecilia, and Sheera Frenkel. “Facebook Says Cambridge Analytica Harvested Data of Up to 87 Million Users.” The New York Times, 5 Apr. 2018, www.nytimes.com/2018/04/04/technology/mark-zuckerberg-testify-congress.html.

Muro, Mark, et al. “What Jobs Are Affected by AI? Better-Paid, Better-Educated Workers Face the Most Exposure.” Brookings, 9 Mar. 2022, www.brookings.edu/research/what-jobs-are-affected-by-ai-better-paid-better-educated-workers-face-the-most-exposure.

Roose, Kevin. “The Hidden Automation Agenda of the Davos Elite.” The New York Times, 26 Jan. 2019, www.nytimes.com/2019/01/25/technology/automation-davos-world-economic-forum.html.

Schneier, Bruce. “The Future of Politics Is Bots Drowning Out Humans.” The Atlantic, 7 Jan. 2020, www.theatlantic.com/technology/archive/2020/01/future-politics-bots-drowning-out-humans/604489.
