Even the most enthusiastic technology developers are having second thoughts about some of the wondrous things they have created. When building these technologies, most developers dreamed of all the benefits they would bestow upon humankind; but in the wrong hands, things can go terribly wrong. And the fact is, we don’t seem to have prevented that from happening. Are we doomed to a dystopian technological future?
Techno Dystopia
In the world of dystopian fantasies, we are ruled by technological totalitarians. (And did you ever notice how unkind and impoverished the people in those movies and books are?) These fantasies are growing in popularity, depicting a scary and often violent future world, one that tends to be populated by robots [1] and ruled by bad guys who have created a society that is racist, anti-immigrant, homophobic, eugenic, [2] sexually controlling, [3] totalitarian, [4] and controlled by the military-industrial complex of a few [5]: in short, a world stripped of the warmth of humanity. Acts of human kindness are done in secret, often on pain of death. Lives are surveilled, and infringements against the state are “dealt with.” [6] Gone is the enlightened United Earth crew of the Starship Enterprise.
In our social-network-crazed society, we have turned more and more to our own groups for intellectual sustenance, social connections, dating, medical advice, and the news. These tend to be either enlightening forays or dangerous amplifiers of bias. In fact, the study “The Spreading of Misinformation Online,” published in the Proceedings of the National Academy of Sciences, [7] found that online social groups were “homogeneous, polarized clusters” where a “proliferation of biased narratives fomented by unsubstantiated rumors, mistrust and paranoia” exists. Behind the scenes, the analytics of identity used by these sites (and available to anyone willing to pay for them) can craft messages and so-called fake news, and subvert populations in dangerous ways. We know that governments such as those of North Korea and China [8] propagandize to maintain control. In a twentieth-century example, during an election in an economically impoverished nation (which we will not name) where literacy was low, opposition groups went into neighborhoods announcing fake positions of the visiting candidate. The population was so irate that riots broke out, people were injured, and the candidate was almost fatally wounded.
Whether we are purchasing products, determining a course of action for a medical condition, finding a date, or picking a political candidate, our information is often filtered in dangerous ways. And it’s all driven by algorithms. Whether or not these dystopian films accurately paint a picture of our future, the growing popularity of the genre reflects our latent and blatant fear of a technocratic future in which these technologies are under the control of a few.
In real life, behind the scenes, whole industries and government departments are dedicated to collecting, cataloguing, and analyzing data, and to surveilling you and me. Beyond that, bad actors of various stripes are the twenty-first-century criminals, and they often seem more tech-savvy than the governments or the tech companies that host them.
Cases in Point
Unbridled and uncontrolled:
Google’s so-called secret project, Nightingale: using an ultra-broad interpretation of HIPAA regulations, hundreds of Google employees have access to millions of patient records. (See the first text box below.)
Use of advanced security systems to hack systems and steal personal identities or trade secrets
Use of social media to troll, stalk and bully
Use of facial recognition and DNA to target minorities or political enemies
Use of AI to subvert decision making and choice
Big data, analytics, and consolidation of all citizens’ data by the government’s so-called Total Information Awareness (TIA) program or by social media companies
Offensive robocalls, pornographic emails, and so on
And the latest scare: deepfake videos. (See the second text box below.)
Sadly, this list could go on and on.
Medical Record Privacy Violations
One example of a medical record privacy violation, we think, is Project Nightingale, involving Google and organizations like the Mayo Clinic. Neither patients nor doctors were notified that Google employees already have access to much of the data on tens of millions of patients, according to a Wall Street Journal exposé.
Non-tech personnel seem to think that because they limit the data fields provided in these types of projects, patients are safe. But the fact is that users don’t know how adept Google Brain and AI developers (or those at many other tech organizations) are at exploiting and combining various data sources with deep learning.
Organizations like Ascension (a chain of 2,600 hospitals and doctors’ offices) and the Mayo Clinic should also have been aware that Google has had a variety of data scandals involving YouTube and Google Plus: “Last year, the Journal reported that Google opted not to disclose to users a flaw that exposed hundreds of thousands of birth dates, contact information and other personal data of subscribers in its now-defunct social-networking website Google Plus, in part because of fears that the incident could trigger regulatory scrutiny.” (Wall Street Journal)
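The re-identification risk described above is well documented: a handful of so-called quasi-identifiers (ZIP code, birth date, sex) is often enough to join a “de-identified” medical extract against public records and name the patient. A minimal sketch in Python; every record, name, and field below is fabricated for illustration:

```python
# Illustrative only: shows how "limited" fields can re-identify a patient
# by joining a de-identified medical extract against a public voter roll.
# All records below are fabricated.

medical_extract = [  # fields a hospital might consider "safe" to share
    {"zip": "02139", "birth_date": "1961-07-31", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "02139", "birth_date": "1985-02-10", "sex": "M", "diagnosis": "asthma"},
]

voter_roll = [  # publicly available in many U.S. states
    {"name": "J. Doe", "zip": "02139", "birth_date": "1961-07-31", "sex": "F"},
    {"name": "A. Smith", "zip": "02139", "birth_date": "1985-02-10", "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_date", "sex")

def link(records, public):
    """Join two datasets on quasi-identifiers, re-attaching names to diagnoses."""
    index = {tuple(p[k] for k in QUASI_IDENTIFIERS): p["name"] for p in public}
    return [
        {"name": index[key], "diagnosis": r["diagnosis"]}
        for r in records
        if (key := tuple(r[k] for k in QUASI_IDENTIFIERS)) in index
    ]

matches = link(medical_extract, voter_roll)
print(matches)  # every "anonymous" record is now a named person
```

Even with names and addresses stripped, the join succeeds, which is why privacy researchers treat quasi-identifiers as nearly as sensitive as direct identifiers.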
We know that somewhere in the cloud (actually, in several “somewheres”) there is a heck of a lot of consolidated data about “me” in commercial, government, and medical databases. Mostly, we gave this data away ourselves, sometimes intentionally, sometimes unwittingly. All the same, through our hunger to be online, our convenience-app-driven habits, and our appetite for free stuff (just sign up here and you get this free), we have provided lots of data.
And, of course, in the pursuit of scientific advancement, boundaries are being broken as well. To quote Dr. Munsterhjelm, at the University of Windsor in Ontario, who tracks Chinese interest in the technology, “In the world of science there’s a kind of culture of complacency that has now given way to complicity.” So, just to increase your paranoia, it seems even the scientific community is unleashing uncontrolled technologies that may lead to the techno dystopia.
This is a freewheeling tech climate in which organizations feel free to push the boundaries of data collection. In the last decade, we perfected IoT devices, genetic testing, mobile platforms, and AI. Now we have facial recognition, mobile and social tracking, our personal DNA in common databases, and so on. And this whole new wave was foisted upon consumers and citizens mostly without their knowledge. Privacy advocates [10] were ridiculed and considered the enemy. Even today, in the face of various scandals, companies make scant effort to make it easy for users to protect their privacy. [11] Want to opt out? [12] Write a letter. Really?? Or better still, log in and set up an account, so we can collect your data anyway and get your input that you don’t like something. How is that opting out?
And still, we continue to do much of this harm to ourselves when we register for yet another blog or chat site that requires us to cross-link to our other social sites as a condition of use. Layers upon layers of partners within the BIG TECH labyrinth harvest “me” (my data): my moves, my searches, my chats, my spending, my opinions. It is not just the commerce site I am on, but the infrastructures it uses, from web hosting to telco to financial networks, all linking to my social networks, and so on. In my one merchant relationship I may have set privacy boundaries, but that does not stop the infrastructure platform partners from helping themselves to the dish of “me.” And the most dangerous part of all this is that while they are helping themselves, they offer scant protection against hackers, [13] in spite of whatever efforts they may employ. We know BIG TECH companies (such as Google, [14] Facebook, and so on) have had security breaches aplenty.
Is 1984 Upon Us?
Nations and citizens, whether left or right wing, are increasingly worried about the impact of social media on voters (to say nothing of shoppers), and about the targeting of messages to a specific audience. [15] Those audiences are targeted based on deep analytics, which are freely shared by various organizations or actors for their own advantage, whether political or commercial. And often the algorithm developers don’t even know how, or for what, their algorithms are now used.
This issue has become so intense that in the EU and U.S. there is constant discussion of, and legislation aimed at, curtailing the power of (or even breaking up) large firms like Facebook and Google. No doubt, these firms have extreme power over the unsuspecting (most of us) who use their systems daily. The one bright spot is that these companies have (mostly) resisted the prying eyes of government security organizations that would subvert their data to attack “certain groups.” But that is not universal: China and Saudi Arabia are examples where social network and mobile providers must follow government dictates on web content and access, and must allow government surveillance, in order to operate.
Deepfakes — Hijacking Reality
One of the more alarming applications of machine learning is the increasingly widespread ability to create deepfakes, in which a person’s face, speech, or both are replaced with another person’s. Software is emerging that can even replace a person’s entire bodily movements with those of another person. In the past, deepfakes were unconvincing or difficult to make. Now, technology is making them relatively easy for an average person to create, and they are becoming harder and harder to distinguish from genuine video or audio clips.
According to some experts, within the next 6-12 months, deepfake software will be able to create deepfakes that are indistinguishable from genuine, original content. Regardless of the exact timeframe, the ability for many people to easily create deepfakes that look and sound just like the real thing is coming soon.
Some applications of deepfakes are benign, such as Hollywood’s use of the technology to have deceased actors (or younger versions of now-old actors) appear in a current film. Many examples are budding out of this.
The most downloaded free app in China is Zao (which readers may have heard about), which allows the user to take any clip from a movie or video and replace the original actor’s face with the user’s own. More disturbingly, one of the earliest uses of deepfakes was to put celebrities’ faces into pornography.
Perhaps the scariest use, though, is in disinformation campaigns. It will become increasingly difficult to tell a real video or audio clip from a fake one. Doing so requires a public that is educated and willing to do the work of verifying the veracity of what it sees and hears. That willingness to seek unbiased truth seems to be in short supply.
Nowhere has technology been diverted more than in totalitarian China, which appears to have used much of this technology to create a techno-police state. China is piloting or implementing a variety of surveillance and identification technologies that will create a total picture of each “citizen” and track their every move: RFID identification cards, barcodes on the door of each home (in trial implementation now), and video surveillance on every street and building. All internet connections, traffic, and content are controlled by the state.
China is now using facial recognition with DNA to match and identify its citizens. [16] Currently, this technology is being used against minorities in the Xinjiang region. “In papers, the Chinese scientists said they followed norms set by international associations of scientists, which would require that the men in Tumxuk gave their blood willingly.” [17] Yeah? Right!
But China is not in this alone. Some European countries, caught up in anti-immigration and anti-Semitic waves, are trying to use the technology in similar ways. In fact, some of the funding for this type of DNA phenotyping has come from the EU. [18]
To be clear, this article is not intended to be political. We’re just pointing out that technology can go awry extremely quickly and extremely dangerously.
And lest U.S. readers feel sanctimonious, some of the technology was developed by U.S. companies like Thermo Fisher. No doubt this was done unwittingly, as the company has said it will stop selling this tech to China. [19] Also in the U.S., Parabon NanoLabs of Virginia creates facial graphics. [20] Its well-meaning marketing says the technology is used to help solve crimes. But clearly it can be, and is being, used for other ends. And that is the point: we start with the best of intentions, but users’ motives are not all the same.
Globally, we have replaced the iron curtain with a cyber curtain, behind which citizens do not have open access to the global internet, news, and information, and their search, email, and phone use are curtained off from the outer world. No wonder misunderstanding, distrust, and conflict, whether on the trade front or the political one, are growing. Decades-long alliances and the lessons learned from old wars are being shot down by propaganda and misinformation. No doubt the promoters of the world wide web never envisioned the subverting of populations into a Cyber 2024, a techno-1984 world of “propaganda, surveillance, authoritarian politics, or perversions of truth.” [21]
What Is to Be Done?
In times past, infrastructure referred to roads and bridges. We still need those, of course, but we also have a twenty-first-century infrastructure to protect: our national defense systems, DNA databases, patient records, GPS [22] and air traffic control systems, cyber defense systems, emergency response systems, and our military’s communications and weapons systems, to name an important few. Since we don’t seem to respond well even to obvious events, we need much better early-warning systems to prevent or mitigate damage to this infrastructure. But there has to be the knowledge and will in government to fund and implement them.
Three areas that need greater investment:
Government self-education and funding of critical infrastructure protections. In the West today, as we noted, the web is the wild west, with minimal controls or oversight. Though we know government regulators can often be inept or neglectful, government involvement in protection would be a start. Qatar, in partnership with the UN, offers a good example of an AI-powered early-warning system for disasters: Artificial Intelligence for Digital Response (AIDR) uses social media analysis to identify events in motion and mount a more effective response.
Industry self-monitoring and control. To some extent, this can work if key companies in an industry work together. We saw good cooperation on the RoHS and WEEE standards, [23] for example. Facebook, Google, Microsoft, and others have partnered on AI technology that can detect terrorists and those who would subvert the internet for violence. (We hope this works.) Of course, effort of this kind is not without controversy, since free-speech advocates have voiced objections.
Costs of Cyber Breaches
Data breaches and security incidents are becoming increasingly costly.
Canadian lender Desjardins Group recently revealed it had spent C$70 million ($53 million) in the wake of a breach early in 2019 that exposed personal information of 2.9 million members.
Manufacturer Norsk Hydro said the final bill for its crippling cyberattack could be as high as $75 million.
British Airways and Marriott have had to add $100 million each onto the final cost of their incidents after falling afoul of GDPR.
CSO Magazine, August 29, 2019
Self-/consumer protection, meaning educated consumers who take action to secure their data. Here, too, government can play a role, with more education for consumers and practical programs that save all three constituents (consumers, government, and industry) huge sums of money. As an analogy, the state of Massachusetts recently began offering free nicotine patches to help people stop smoking; think of the savings to all three constituents if more people quit. Why couldn’t we have a program like that for better cyber security, with protective software given away free or its cost reimbursed to citizens? In just one area, breach costs, this would save billions. Today, insurance companies and retailers bear the financial burden, paying premiums or underwriting and compensating fraudulent use of consumers’ credit cards; there are also recovery costs and government fines for broken laws. Cost per breach is rarely reported, but using Target as an example, the cost was more than $175 million. Part of a program like this, in which government pays license fees for citizens and thereby increases the revenue of cyber security tool makers, would be a requirement that companies partner more closely to develop even better tools.
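To make the early-warning idea concrete, here is a deliberately simplified sketch of AIDR-style monitoring: flag a topic when its mention rate in a message stream spikes well above a baseline. The keyword, window size, threshold, and baseline here are illustrative assumptions, not AIDR’s actual design:

```python
# A toy illustration of AIDR-style event detection: flag a topic when its
# mention rate in a message stream spikes well above an expected baseline.
# Keyword, window size, threshold, and baseline are illustrative assumptions.
from collections import deque

class SpikeDetector:
    def __init__(self, keyword, window=50, threshold=5.0):
        self.keyword = keyword
        self.window = deque(maxlen=window)  # rolling history of 0/1 hits
        self.threshold = threshold          # spike = rate > threshold * baseline

    def feed(self, message, baseline=0.02):
        hit = self.keyword in message.lower()
        self.window.append(1 if hit else 0)
        rate = sum(self.window) / len(self.window)
        return rate > self.threshold * baseline  # True when a spike is underway

detector = SpikeDetector("flood")
stream = ["morning commute is fine"] * 40 + ["flood water rising downtown!"] * 10
alerts = [detector.feed(m) for m in stream]
print(alerts[-1])  # the burst of flood mentions at the end triggers an alert
```

A real system would add language detection, geolocation, and human verification of flagged clusters, but the core signal is the same: a sudden deviation from normal chatter.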
Of course, ultimately, we are responsible for our own online behavior. If we are app-crazy and impulsive in our use of various social sites and promotional advertising, our data will be used, [24] either to increase the wealth of a few promoters, or by rogue actors, or, worse, hacked. But there are things we can do personally to protect ourselves. [25]
Conclusion — Can’t We Divert Our Best Minds to Do Good?
There are loads of examples of doing good while making tons of money. Here are a few that, hopefully, we all know:
Of course, privacy protection and security applications. This is a growing market with new sectors sprouting as BIG TECH continues to harvest “me” and bad actors keep probing weak points to hack my systems.
Social robots in home care. For many workers, robotics can be bad news. But “social robots,” ones that can live in the home and aid the disabled and the elderly, are growing in capability and presence. There is no getting around the fact that, in many societies today, the family fabric has become weakened, if not shredded. Being alone can thus be not only lonely but dangerous. Under the circumstances, having a robotic home health aide can be a real blessing.
Robotic surgery — from hearts to brains, robots are assisting in some of the most sensitive life and death procedures.
Home safety and alarm systems, from Life Alert® and similar services to cameras connected to our mobile devices
Mobile systems for heart and other conditions that need monitoring
IoT/RFID and sensors in the supply chain to maintain product quality and freshness and to prevent counterfeits from entering the market. With so many salmonella and E. coli outbreaks, we need to deploy these solutions more widely. IoT, in fact, has a plethora of positive use cases.
Augmented Reality to ensure that workers follow safety precautions in using equipment
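The supply-chain monitoring idea above can be made concrete with a small sketch: audit a shipment’s logged IoT temperature readings against a safe band and flag likely cold-chain breaks. The safe range, excursion tolerance, and readings are fabricated for illustration:

```python
# Illustrative cold-chain monitor: flag any shipment whose logged IoT
# temperature readings leave the safe band often enough to risk spoilage.
# The safe range, tolerance, and readings are fabricated for the example.

SAFE_RANGE_C = (0.0, 4.0)   # e.g., a refrigerated food shipment
MAX_EXCURSIONS = 2          # tolerate brief sensor noise, not sustained breaks

def audit_shipment(readings, safe=SAFE_RANGE_C, max_excursions=MAX_EXCURSIONS):
    """Return (ok, excursions); excursions lists (index, temp) outside the band."""
    lo, hi = safe
    excursions = [(i, t) for i, t in enumerate(readings) if not lo <= t <= hi]
    return len(excursions) <= max_excursions, excursions

ok, bad = audit_shipment([2.1, 2.4, 3.9, 6.2, 7.0, 6.8, 3.5])
print(ok, bad)  # three readings above 4.0 C suggest a cold-chain break
```

In production, the same check would run continuously against sensor telemetry, with excursion records attached to the shipment’s audit trail so spoiled or counterfeit-prone lots can be pulled before they reach shelves.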
Though personal vigilance is essential, making changes of this magnitude requires a huge shift in social awareness, enough awareness to take action! We know bad actors will continue to exist, and their efforts will become ever more sophisticated. Sadder still, BIG TECH has already proven unable to fix itself; it still leaves too much of our personal information “on offer.” That can change if we change the consumer and spending habits that use BIG TECH as the channel. And sadly, many governments are also part of the problem. However, in spite of the insufficiency of each constituency (we the consumers, government, and tech), it will take all of us to move forward to a more open, yet safe, world.
Notes
[1] Blade Runner.
[2] Gattaca.
[3] If you are old enough, you may remember Alfred Hitchcock’s Consider Her Ways; surely a 50-year-old prequel to The Handmaid’s Tale.
[4] Star Wars, 1984, Hunger Games, and so on.
[5] As in the classic 1927 film Metropolis.
[6] View the not-so-futuristic The Last Enemy, an early winner for Benedict Cumberbatch. This series from 2005 gave me pause about IoT devices, big data analytics, and where they might go.
[7] Proceedings of the National Academy of Sciences 113, no. 3 (2016): 558.
[8] “China had set up bogus personal accounts on Twitter to spread propaganda portraying Tibet as a happy Chinese province. The account profiles were of ‘Western’ people, to suggest that many Westerners supported China’s actions in Tibet. The story garnered headlines around the world, and following hundreds of emails and tweets from Free Tibet supporters, Twitter deleted all the accounts within two weeks. Many of the Twitter identities were used for YouTube accounts used to share and comment positively on videos promoting China as a progressive and benevolent country. Videos included those showing ‘happy’ Tibetans and state-organized ‘celebrations’ in Tibet. YouTube (which is owned by Google) has deleted all but one of the accounts we identified and also removed a number of videos.” From the “Free Tibet” website.
[9] New York Times: Video Games and Online Chats Are “Hunting Grounds” for Sexual Predators.
[10] For more on digital privacy.
[11] The fact is, the less data we share about ourselves, the poorer they become, so there is little incentive for change. But this is the price we pay for these free “utilities.”
[12] Often you can’t: if you read privacy policies carefully (who does?), you will note the developers have no intention of protecting your privacy. “The privacy policy includes a clause which says that its developer gets a ‘free, irrevocable, permanent, transferable, and relicense-able’ license to all user-generated content, according to Bloomberg. The company has been forced to quickly respond to the criticism, and now says it won’t use its users’ photos or videos for anything other than app improvements without their consent. It will also erase user data from its servers when users delete their data from the app. It’s a similar controversy to the one that surrounded FaceApp earlier this year, when the face-aging app again went viral in July. The app’s developer was forced to clarify its privacy policy, and to offer users the option of deleting their photos off its servers if they wished. In the case of FaceApp, commentators were quick to point out that the app’s privacy policy was no more invasive than many of the most popular mobile apps across the world.” The Verge.
[13] The 5 Biggest Data Hacks of 2019.
[14] For example: Google says hackers have put ‘monitoring implants’ in iPhones for years; Google warns BILLIONS of website passwords hacked; Security Warning for 23 Million YouTube Creators Following ‘Massive’ Hack Attack; and Unsecured Facebook Databases Leak Data of 419 Million Users.
[15] I always thought I volunteered to be an audience, as when I go to the theater. I don’t think those who get 10 to 20 robocalls a day consider themselves “an audience.”
[16] We won’t explore the intense Chinese space build-up. Countries have a right to national defense, but much of the technology has been “borrowed” from willing and unwilling companies around the globe.
[17] New York Times: China Uses DNA to Map Faces, with Help from the West.
[18] The paper was published in the journal Forensic Science International. It said it was backed by a grant from the European Union, and by a grant from China’s Ministry of Public Security. New York Times.
[19] “Thermo Fisher said it would no longer sell its equipment in Xinjiang, the part of China where the campaign to track Uighurs is mostly taking place. The company said separately in an earlier statement to The New York Times that it was working with American officials to figure out how its technology was being used.” New York Times.
[20] Their Snapshot service will phenotype DNA and other data to create a composite profile.
[21] Read: Doublethink Is Stronger Than Orwell Imagined.
[22] In another example, GPS spoofing and hacking have also occurred. Once we become aware of the imaginative actions of the unscrupulous, we can deploy technology to protect and correct. In the case of GPS, some countries have developed and deployed backup or override systems using newer technology that is much more difficult to jam or spoof, but the U.S. is not among them.
[23] And other standards, such as the Lacey Act and Section 1502 of Dodd-Frank, better known as the “conflict minerals” regulations.
[24] “Whenever a service is provided for free, a company is inevitably profiting from your data. Sometimes it’s for better ad targeting, sometimes it’s to train their AI for better facial recognition. You often don’t know.” The Verge.
[25] Read: “How We Survive the Surveillance Apocalypse: Online privacy is not dead, but you have to be angry enough to demand it.” Geoffrey A. Fowler, The Washington Post.