The Cambridge Analytica controversy is the latest in a string of scandals involving leading tech companies, among them Uber, Twitter, Apple, Google, and, most significantly, Facebook. As we enter an era of near-complete digitalization, such incidents erode our trust in these companies. We have grown accustomed to their services and products, which leaves us attached, dependent, and vulnerable; incidents like this one confirm our suspicion that we are being exploited.
Contrary to a popular metaphor, tech companies should not be compared to tobacco companies supplying us with toxic products. Still, tech companies have weakened privacy, amplified prejudice, exploited psychological weaknesses, encouraged harassment, intensified distraction, and inflamed political tensions. Nor can we simply kick these firms to the curb, since their tools improve our civic, economic, social, and personal well-being. Yet despite the merits of their services, tech companies have betrayed public trust not once but time and again.
Trustworthiness goes hand in hand with ethics. If companies want to regain user trust, they have serious work to do: their future practices must close the ethics gaps. In this article, we will discuss one major area in need of an upgrade, ethics in the product infrastructure. We will also discuss why it is difficult for tech firms to act ethically and suggest ways to improve the current situation. But before digging deeper, we must address the obvious questions: Will ethics actually change how tech companies handle user data? Or is a drastic shift in user behavior needed to stop this data exploitation?
Are Ethics Nothing More Than Moralizing?
Tech companies have a responsibility to discuss openly how their use of our data affects our lives. Frank Pasquale, author of The Black Box Society and one of the most renowned thought leaders in the field, argues that companies holding immense amounts of user data will not take user privacy seriously unless strict government regulation forces them to. Under self-regulation, citizens will continue to be exploited, deceived, and manipulated by tech companies. Pasquale says, "I don't think tech companies can have these discussions until a regulatory framework forces them to do so. They were warned about the perils of lax application of their own guidelines, and they have ignored or marginalized their critics." Companies like Facebook systematically overrate artificial intelligence, automation, and engineering, and undervalue ethics, legal expertise, and compliance.
Pasquale adds, "In the U.S., politicians need to empower the Federal Trade Commission and beef up the staffs of state attorneys general. In Europe, data protection authorities need to be enforcing privacy laws with added vigor and to ensure that the general data protection regulation is not strangled in the cradle by crabbed interpretations of its provisions for algorithmic transparency, such as the right to explanation." On the Cambridge Analytica controversy, he says, "Facebook would be facing massive fines for repeatedly violating the trust of its users and would be subject to prudential regulation to assure it keeps its promises in the future. But there are many areas where our sense of right and wrong is emerging or where duties are more moral in form than legal. That's where ethical education is essential: to cultivate judgment and articulacy about values in realms where the obsession with the algorithmic leads to binary thinking (whatever is legal is fine to do) or hacker 'ethics'."
Pasquale also recalls the case of Sandy Parakilas, who was in charge of stopping data breaches by third-party apps from 2011 to 2012. Parakilas warned his seniors that the company's lax attitude toward data protection would result in a major data breach, yet he was given no authority to act. In light of such incidents, how can the public trust these firms and believe they are now ready to hear out their ethically concerned employees?
It seems that tech companies with access to huge troves of customer data are intoxicated with power. They remain in complete denial of the fact that they, too, can fail. Moreover, tech giants serve many masters, including regulators, consumers, and shareholders, and those masters can have competing preferences. The public is not against conscientious capitalism, which keeps the market going, but companies should embed ethics in their structures to preempt strict, forceful regulation.
On the other hand, what if companies are finally willing to rethink their ways after the Cambridge Analytica controversy? What if they have taken Spider-Man's axiom to heart: "With great power comes great responsibility"?
Why Product Structure Is Crucial to Ethics?
If companies are determined to regain trust, they should first understand that it is time to redesign their products and services with ethics in mind.
Design flaws in a service diminish public trust in tech companies. Impenetrable boilerplate terms and conditions build a wall of ignorance around us. Moreover, design choices on these platforms are made to help companies exploit us rather than to serve our preferences; companies tailor their design and advertising strategies accordingly.
Woodrow Hartzog, author of Privacy's Blueprint, says, "Design (organizational structure) isn't just an ethical issue because it is everywhere and it is power. Design is a major ethical issue because that power can and always is used for political ends. By definition, it allocates power between platforms and users. And it is never neutral. Every decision to make something searchable, to include certain things in a drop-down menu, to include a padlock icon to give the sense of safety, to nudge people for permission for certain data practices furthers an agenda to make certain realities of disclosure come true. Usually, the agenda is disclosure."
A leaked memo by Andrew "Boz" Bosworth, a Facebook VP, showed that design choices were made to maximize the extraction of personal data from users.
After learning about the memo, Woodrow Hartzog remarked, "It's a little easier to see how every aspect of the design of Facebook is bent towards its mission to get you to never stop sharing and to feel good about it in the process." He added, "The reason design is now such an important ethical issue is that law and policy have thus far had little to say about it. Lawmakers focus on data processing but too often ignore rules for the design of digital technologies. We can do better across the board, and it starts with being more critical about the way our tools are built."
Strictly speaking, product design issues are so common because policymakers have neglected user interests. To improve the situation, a fresh blueprint for privacy values should be adopted across tech companies. But to act on it, the tech giants must first dispel their illusions of self-righteous ethics. They need to be more transparent than ever before.
Tech companies claim it is difficult to apply strong standards because of differing values, user diversity, and varied privacy preferences; some users, they argue, would not welcome them. But this claim, which is only partly true, does not relieve tech companies of the responsibility to manage user data honorably. It is like celebrities craving public attention while shirking their duty to serve as role models for society.
As long as tech companies offer products and services that influence user behavior at scale, they should accept responsibility for the power they are unleashing on the world.
Responsibility has multiple dimensions. According to Hartzog, however, tech companies should commit to three key standards:
- Boosting trust by reducing manipulation and increasing transparency
- Respecting users' secrecy by giving them more control over which personal data they share
- Treating dignity as sacred by promoting genuine autonomy and abandoning tactics that confuse users
It is time tech companies took business ethics seriously when developing products and services for massive user bases. That is the only way they can regain public trust. Both politicians and the public are exasperated by the constant strife these companies create. Tech companies do seem to be inching toward improving their trustworthiness, but they still lack the systems that would help them work toward it. In any case, neither the public nor governments will tolerate arrogance, greed, and institutional myopia any longer.