Increasingly, our lives are run by algorithms. When we turn to Google to search for the answer to a question that's been plaguing us, we're tapping into an algorithm that will find us the best possible result. When we open Facebook or Twitter for a rundown of all the latest news and updates from our friends, we're relying on an algorithm to show us the most relevant content. We can even use algorithms to make predictions and evaluate data.
But as we grow more reliant on algorithms, some experts are raising questions about the ethics of algorithms, and whether we should be applying stricter or more thoughtful ethical standards to these powerful technological tools.
Ethics in technology is an important subject, but it's much harder to apply ethics to algorithms than it appears on the surface.
The Basics of Algorithms
Let's start with a basic explanation of what algorithms are, for the uninitiated. Algorithms are basically sets of rules that a machine can apply to achieve some kind of function. For example, let's say you have a list of 100 different names, and you want to alphabetize them. There are many different sorting algorithms you could use to accomplish this; each one provides a different set of instructions for how to sort these names, with all of them attempting to achieve the end result of a perfectly alphabetized list of 100 names. For example, one algorithm might look at each name individually and place it where it belongs in the list, compared to other names that have already been evaluated. Another algorithm might look for names that start with A, then names that start with B, and so on.
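The first approach described above is essentially insertion sort: each name is placed where it belongs among the names already evaluated. Here's a minimal sketch in Python (the example names are invented for illustration):

```python
def alphabetize(names):
    """Return a new list of names in alphabetical order using insertion sort."""
    sorted_names = []
    for name in names:
        # Find the position where this name belongs among
        # the names sorted so far.
        i = 0
        while i < len(sorted_names) and sorted_names[i] < name:
            i += 1
        sorted_names.insert(i, name)
    return sorted_names

print(alphabetize(["Carol", "Alice", "Bob"]))  # → ['Alice', 'Bob', 'Carol']
```

The rules are simple and mechanical, which is the point: the machine just follows them until the goal (a sorted list) is reached.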
Algorithms can be used for tasks ranging from simple to complex. For example, they can be used to evaluate the position of names in alphabetical order, but they can also be used to evaluate the trustworthiness of a website based on the number, quality, and nature of links pointing to it, as is the case with Google Search, which is why sites like Link.Build focus so much on that one aspect of the algorithm.
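To make the link-based idea concrete, here is a rough, hypothetical illustration (this is not Google's actual algorithm; the site names, base score, and weighting constant are all invented): each site passes a share of its own score to the sites it links to, over repeated passes, so links from well-regarded sites count for more.

```python
def trust_scores(links, iterations=50):
    """Toy link-based scoring. `links` maps each site to the sites it links to."""
    sites = set(links) | {t for targets in links.values() for t in targets}
    scores = {site: 1.0 for site in sites}
    for _ in range(iterations):
        new_scores = {site: 0.15 for site in sites}  # small base score for every site
        for source, targets in links.items():
            if targets:
                # Split 85% of the source's score among the sites it links to.
                share = 0.85 * scores[source] / len(targets)
                for target in targets:
                    new_scores[target] += share
        scores = new_scores
    return scores

scores = trust_scores({
    "blog.example": ["news.example"],
    "forum.example": ["news.example"],
    "news.example": ["blog.example"],
})
ranked = sorted(scores, key=scores.get, reverse=True)
# news.example has the most inbound links, so it ends up ranked highest
```

The takeaway is that the rule itself is neutral arithmetic; any "goodness" or "badness" comes from what the rule is pointed at and how people respond to it.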
How can these algorithms be "bad" if they're just tools to achieve a result?
New Ethical Problems With Algorithms
There are several ethical problems that have arisen with algorithms in recent years:
Addiction and repetition. First, we can consider the fact that many algorithms are intentionally engineered to keep people using a specific product for as long as possible. This is perhaps most apparent in social media, where algorithms are designed to keep users engaged; with the help of the right algorithm, users can be shown exactly the right types of content in exactly the right order to get them to like, comment, share, and otherwise react. This keeps them using the app for a longer period of time, enabling the social media company to make more money from the user via advertising. But the side effect is that each user who falls victim to the algorithm's whims loses time almost unconsciously. In extreme cases, social media users can get addicted to the app, the same way they could get addicted to any drug designed to encourage thoughtless, repetitive use.
Negativity bias. We also need to consider the role of negativity bias as it applies to algorithms. Human beings are negatively biased creatures; we tend to be more affected by a negative event than a positive one, even if they're of similar magnitude. For example, losing $10 provokes stronger feelings than winning $10, even though it's the same amount of money. Algorithms that serve content to people, in search or social media, tend to focus on keeping them engaged, but in many cases this means bombarding them with bad news and outrageous stories. This can end up increasing stress and decreasing quality of life for these users.
Polarization and extremism. Algorithms in social media and search also have a tendency to push people to extremes. Over time, an algorithm can learn your philosophical and political leanings; from there, it will almost exclusively show you content you already agree with, and recommend sites to visit and groups to join that share your views. From there, it will try to push you to new types of content you haven't discovered, content it thinks you might also agree with. It doesn't take long before you're deep in an extremist echo chamber. This gets even hairier when you add the cultural biases that can creep into code, especially as companies tend to outsource their software development to other regions with differing cultures.
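The feedback loop described above can be sketched with a toy recommender that only surfaces content tagged like what a user has already engaged with (the posts, tags, and user history here are invented for illustration):

```python
# Each post is a (title, tag) pair. Tags stand in for topical leanings.
posts = [
    ("Moderate budget analysis", "politics-moderate"),
    ("Fiery partisan rant",      "politics-partisan"),
    ("Another partisan rant",    "politics-partisan"),
]

def recommend(history, posts):
    """Suggest only posts whose tag matches something the user already clicked."""
    liked_tags = {tag for _, tag in history}
    return [title for title, tag in posts if tag in liked_tags]

# The user engaged with one partisan post...
history = [("Fiery partisan rant", "politics-partisan")]
suggestions = recommend(history, posts)
# ...so only partisan posts survive the filter; the moderate piece is never shown.
```

Each round of engagement narrows the tag set further, which is the echo-chamber dynamic in miniature.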
Bias and equality. Many experts have criticized the nature of algorithms for their innate capacity to be biased, whether intentionally or unintentionally. Algorithms coded by a specific demographic tend to favor that demographic, and on a large enough scale, bad coding could end up harming an entire segment of the population unintentionally.
Reducing evaluations to objective outcomes. Algorithms are also problematic for their capacity to reduce complex, nuanced, and somewhat subjective issues to objective and binary determinations. For example, in one school district, 2 percent of teachers were laid off based on the evaluations of an algorithm. The algorithm determined performance based on improvements in standardized test scores; accordingly, there were amazing teachers, with perfect records and rave reviews from both kids and parents, who were let go because they didn't meet the algorithm's standards.
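A hypothetical sketch of the kind of rule described above: if the algorithm's only input is test-score improvement, a highly rated teacher can still fall below the cutoff (all names, ratings, and numbers here are invented):

```python
teachers = [
    {"name": "Rivera", "score_improvement": 4.1, "parent_rating": 4.9},
    {"name": "Chen",   "score_improvement": 9.5, "parent_rating": 3.2},
]

CUTOFF = 5.0  # minimum test-score improvement to keep the job

# The decision rule only sees score_improvement; parent_rating is ignored.
laid_off = [t["name"] for t in teachers if t["score_improvement"] < CUTOFF]
# Rivera is laid off despite a 4.9 parent rating.
```

The nuance (rave reviews, a strong record) never enters the computation, so it can never affect the outcome.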
Lack of transparency. As a consumer, you rarely get to see the systems responsible for algorithmically generating your content or results. You aren't given a description of the algorithm that chooses which websites to rank in your search results or which posts to list first in your social media newsfeed. This lack of transparency can feasibly make every other ethical issue with algorithms worse, since you'll never really know what's behind them.
Why Ethics Are Hard to Apply
Of course, ethical standards are hard to apply to algorithms, for several main reasons:
The impartial nature of algorithms. Algorithms are sets of rules that are executed impartially. Even advanced algorithms are still engineered to follow a simple set of instructions, or achieve a simple goal. There's no nefarious or questionable motivation to be had; they simply do what they're told.
Giving us what we want. In many cases, algorithms are simply designed to give us what we want. If we react more to negative news than positive news, or if we engage more with extremist perspectives than moderate ones, why should we blame the algorithms responsible for leading us there?
AI and blind spots. Many algorithms improve themselves with the help of machine learning and AI, so there's a considerable gap between what human engineers designed and what's currently operating. This makes transparency more difficult, and leads to major blind spots regarding what algorithms can really do.
Ambiguous intentionality. Bias can be programmed into algorithms and source code intentionally or unintentionally, but how can you prove one over the other? It's almost impossible to demonstrate intentionality.
Subtle changes. Most algorithms are constantly evolving, tweaking themselves to better serve customers or decision makers. This makes it hard to pin down specific issues in the present, and even harder to predict how these issues might change in the future.
As algorithms become more complex and more embedded in our lives, the ethics related to their development and use are going to become more important. Soon, we'll have algorithms providing us with medical recommendations; if these algorithms aren't fine-tuned to perfection, they could have devastating consequences far beyond things like social media addiction or job loss.
Algorithm ethics are a tricky obstacle to overcome, but in time, we will develop the tools, philosophies, and possibly even the laws necessary to overcome it. In the meantime, we need to work to be more aware of the tools we're using every day, and push to learn more about the advanced technologies we're already starting to take for granted.
The post The Problem With Defining Bad Algorithms in Software Development appeared first on ReadWrite.