The Tyranny of the Algorithm

A while ago, I noticed that some people had changed the way they were writing LinkedIn posts. Rather than writing in sentences and paragraphs, everything would be spaced out, line by line, in a really unnatural way.  Then certain other common patterns appeared (e.g. multi-page PDF documents, or ‘carousels’ as some people call them).  Video was huge, then not so huge, and so the trends fluctuated.  Some of the formats seemed genuinely useful… others… not so much (we’ve probably all clicked on the occasional ‘clickbait’ LinkedIn post…).

I gather one of the reasons people post in particular ways is to get maximum exposure, and to do this you have to pander to the algorithm.  It is, after all, the algorithm that decides how many people see your post…  and not all posts are created equal.  Those that get more ‘engagement’ will be seen by more people (and, very likely, will get even more engagement).  Yet it is the algorithm that decides what counts as ‘engagement’.
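
To make that feedback loop concrete, here is a deliberately over-simplified sketch in Python. To be clear, this is not LinkedIn’s actual algorithm: the Post fields, the weights and the scoring function are all invented for illustration. The point is simply that once a platform defines and weights ‘engagement’, that definition determines whose posts surface first.

```python
# A hypothetical engagement-weighted ranker -- purely illustrative,
# not a description of any real platform's algorithm.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    likes: int
    comments: int
    shares: int

def engagement_score(post: Post) -> float:
    # The platform decides what counts as 'engagement' and how to
    # weight it; these numbers are invented for this sketch.
    return post.likes * 1.0 + post.comments * 3.0 + post.shares * 5.0

def rank_feed(posts: list[Post]) -> list[Post]:
    # Higher-scoring posts surface first, attract more engagement,
    # and so score even higher next time: the feedback loop.
    return sorted(posts, key=engagement_score, reverse=True)

for post in rank_feed([
    Post("Writer A", likes=120, comments=4, shares=1),
    Post("Writer B", likes=30, comments=25, shares=10),
]):
    print(post.author, engagement_score(post))
```

In this sketch, ‘Writer B’ outranks ‘Writer A’ despite far fewer likes, purely because the (invented) weights favour comments and shares. Change the weights and you change who gets seen.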

There’s nothing inherently wrong with this. LinkedIn is a private enterprise; it can (I suppose) run its operations however it chooses. But take a step back for a moment, and let’s venture a hypothesis:

The algorithm has changed the way people write content and interact with others on LinkedIn

I’m making no moral judgment here; after all, the way people write and engage with each other has always adapted over the years. But let’s follow this to its logical conclusion: social media algorithms have the ability to influence the style and format in which people communicate. They decide what content gets seen (and what doesn’t). Again, as users we might be OK with that. But I hope that there is someone within those companies asking a whole set of ethical questions…

Avoiding Biases and Unintended Consequences

In particular, it’s important to consider whether algorithms might lead to bias, inadvertently disadvantaging particular groups or stakeholders, or have other types of unintended consequences. For example, I’d imagine that the LinkedIn algorithm probably aims to keep people on the site for as long as possible and serve them relevant adverts. But when people learn its nuances, they start to ‘game’ the algorithm, meaning that some folks are more likely to get their content seen than others. Presumably LinkedIn eventually learns about this too and adapts the algorithm, and the process repeats.
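
That game-and-adapt cycle can be sketched as a toy loop. Again, everything here is assumed purely for illustration: the signals, the starting weights and the crude ‘halve the gamed signal’ adaptation rule are all invented; only the shape of the loop matters.

```python
# A toy sketch of the 'game the algorithm' cycle -- the signals,
# weights and adaptation rule are all invented for illustration.
weights = {"likes": 1.0, "comments": 3.0, "shares": 5.0}

for round_no in range(1, 4):
    # Creators pile onto whichever signal currently pays best...
    gamed = max(weights, key=weights.get)
    print(f"Round {round_no}: creators optimize for '{gamed}'")

    # ...the platform notices and discounts that signal, and the
    # chase starts again with a new 'best' signal.
    weights[gamed] *= 0.5
```

Each adaptation changes what gets rewarded, so the behaviour it was meant to curb simply moves somewhere else.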

Yet unintended consequences like this aren’t limited to IT or algorithms. Nor are biases (there are plenty of well-documented cognitive biases that affect people too). Crucially, this is an area where BAs can help ask some of the difficult questions, and get beyond (or at least highlight) potential issues.

I have often thought it interesting that within most organizations, if you ask “Who is responsible for regulatory compliance?” you will get a clear-cut answer. There is usually a legal or compliance team, and often a named individual who is responsible and accountable. Ask “Who is responsible for the ethics of this product or project?” and (outside of some very specific domains) you’ll likely get a blank stare. Or you’ll get a word-soup answer that boils down to “we’re all responsible”. And when everyone is responsible, too often nobody steps up to ask the hard questions.

The Ethical Imperative

This is a space where BAs can add significant value. As BAs we’re used to conducting stakeholder analysis, thinking in terms of the different stakeholders or personas who will be impacted by a particular proposal. We can extend this thinking by asking “Who isn’t around the table? Who is missing from these conversations, and how can we ensure they are represented?”

We can ask the difficult but important ethical questions, and ensure that the projects and products progressed by the organization are in line not only with its strategy but also with its values. If there’s a strategy-execution divide in many organizations, that’s nothing compared to the values-execution divide! (We’ve probably all had experience of organizations that say they ‘put the customer at the heart of what they do’ but… definitely don’t!)

Often, as BAs, we are able to take a step back and ask “What is the impact of this?”, “What does success look like for stakeholder group X?” and “How does that vary from what stakeholder group Y thinks?”.  By taking a holistic view, balancing different viewpoints and putting an ethical lens on things, we can hopefully reduce the risk of inadvertently introducing bias or unintended consequences.

This involves us having the courage to ask bold questions and keep ethics firmly in mind. If we don’t, there’s a real danger that the ethical dimension will get missed: a situation that I’m sure we’re all keen to avoid!


Adrian Reed

Adrian Reed is a true advocate of the analysis profession. In his day job, he acts as Principal Consultant and Director at Blackmetric Business Solutions where he provides business analysis consultancy and training solutions to a range of clients in varying industries. He is a Past President of the UK chapter of the IIBA® and he speaks internationally on topics relating to business analysis and business change. Adrian wrote the 2016 book ‘Be a Great Problem Solver… Now’ and the 2018 book ‘Business Analyst’. You can read Adrian’s blog at http://www.adrianreed.co.uk and follow him on Twitter at http://twitter.com/UKAdrianReed.