Algorithmic accountability

While there needs to be more diversity on the teams developing software in order to truly account for the range of scenarios an algorithm may have to handle, there’s no straightforward, cut-and-dried solution to every company’s algorithmic issues. But researchers have proposed several potential methods to address algorithmic accountability.

Two rapidly developing areas of research address the front end and the back end of the process, respectively, Barocas tells me. The front-end method involves ensuring certain values are encoded and implemented in the algorithmic models that tech companies build. For example, tech companies could make concerns of discrimination and fairness an explicit part of the algorithmic process.

“Making sure there are certain ideas of fairness that constrain how the model behaves and that can be done upfront — meaning in the process of developing that procedure, you can make sure those things are satisfied.”
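To make that concrete, here is a minimal sketch, not a method Barocas endorses, of what constraining a model “upfront” can look like: a logistic regression trained with an added penalty on the gap in average predicted score between two groups, a demographic-parity-style constraint. The synthetic data and every parameter choice here are illustrative assumptions.

```python
# Sketch: encode a fairness concern directly into training by penalizing
# the gap in mean predicted score between two groups. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: features X, labels y, binary protected attribute a.
n, d = 1000, 5
X = rng.normal(size=(n, d))
a = rng.integers(0, 2, size=n)            # group membership (0 or 1)
y = (X[:, 0] + 0.5 * a + rng.normal(scale=0.5, size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(d)
lr, lam = 0.1, 5.0                        # learning rate, fairness weight

for _ in range(500):
    p = sigmoid(X @ w)
    # Gradient of the ordinary log-loss.
    grad = X.T @ (p - y) / n
    # Fairness penalty: lam * (mean score in group 1 - group 0) ** 2.
    gap = p[a == 1].mean() - p[a == 0].mean()
    s = p * (1 - p)                       # derivative of the sigmoid
    dgap = (X[a == 1] * s[a == 1][:, None]).mean(axis=0) \
         - (X[a == 0] * s[a == 0][:, None]).mean(axis=0)
    grad += lam * 2 * gap * dgap
    w -= lr * grad

p = sigmoid(X @ w)
print("score gap between groups:", p[a == 1].mean() - p[a == 0].mean())
```

With the penalty weight turned up, the gap between the groups’ average scores shrinks during training rather than being checked after the fact, which is the “upfront” part of the idea.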

On the backend, you could imagine that developers build systems and deploy them without being totally sure how they will behave, unable to anticipate the potential adverse outcomes they might generate. What you would do, Barocas says, is build the system, feed it a bunch of examples and see how it behaves.

Let’s say the system is a self-driving car and you feed it examples of pedestrians (such as a white person versus a black person versus a disabled person). By analyzing how the system operates based on a variety of inputs/examples, one could see if the process is discriminatory. If the car only stops for white people but decides to hit black and disabled people, there’s clearly a problem with the algorithm.

“If you do this enough, you can kind of tease out if there’s any type of systematic bias or systematic disparity in the outcome, and that’s also an area where people are doing a lot of work,” Barocas says. “That’s known as algorithmic auditing.”
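As a rough illustration of that kind of audit, the sketch below treats a deployed model as a black box, feeds it matched examples that differ only in a group label, and compares favorable-outcome rates across groups. The decision function is a hypothetical, intentionally biased stand-in, not any real company’s system.

```python
# Sketch: a black-box algorithmic audit. Feed matched inputs that differ
# only in the protected attribute and compare outcome rates per group.
import numpy as np

rng = np.random.default_rng(1)

def deployed_model(features, group):
    """Hypothetical system under audit (intentionally biased)."""
    score = features.sum() + (0.0 if group == "A" else -0.8)
    return score > 0                      # True = favorable outcome

trials = 5000
rates = {"A": 0, "B": 0}
for _ in range(trials):
    features = rng.normal(size=3)         # identical input for both groups
    for group in rates:
        rates[group] += deployed_model(features, group)

for group, hits in rates.items():
    print(f"group {group}: favorable-outcome rate = {hits / trials:.3f}")
# A large gap between the two rates is the kind of systematic disparity
# an audit like this is designed to surface.
```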

When people talk about algorithmic accountability, they are generally talking about algorithmic auditing, of which there are three different levels, Pasquale says.

“In terms of algorithmic accountability, a first step is transparency with respect to data and algorithms,” Pasquale says. “With respect to data, we can do far more to ensure transparency, in terms of saying what’s going into the information that’s guiding my Facebook feed or Google search results.”

That means, for example, enabling people to better understand what’s feeding their Facebook news feeds, their Google search results and suggestions, and their Twitter feeds.

“A very first step would be allowing them to understand exactly the full range of data they have about them,” Pasquale says.

The next step is something Pasquale calls qualified transparency, where outside parties inspect the systems to see if there’s something untoward going on. The last, and perhaps most difficult, part is getting tech companies to “accept some kind of ethical and…
