inkwell.vue.505 : Virginia Eubanks, Automating Inequality
permalink #26 of 43: Ari Davidow (ari) Sat 15 Sep 18 06:38
That takes me to a thread I have been saving for the end of our discussion. Virginia, you are part of a project called "Our Data Bodies." It is related to the subjects you raise in the book, but also very different. Can you talk about that project? (https://www.odbproject.org/)
inkwell.vue.505 : Virginia Eubanks, Automating Inequality
permalink #27 of 43: Virginia Eubanks (veubanks) Sun 16 Sep 18 15:32
Ari - Agreed that there is a difference between the two. But I don't agree that the difference is necessarily INTENT. While the Daniels administration in Indiana may well have held negative and punitive ideas about social welfare, the designers in Los Angeles and Allegheny County are very well-intentioned, smart people who care deeply about the well-being of the people their agencies serve.

But too often, we believe that designing tools to be NONdiscriminatory -- that is, to not explicitly promote or excuse bias -- is enough. I often tell designers that that's like designing cars without gears and then driving them in a place like San Francisco. Why should we be surprised when they crash? They have no mechanism for navigating the twists and turns, hills and valleys of the landscape. Similarly, if we design "neutral" tools in a landscape shaped by deep inequalities, we should not be surprised when they make inequality worse.

Paulo Freire says that neutral education is education for the status quo. "Neutral" technology design is likewise design for the status quo. We have to build gears into our automated systems to deal with the deep inequalities that mar our social and political landscapes. And we have to build them in in such a way -- either technically or in policy -- so that they don't require good intentions by individual designers or policy-makers.
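To make the "gears" idea concrete, here is a minimal, hypothetical sketch of an automated eligibility screener whose safeguard lives in code rather than in anyone's good intentions. Every name, category, and threshold below is invented for illustration; nothing is drawn from a real benefits system.

    # A minimal, hypothetical sketch of "building gears in": the
    # safeguard is enforced by the system itself, not by the goodwill
    # of whoever designs or operates it. All figures are invented.

    from dataclasses import dataclass

    @dataclass
    class Applicant:
        income: float
        household_size: int
        high_barrier: bool  # e.g., homeless, disabled, limited English access

    def poverty_line(household_size: int) -> float:
        # Placeholder figures, not real federal guidelines.
        return 13_000 + 4_500 * (household_size - 1)

    def screen(applicant: Applicant) -> str:
        eligible = applicant.income <= poverty_line(applicant.household_size)
        if eligible:
            return "approve"
        # The "gear": an adverse decision for a high-barrier applicant
        # can never be finalized automatically. The rule holds
        # regardless of who maintains the system.
        if applicant.high_barrier:
            return "refer_to_human_review_with_assistance"
        return "deny_with_appeal_notice"

    print(screen(Applicant(income=20_000, household_size=1, high_barrier=True)))
    # -> refer_to_human_review_with_assistance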
inkwell.vue.505 : Virginia Eubanks, Automating Inequality
permalink #28 of 43: Pamela McCorduck (pamela) Sun 16 Sep 18 20:45
Well put. Imagining that technology, in particular AI, was "neutral" was one of my life's biggest mistakes. And I'm made bonkers by the criminal sentencing algorithms that are proprietary! You can't even check how they're set up, what their assumptions are, let alone correct them. I heard Jeannette Wing give a lecture a few days ago on Big Data and its algorithms, and I'll say more about what she had to say, but it's nearly midnight in my time zone.
inkwell.vue.505 : Virginia Eubanks, Automating Inequality
permalink #29 of 43: Jon Lebkowsky (jonl) Mon 17 Sep 18 14:35
Virginia, you said "we have to build them in in such a way -- either technically or in policy -- so that they don't require good intentions by individual designers or policy-makers." That's what we tried to do in Texas when I was in that field, which was in the late 80s and 90s. The system was pretty objective, in fact it enforced objectivity - eligibility was determined by data about income, resources, household size, etc. It was difficult to be punitive as a caseworker, though it was also difficult to provide anything but the prescribed support. I agree with you that technology can incorporate values, though, and I didn't mean to suggest otherwise. What I was trying to say is that, in our specific system, we avoided supporting "animus against the poor," and we were considering the privacy issues as well. And that it's possible to do so, that not every project will inherently be as screwy as Indiana's.
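A rough sketch of the kind of rule-based determination Jon describes: eligibility computed only from prescribed facts, with no input for caseworker discretion. The thresholds below are invented for illustration; the actual Texas rules differed.

    # Hypothetical rule-based eligibility: objectivity is enforced by
    # construction, since the function accepts no judgment calls. A
    # caseworker can be neither punitive nor more generous than the
    # prescribed support. All limits here are invented.

    def determine_eligibility(income: float, resources: float,
                              household_size: int) -> bool:
        income_limit = 1_000 + 350 * household_size   # monthly, invented
        resource_limit = 2_000                        # invented
        return income <= income_limit and resources <= resource_limit

    print(determine_eligibility(income=1_500, resources=500, household_size=2))
    # -> True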
inkwell.vue.505 : Virginia Eubanks, Automating Inequality
permalink #30 of 43: Dodge (dodge1234) Mon 17 Sep 18 17:41
I agree about forms. When I started my disability claim some years ago, I was sent a 15-page, front-and-back document to fill out -- which they rejected. Not until I ended up in the hospital with an advocate did I get any progress at all. I was rejected because of the answer I gave to ONE question, which I misunderstood. They threw out my request and I had to start over. Frankly, I don't know how really, really disabled people get onto disability without someone not disabled to help them.
inkwell.vue.505 : Virginia Eubanks, Automating Inequality
permalink #31 of 43: Ari Davidow (ari) Tue 18 Sep 18 06:32
This is the murky area, and I think, Virginia, that I mostly agree with you. If we =want= to enable people to get help, we design forms to help them, and we put in place policies such that when someone misinterprets a question, as Holley did in the post above, we provide help, not punishment.

The problem is that this isn't just our attitude towards people in need. There is something deep-rooted in our culture that makes it acceptable to use forms to prevent service. Just yesterday, having spent an hour filling out a form to request routine software at work -- something that should take, maybe, five minutes, but for the fact that I had to try the form multiple times, all previously filled-out information disappearing capriciously, and then had to experiment until I got the result I needed -- I invoked Dilbert's Mordac, the Preventer of Information Services.

Having noted that, I would suggest that maybe the solution lies here. If we can get our helping agencies focused on helping people, and designing software, where software is part of that helping, to facilitate, maybe we learn something broader about ourselves that is worth sharing. But, first, we have to find ways to change attitudes. Virginia, have you found anything in your own work that helps make that change, or that otherwise addresses these issues effectively, or are we still at the "hey, there's a problem here that we need to figure out how to address" stage?
inkwell.vue.505 : Virginia Eubanks, Automating Inequality
permalink #32 of 43: Ari Davidow (ari) Tue 18 Sep 18 07:39
A note to people who are not on the WELL who have been following this discussion: you can participate directly by sending email -- except that I tried the link posted earlier and it does not seem to be a "mailto" or other useful link. Jon, can you post the correct link? In the meantime, if you send email to ari at well.com, I will post it for you.
inkwell.vue.505 : Virginia Eubanks, Automating Inequality
permalink #34 of 43: Julie Rehmeyer (jrehmeyer) Tue 18 Sep 18 11:54
Again, I'm rather to the side of the main thrust of this conversation, so don't let me derail things. But the very beginning of this conversation was helpful to me in framing this piece and showing how the core of ableism is the same as classism, sexism, racism, etc. <http://www.latimes.com/opinion/op-ed/la-oe-rehmeyer-scientific-ethical-lapses-in-neflix-series-on-chronic-illness-20180917-story.html#>
inkwell.vue.505 : Virginia Eubanks, Automating Inequality
permalink #35 of 43: Virginia Eubanks (veubanks) Tue 18 Sep 18 16:51
I think we're stuck on this idea of intent -- do we "mean" to make a tool that is punitive or surveillant? I think intent is much less important than impact.

It might be inferred from the Indiana case that the Daniels administration intended to create barriers to public assistance and kick people off the rolls. That's a story most people are pretty comfortable with -- politicians looking to make hay by picking on the poor, a corporation stepping in to do the dirty work, poor and working-class people taking it in the neck. But that's why I included the Los Angeles and Allegheny County stories. If I wanted to write a book that was all horror stories, I would have chosen very different cases. These are some of the BEST systems we have, not the worst. In both LA and AC, designers and policy-makers are very smart, very well-intentioned people who care deeply about the well-being of the people their tools and agencies serve. They have, in fact, done everything most progressive critics of algorithmic decision-making ask them to do: they have been (mostly) transparent about how the tools are built; they retain public control over them, providing a level of accountability; and they even used some principles of participatory or user-centered design.

So the bigger question I want readers to consider is: If we are doing everything right, why do the systems we are producing in public services STILL profile, police, and punish the poor? I think that has to do with the "deep social programming" we have about poverty in this country, exactly what you mention above, Ari. We believe that poverty is something that happens to a tiny minority of probably pathological people. So it seems reasonable to "diagnose" the poor -- for the first step in social assistance to be submitting to some kind of morals test. But the reality is that 51% of us will be below the poverty line at some point in our adult lives (between the ages of 20 and 64), and two-thirds of us will access MEANS-TESTED public assistance. That's straight welfare, not reduced-price school lunches or Social Security. If poverty is a majority problem, then building moral thermometers doesn't make sense. What we need instead are universal floors.
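One toy way to see the design difference Virginia is pointing at: a means-tested benefit gates help behind a qualification step, while a universal floor simply tops every income up to a minimum. All figures below are invented for illustration.

    # Hypothetical contrast between the two designs. FLOOR and all
    # incomes are invented numbers, not real program parameters.

    FLOOR = 15_000  # hypothetical annual income floor

    def means_tested_benefit(income: float, passed_screening: bool) -> float:
        # The "moral thermometer": help is conditional on first
        # submitting to, and passing, a qualification process.
        if not passed_screening:
            return 0.0
        return max(0.0, FLOOR - income)

    def universal_floor(income: float) -> float:
        # No diagnosis step: everyone below the floor is topped up.
        return max(0.0, FLOOR - income)

    # Same income, very different outcomes when screening fails:
    print(means_tested_benefit(9_000, passed_screening=False))  # 0.0
    print(universal_floor(9_000))                               # 6000.0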
inkwell.vue.505 : Virginia Eubanks, Automating Inequality
permalink #36 of 43: Pamela McCorduck (pamela) Tue 18 Sep 18 19:18
And 99% of us are getting screwed other ways. More deep social programming. For the last forty years, our every metric has been economics. The poor have different reasons from the very rich for paying attention to these metrics, but it's always present. I am not hopeful, Virginia. I loved/was horrified by your book, but do I have hopes for change for the better? Not in the foreseeable future.
inkwell.vue.505 : Virginia Eubanks, Automating Inequality
permalink #37 of 43: Virtual Sea Monkey (karish) Tue 18 Sep 18 19:23
The Los Angeles story is as much about inadequate resources as it is about systemic failure. The system seems to do a decent job of identifying the people at greatest risk and those at least risk, and the resources are spent on those two groups. The people in the middle are left to their own devices to deal with their needs while the automated bureaucracy torments them. This turns the idea of triage on its head.
inkwell.vue.505 : Virginia Eubanks, Automating Inequality
permalink #38 of 43: Betsy Schwartz (betsys) Thu 20 Sep 18 13:35
And to pick up on something Pamela said earlier - proprietary algorithms are bad enough, but when we get into self-learning systems it isn't even possible to *determine* the algorithm except by trial and error. If there's a prejudice or other unfairness baked into the system, how would we ever know?
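A hedged illustration of that trial-and-error auditing: with a black-box scorer, about all an outsider can do is vary one input at a time and watch the decision. In this sketch, black_box_score is an invented stand-in for any opaque model, and the test flips a single attribute across a set of cases to count how often the decision changes.

    # Black-box probing sketch: flip one attribute, hold everything
    # else fixed, and measure how often the decision changes. The
    # model, features, and threshold are all invented for illustration.

    import random

    def black_box_score(features: dict) -> float:
        # Stand-in for a proprietary or self-trained model we cannot
        # inspect; deterministic per input so probes are repeatable.
        random.seed(str(sorted(features.items())))
        return random.random() + (0.2 if features.get("group") == "B" else 0.0)

    def flip_test(cases: list, attribute: str, a: str, b: str,
                  threshold: float = 0.5) -> float:
        flips = 0
        for case in cases:
            decision_a = black_box_score({**case, attribute: a}) > threshold
            decision_b = black_box_score({**case, attribute: b}) > threshold
            flips += decision_a != decision_b
        return flips / len(cases)

    cases = [{"income": i, "group": "A"} for i in range(1_000, 20_000, 1_000)]
    rate = flip_test(cases, "group", "A", "B")
    print(f"decision flips when group A -> B: {rate:.0%}")

Even this crude probe only reveals behavior on the cases we think to try, which is Betsy's point: a baked-in unfairness can hide indefinitely in the inputs we never test.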
inkwell.vue.505 : Virginia Eubanks, Automating Inequality
permalink #39 of 43: Ari Davidow (ari) Thu 20 Sep 18 13:57
Here is a question posed by an off-WELL reader who asked to remain anonymous: Sorry if I've misunderstood, but when you said that technology is the wrong tool for the job -- a hammer when you're trying to paint a barn -- are you saying that technology doesn't just reflect the values of those who create it, but that it is, in its very basic nature, not the tool we should be using to try to construct solutions in this particular space? If so, if technology is the hammer, what's the paintbrush? Are there alternative solutions? It's an interesting idea and I'd love to hear more about it. Thank you!
inkwell.vue.505 : Virginia Eubanks, Automating Inequality
permalink #40 of 43: Pamela McCorduck (pamela) Tue 25 Sep 18 06:53
While we await Virginia's response to #39: the NY Times reported yesterday on a new startup called The Markup, a news site largely funded by Craig Newmark (of Craigslist). The Markup's goal is to study algorithms that have significant social impact in three broad categories: how profiling software discriminates against the poor and other vulnerable groups; internet health and infections, like bots, scams, and misinformation; and the awesome power of the tech companies. I hope you can connect with them, Virginia. Your work certainly fits into their first investigative category.
inkwell.vue.505 : Virginia Eubanks, Automating Inequality
permalink #41 of 43: Jon Lebkowsky (jonl) Fri 2 Nov 18 08:03
I'm dropping in late to note that Virginia didn't make it back to respond - but we really appreciate the time she spent with us, and I encourage anyone reading this to read _Automating Inequality_ and give some thought to issues around how technology can both serve and abuse. Thanks also to all other participants in this discussion.
inkwell.vue.505 : Virginia Eubanks, Automating Inequality
permalink #42 of 43: Ari Davidow (ari) Wed 3 Apr 19 13:24
This probably belongs here: Big Data loses a neighborhood: <https://onezero.medium.com/how-googles-bad-data-wiped-a-neighborhood-off-the-map-80c4c13f1c2b>
inkwell.vue.505 : Virginia Eubanks, Automating Inequality
permalink #43 of 43: Tiffany Lee Brown (T) (magdalen) Thu 4 Apr 19 21:52
fascinating article. thank you.