Dr. Elizabeth Buchanan is Endowed Chair in Ethics and Director of the Center for Applied Ethics at the University of Wisconsin-Stout. She has a Ph.D. in Applied Ethics, specializing in Information Ethics, Research Ethics, Internet Research Ethics, and Information Policy.
IA Summit 2015 Keynote
Topic(s): algorithms, big data, and ethics
In a world with big data and increasing automation thanks to algorithms, are we able to create ethical algorithms? What should we consider? Dr. Elizabeth Buchanan gives an evening keynote on the matter.
Elizabeth Buchanan: Thank you. This has been a fantastic conference. This is my first time at the IA Summit, and it’s just been a blast for me. I hope you’ve enjoyed your time here in Minneapolis. Are there any locals? I’m just curious.
Oh, yeah, quite a few locals. I’m kind of local, from the other side, the other state, that way. You’ve had typical Minnesota weather, I think. I hope you’ve experienced this Minnesota nice.
Elizabeth: Do you all know about this? See, I’m not from Minnesota. I’m from New Jersey.
Elizabeth: Yeah! Woo! Cynics like us really think it’s passive aggression as opposed to any kind of nice.
Elizabeth: Anyway, with that, let me thank our co-chairs. I can’t imagine a better team of people. Happy birthday, Veronica. What a great group of people to celebrate with. Mike and Jessie, thanks so much for the invitation to join you. This is definitely one of my best conferences. I arrived and there was a bag of summit beer waiting for me.
Elizabeth: Thanks to Vanessa. I don’t know if she’s in the room, Vanessa and ASIS&T for their sponsorship. They’ve been wonderful.
We’re going to talk about ethics. As Jessie said, I hold an endowed chair position at the University of Wisconsin-Stout. It’s an interesting and different role. Basically, my whole job consists of infusing ethics across our campus, our curriculum, and our community. I don’t teach a regular course load. I advise students, but I don’t have to sit on a zillion committees. It’s nice for an academic.
I do this by working across departments. One day, I’ll be working with our applied math people. That was a challenge. I thought, “How am I going to bring ethics to applied math?” That was a little weird. I work with design students. I work with hospitality students, all kinds of disciplines across our campus.
The whole goal is to get our students thinking about ethics as they relate to their professions. What does it mean? What are your professional values? How do you take these codes of ethics that we all have and really put them into practice? Do they mean anything? If there are no consequences or if the code isn’t enforceable, what good is it? Does it mean anything?
I try to get students really thinking about what’s possible. Everyone thinks, “Oh, God, here comes the ethics lady. She’s going to tell us everything we can’t do.” That’s not what ethics is. Ethics is about what’s possible. I’m not going to get into the real philosophical stuff today.
Ethics is about what’s good. It’s about what’s just. It’s about what’s appropriate. We tend to hear more about the people who are unethical. We hear about the things that are wrong or bad, like some of the examples from the Slam case.
The other thing that we tend to do is we tend to think about compliance a whole lot more than we think about ethics. Is that true? Yeah. Ethics and compliance are not the same things. One of the things I really struggle with, particularly with 18- and 19-year-olds, is getting them to really think that ethics and law are not the same things. Ethics and religion are not the same things. These are all very different, interrelated, but very different concepts.
I want us to use this framework if we’re thinking about ethics: ethics as practice, ethics as method, and ethics as commitment. These are principles. They originally came from the Hastings Center, which is a bioethics center. They’ve been morphed to fit the practice of ethics education and to provide a direction for how we talk about applied ethics, ethics in our education, ethics in our profession.
This is what I want us to do for the next 30 minutes, whatever the time between me and the bar is out there. We’re going to stimulate our moral imaginations. We want to recognize moral issues. We want to teach how to analyze and apply moral concepts and principles to our work, to our lives. We want to encourage personal, civic, societal, and professional responsibility.
That one is tough. To me, it’s very hard to disentangle one’s personal ethics from one’s professional ethics. I always try to get people thinking about, “What are your professional values?” If you had to define to somebody, what are the values of information architecture? What are they? Tell me what they are. Yell it out.
Audience Member: Clarity.
Elizabeth: Clarity. What else?
Audience Member: Understanding.
Elizabeth: What was that one?
Audience Member: Advocacy.
Elizabeth: Advocacy. Someone said over here…
Audience Member: Instruction.
Elizabeth: Anything else?
Audience Member: Responsibility.
Elizabeth: Responsibility. How different are those things from your life? How do you move between your personal and your professional spheres? How do you interface with people in different zones of your life?
The last thing is teaching how to understand, respect, and reason around ethical disagreement or uncertainty. We have a lot of disagreements right now, politically and societally, and a lot of uncertainty. In a very generalizing view, it seems that we’re much more polarized as a society than we have been.
That might not be empirically true but anecdotally looking in, it looks like we’re much more polarized in our ability to disagree. It’s not working quite so well.
Along that line, we always talk about the gray areas in morals. This is our moral gray area. Think for a minute, as you were just doing. You’ve identified your professional values. Think about your everyday work. Think about what you do each day. Think about your field. Think about what this represents. What is this conference representing? What ethical challenges do you experience on a daily basis?
Again, the Slam case couldn’t have been better. I didn’t know about that; otherwise, I wouldn’t have decided to come. I would have let you guys act it all out. You don’t need me up here.
Think about ethical possibilities, not just ethical challenges. There are ethical possibilities in ways of working. I don’t know, again, about the green team. I don’t think they were on the possibility side of things, but definitely, the others were. Not too long ago, Thom Haller, who’s the associate editor for IA in the Bulletin of ASIS&T…have some of you read that?
It was a couple of years ago. I don’t know if anyone remembers his column, but it came back to me when I was thinking about this talk. He wrote about his experiences when architectural questions are not asked. Does anyone remember that? Anyone remember what happened? Thom went missing from Facebook. That should scare everybody.
He went missing from Facebook. This very short, little piece in the Bulletin talked about how and why it was that he went missing from Facebook. It is serious if you don’t tell people you’re going off the grid for a while, or “I’m taking a break,” whatever, “I’m disconnecting.” People get concerned when they don’t see you on social media. That’s the reality.
What happened was that Thom, due to a user choice, a choice that he actively made, went missing. All he did was check either all or none, or public or private, the wrong way. The way he describes it is that the algorithms were working in such a way that his choice wasn’t evident to him, what his choice actually resulted in. That’s a big question. What happens when architectural questions are not asked? Then, I’d like to push us to think about this question. What happens when ethical questions are not asked?
I’ve been doing this information ethics stuff for a long time. What you see on the screen, these are some of the traditional information computing technology ethics issues. They’ve been around. They’ve been identified. They’ve been written about, studied empirically, etc.
I’m not going to go through all of them in any detail, but you can think about them. You can see these in your everyday work. Yes? OK. While all of these are really important, I do want to spend a bit of time just talking about this last one, implicit and explicit bias.
As designers, as information architects, we want to think about the values that we embed, that we sustain, that we don’t even know we’re sustaining. That’s where this idea of implicit bias comes from. Has anyone ever had to do that Harvard implicit bias test where you have to keep clicking a couple of heads saying, “Yes.”
I’m going to do it a little differently and think about implicit bias. I’m not going to make everyone stand up like Veronica did. How many people in the room are racist?
Elizabeth: How many people in the room are sexist? You? You’re going to get them all.
Elizabeth: Homophobic? How many of you benefit from white privilege? Heterosexual privilege? Economic privilege? You could see the hands going up and down, up and down. How different those first couple of questions were from the second way I asked it, but they’re all about implicit bias. We tend to get very defensive when we say, “You’re a sexist. You’re a racist. You’re a homophobe.” We get very defensive.
Here’s the good thing. Implicit biases are malleable, meaning they can be moved. Our brains are incredibly complex. The implicit associations that we have formed can be gradually unlearned. That’s the good thing.
We start unlearning by employing those ethical tenets that we just had on the screen. We employ those ethical tenets in every aspect of our work in order to start unlearning, unembedding those implicit biases.
We have to do a better job. This is what I do. This is my job: to do a better job of teaching others to understand, respect, and reason about ethical disagreements, practical disagreements, which may have their foothold in cultural stereotyping, cultural conditions, cultural relativism.
Through participatory design models, we can encourage and benefit from difference. One thing I did want to say, just as an aside: I go to a lot of conferences, and this was the first time I saw a conference code of ethics and a luncheon specifically for LGBTQ attendees. Congrats to you, guys, because…
Elizabeth: Those are simple acts of ethics. That’s the practice of ethics. That’s embedding. That’s putting in new values into what you do.
How else do we wrestle with embedded values in our technologies, in our information systems? We do that through a number of approaches. Some of these are probably very familiar to you. Value-sensitive design, community-based participatory design or research, anticipatory ethics, thinking ahead of the curve about the ethical implications, ethical algorithms.
We’ve gotten very used to thinking about algorithms as either, one, neutral, which is not correct, or, two, really, really bad. I want to give us a way of thinking about ethical algorithms today, and then the last one, action research. These approaches situate our participants, our actors, our users. We call them different things depending on what discipline you come from. They situate those stakeholders as central and informed, as empowered decision makers.
If you know value-sensitive design work, Batya Friedman states that central to a value-sensitive design approach are: analysis of both direct and indirect stakeholders; distinctions among designer values, values explicitly supported by the technologies, and stakeholder values; individual, group, and societal levels of analysis; integrative and iterative conceptual, technical, and empirical investigations; and finally, a commitment to progress, not perfection.
These approaches allow us to stimulate our moral imagination. They allow us to experience the ethical opportunities in our work. Information ethics is a new field; the term was coined in the 1980s. Obviously, our technologies looked very different then, right? Things were not quite as small as they are now.
I came across this quote of this particular computer on your right. It says, “In 1983, this was considered svelte, boasting a colossal 256 Kbytes of RAM. It came with the bargain basement price tag of $9,200 and an expression of immovable tranquility for the user.”
Elizabeth: What does that mean? I’m not even sure I know what that means. I’m pretty sure this idea of immovable tranquility is now something that we might be thinking about as movable tranquility. Any of you wearing your new Apple Watches yet? They came out yesterday. Nobody went and got one? I don’t believe it. I was going to bring you on stage.
Elizabeth: Ethically speaking, though, we have size, we have power, we have speed, all these technical things that have changed. Ethically speaking, what’s changed? We definitely have prettier, more svelte computing than we did in those days. We want to think about what ethical issues we are now experiencing in light of ubiquitous or pervasive computing.
That’s our new normal. It’s everywhere. Ambient findability. I’ve gone about 20 minutes without saying big data, but I’m going to have to say it because it’s important to talk about.
Algorithms are still algorithms. What’s an algorithm? It’s a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer. We’d all agree that our algorithms today are smarter, they’re faster, and they seem to be really intimate. They seem to know us pretty well, at least most of the time. If they’re working well, the intimacy works.
Algorithms, in theory, make our computing experiences better. That’s a very philosophical word, too, better. What are algorithms supposed to do? They’re supposed to solve epistemological problems, ontological problems. Those are just fancy ways of saying, ways of knowing, ways of being, ways of existing.
Algorithms help us make decisions about what we do, what we experience, what we share, what we know, and how we exist. That’s the reality. That’s the new normal of the algorithmic world.
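To make concrete how value judgments hide inside even a simple algorithm, here is a minimal, hypothetical sketch of a feed-scoring rule. Every field name and weight is invented for illustration, and no real platform’s code is implied; the point is that the coefficients themselves are ethical choices about whose voice gets amplified.

```python
from dataclasses import dataclass

@dataclass
class Post:
    likes: int
    comments: int
    from_close_friend: bool
    hours_old: float

def score(post: Post) -> float:
    """Rank a post for display; every coefficient is a value judgment."""
    engagement = post.likes + 2.0 * post.comments        # valuing comments over likes: a choice
    closeness = 1.5 if post.from_close_friend else 1.0   # valuing intimacy: a choice
    recency = 1.0 / (1.0 + post.hours_old)               # valuing novelty: a choice
    return engagement * closeness * recency

posts = [
    Post(likes=120, comments=4, from_close_friend=False, hours_old=2.0),
    Post(likes=10, comments=8, from_close_friend=True, hours_old=5.0),
]
# Whichever post "wins" is decided entirely by the weights above.
for post in sorted(posts, key=score, reverse=True):
    print(round(score(post), 2), post)
```

Change any one weight and a different reality is surfaced to the user; that is the sense in which algorithms shape what we know and how we exist.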
Much of our work involves decision making. We can’t get away from that. I don’t want to talk too much about the decision making literature, but let me say this. Our brains appear wired in ways that enable us often unconsciously to make the best decisions possible with the information we’re given. In simplest terms, the process is organized like a court trial.
Sights, sounds, and other sensory evidence are entered and registered in sensory circuits of the brain. Other brain cells act as the brain’s jury, compiling and weighing each piece of evidence. When the accumulated evidence reaches a critical threshold, a judgment, a decision, is made. That’s what’s happening up here.
What’s happening here? An algorithmically manipulated decision looks and feels somewhat different than that process, and maybe that’s a good thing. Why might that be a good thing? Why might it be a good thing to defer to the rise of the algorithmic machine? Well, for some of these reasons. There’s the bumper sticker, “Don’t believe everything that you think.” We do a lot of that; we’re ruled by our own biases.
When we’re making decisions, we see what we want. We ignore probabilities. We minimize risks that uproot our hopes. Algorithms are what? They’re all probability based. They offer us only what makes statistical sense. Maybe algorithms can undo our biases. All of you who said you have these implicit biases, we all have them.
Maybe algorithms are going to undo our biases, maybe. Does anyone remember [laughs] the Facebook emotional contagion study from a couple of months ago, six or eight months now? Anybody remember that? OK. Anybody not remember it? OK, a couple of hands not knowing it.
We don’t know if anyone was truly harmed by that study. What do we have, 100 people, 200 people here? I don’t know. There were 700,000 people in the Facebook emotional contagion study.
Of those 700,000, some of them missed some of their News Feed for the week. They got more positive or more negative news in their Facebook News Feed. They probably didn’t even know.
Would you know if you missed something? It’s a metaphysical question, too, you don’t know what you don’t know. Epistemologically speaking, it’s kind of a wash. If you didn’t know it was there, the tree falls in the forest kind of thinking.
The Facebook study was not the first time that we’ve seen the power of data mining, data aggregation, and algorithmic processing. We’re seeing that more and more on a very social, very public level. It used to be just us geeks in the computing world, the information world, wherever you fit, who would talk about these things.
All of a sudden, the Huffington Post was everywhere with “This is not ethical!” It was everywhere for a good couple of weeks. As far as news goes, that’s a pretty long period of time, and it certainly wasn’t Facebook’s first mea culpa, but it did bring our attention to these processes.
What it helped us think through is how much of our online experiences are technology-mediated experiences, how much of our realities are being shaped. As I said, this stuff isn’t new. Does anybody remember way back? This is 2006, which seems like a long time ago.
Jorge in his keynote yesterday talked about Internet years, dog years, and life years, so 2006 is like a lifetime ago. We can think about this particular case in which an AOL searcher was identified. AOL thought they were giving the research community this gift, “Here’s all this data, here’s this great data set, you can explore it, do what you want with it.”
As researchers want to do, they wanted to figure something out, so they mixed, matched, mined, and aggregated, and lo and behold, here comes Thelma Arnold, who lived in Georgia and never thought she would be in The New York Times. She certainly never thought that search terms of hers that ranged from “numb fingers” to “60 single men” to “dog that urinates on everything” were going to be published in The New York Times.
She explained in an interview at the time that she routinely researched medical conditions for her friends. She explained her queries, for example saying, “I have friends who need to quit smoking,” or “I have a friend whose dog is peeing on everything, and I want to be able to help them.” I raise this case just because it’s still a critical case as we look at the history of Internet research ethics cases.
We think about this as a critical moment in privacy discussions, in privacy advocacy. Again, it was 2006. Does anybody remember where you were?
What was your data persona? What was your persona in 2006? Anybody remember? Were you on Twitter in 2006?
I found this image. The 2006 South by Southwest was the first big event to use Twitter. Did everybody know that? I didn’t know that.
That was very cool. Then, lo and behold, what else came out in 2006? That svelte Apple MacBook?
In a few short years, the idea or the reality of pervasive ubiquitous computing has become our normal. We’re always connected, we’re always sending and receiving data engaged in and with information, and possibly engaged in and with knowledge.
By 2015, where we are now, I would suggest that the power and the ethics of algorithms tops our list of information ethics issues. This is the case for many reasons. I want to go through a few of these reasons with you. You can read that.
All right, now what?
Elizabeth: Does anybody remember these images? You know where I’m going with some of this. Anybody remember what your Year in Review was like?
Did your apartment burn down? Did you lose your home? Did you lose your daughter?
I’m going to read this, and I certainly won’t do it justice, but this was written by Eric Meyer: “A picture of my daughter, who is dead, who died this year. Yes, my little girl looked like that, true enough. My year looked like the now-absent face of my little girl.
It was still unkind to remind me so forcefully, and I know of course that this is not a deliberate assault. This inadvertent algorithmic cruelty is the result of code that works in the overwhelming majority of cases, reminding people of the awesomeness of their years, showing them selfies at a party or whale spouts from a sailing boat or the marina outside their vacation home.
But for those of us who lived through the death of loved ones, or spent extended time in the hospital, or were hit by divorce or losing a job or any one of a hundred crises, we might not want another look at this past year. To show me Rebecca’s face and say ‘Here’s what your year looked like!’ is jarring. It feels wrong, and coming from an actual person, it would be wrong.
Coming from code, it’s just unfortunate. These are hard, hard problems. It isn’t easy to programmatically figure out if a picture has a ton of Likes because it’s hilarious, astounding, or heartbreaking.
Algorithms are essentially thoughtless. They model certain decision flows, but once you’ve run them, no more thought occurs. To call a person “Thoughtless” is usually considered a slight, or an outright insult, and yet, we unleash so many literally thoughtless processes on our users, on our lives, on ourselves.”
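A hypothetical sketch of the kind of thoughtless selection Meyer describes, with invented data and certainly not Facebook’s actual code, shows why raw engagement is a poor proxy for joy:

```python
# Pick the year's "highlight" by engagement alone, with no notion of
# what the engagement means. Grief often draws the most interaction.
photos = [
    {"caption": "beach vacation", "likes": 87},
    {"caption": "new puppy", "likes": 143},
    {"caption": "in memoriam", "likes": 412},
]

highlight = max(photos, key=lambda photo: photo["likes"])
print(f"Here's what your year looked like: {highlight['caption']}")
# Prints: Here's what your year looked like: in memoriam
```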
What else about the ethics of algorithms? That one’s pretty much the worst, I would say.
In a recent dissertation, Anna Lauren Hoffmann called attention to the ways in which the Google Books infrastructure undermines social justice. It’s done in fairly simple ways: through the quality of scans and the metadata, through the visibility of indexes and the book preview mode, and through Google’s conception of the value of the information.
Hoffmann shows, through a socio-technical account, how those three dimensions affect the ways in which we think about social justice through Google Books and information technologies.
How else? What else do we do with our algorithms?
I don’t know if you can read this application for a marriage license. There’s space for the male applicant and there’s space for the female applicant. Drop-down menus, male/female, radio buttons: “Please select one of these options.”
Believe it or not, we still see, I still see, surveys that come through. I sit on the IRB, the research ethics board. We still see people put through surveys where it’s a binary, and it’s a forced binary, because if you don’t click on your demographic, your identity, you don’t go forward in the survey. Needless to say, those get sent back from the IRB, but we’re still seeing this.
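As an illustrative sketch, and only a sketch under assumed requirements, a survey question can be designed so that identity is neither forced into a binary nor a gate on participation:

```python
from typing import Optional

# A hypothetical gender question: an open set of options, a free-text
# self-description, an explicit opt-out, and no hard requirement.
GENDER_QUESTION = {
    "label": "How do you describe your gender?",
    "options": [
        "Woman",
        "Man",
        "Non-binary",
        "Prefer to self-describe",  # paired with a free-text field
        "Prefer not to answer",     # an explicit, honored opt-out
    ],
    "required": False,              # skipping it never blocks the survey
}

def can_proceed(answer: Optional[str]) -> bool:
    """Unlike a forced binary, a blank answer still lets you continue."""
    return (not GENDER_QUESTION["required"]) or answer is not None

assert can_proceed(None)  # the respondent moves forward either way
```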
What else about our algorithms? Does anybody follow this website, Spurious Correlations? To me, it’s the reason the Internet exists.
Elizabeth: Countless correlations can be found in these large data sets. Big data could be re-framing the ways in which we think about, and the ways in which we do, research. With big data, we can pretty much find correlations across any range of weird things.
What would an ethical algorithm do differently? How would it make sure that we don’t receive pictures of our deceased child, or that we don’t spuriously accept correlations between the number of people who drowned by falling into a swimming pool and the number of films Nicolas Cage appeared in?
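To see why big data makes such spurious correlations inevitable, here is a small simulation with purely random, invented data: search enough unrelated series and something will correlate strongly with anything.

```python
import random

random.seed(42)

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# One "outcome" (ten yearly data points of noise) and 10,000 candidate
# "causes", all noise as well. No series has anything to do with another.
outcome = [random.gauss(0, 1) for _ in range(10)]
best = max(
    abs(pearson([random.gauss(0, 1) for _ in range(10)], outcome))
    for _ in range(10_000)
)
print(f"Strongest correlation found in pure noise: r = {best:.2f}")
# With this many tries, r above 0.9 will typically turn up.
```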
Elizabeth: Our research is changing, though. We can find a reason, we can find a research question, after the fact. We have the answers already: “Oh, here’s the correlation, so now we back into it.” That’s a very different model of research, and one that I don’t particularly find very valid.
All right, I want to move on and talk a little bit about some movements toward this idea of ethical algorithms. Jeremy Pitt from Imperial College has done a bunch of work thinking about what ethical algorithms would look like in practice. They wouldn’t do some of those things that we just saw, that’s a given.
I’m going to quote from Pitt one thing that an ethical algorithm would do. It’s about resource allocation: “Finding a way an algorithm can allocate scarce resources to individuals fairly based on what’s happened in the past, what’s happening now, and what we might envisage for the future.” Predictive analytics, but used in a very positive way.
“Another aspect is around alternative dispute resolution, trying to find ways of automating the mediation process.” Pitt relates this to a retired judge telling him that a crucial element of a successful legal system is that the loser of a case, despite being unhappy, can appreciate that the process was fair and transparent and will hold no resentment against the system. Apply that to our work.
A third is what Pitt calls “design contractualism,” the idea that we make social, moral, legal, and ethical judgments and then try to encode them in the software, to make sure those judgments are perceptible to anyone who uses that software.
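To make the first of Pitt’s three ideas concrete, here is a minimal sketch of fairness over time in resource allocation. It is an invented toy, not Pitt’s algorithm: when a scarce resource comes up, favor whoever has received the least so far, so today’s decision is informed by what happened in the past.

```python
from collections import Counter

def allocate(claimants, history, slots):
    """Award `slots` units to the claimants with the fewest past awards."""
    ranked = sorted(claimants, key=lambda claimant: history[claimant])
    winners = ranked[:slots]
    history.update(winners)  # the past now includes today's decision
    return winners

history = Counter()
for week in range(4):
    winners = allocate(["ana", "ben", "chen"], history, slots=2)
    print(f"week {week}: {winners}")
# Over repeated rounds, no claimant is systematically starved.
```

The ledger of past awards is also exactly the kind of record that lets the “loser” of any given round see that the process was fair, which connects to Pitt’s second point about dispute resolution.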
Three ways of thinking about ethical algorithms. Of course, this is idealistic. I want to stay away from these utopian visions about how great algorithms can make our world, our society, because I think, again, the Slam case really shows the potential problems, and we do want to always be thinking about those downstream harms that our work could cause.
The other thing is that computer algorithms can create distortions. They can become the ultimate hiding place for mischief, bias, and corruption. If an algorithm is so complicated that it can be subtly influenced without detection, then it can silently serve someone’s agenda while appearing unbiased and trusted.
Whether well or ill-intentioned, simple computer algorithms create a tyranny of the majority because they always favor the middle of the Bell curve, and only the most sophisticated algorithms will work in those tails.
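A toy example of that tyranny of the majority, again with invented data: the simplest popularity-based recommender never surfaces the tails at all.

```python
from collections import Counter

# Seven users chose the mainstream item; two chose the niche one.
choices = ["pop_hit"] * 7 + ["niche_gem"] * 2

# The simplest possible algorithm: recommend whatever most people chose.
popularity = Counter(choices)
recommendation, count = popularity.most_common(1)[0]
print(f"Everyone is shown: {recommendation} ({count} votes)")
# The two users in the tail are now recommended the middle of the curve too.
```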
I want to start closing with some work that’s just coming out from IDEO design. If anybody knows IDEO’s work…I hope that our time together has stimulated your moral imagination to some degree, and I hope I’ve given you a way to think about your professional development, your creative development, and your ethical development in a way that keeps those pieces intact: that ethics is what’s possible, ethics is creativity.
Let me end with this: ethics is truly about what’s possible. Let me reference what IDEO is doing. IDEO, we know their work. Recently, though, they decided to codify their company’s research ethics practices in a little publication. They do these nice, little publications. It’s called the Little Book of Ethics.
As part of their development of this work, they spoke with a range of people: research ethicists, designers, researchers, a whole host of people. I was fortunate because I got to be one of those people, to describe to them research ethics, information ethics, and Internet research ethics at this point in time, how these fields are evolving, and what it might mean for their work and design.
I’m going to just relay what they came out with. This book will be out any day now. They decided, based on all this information, to articulate their values with three principles. The first is respect: honoring participants’ limits and valuing their comfort. The second is responsibility: acting to protect people’s current and future interests. Thinking about the kinds of research, the kind of work that we do, how might we think about potential harms, downstream harms? That’s what anticipatory ethics is all about.
Finally, their last principle is honesty. Someone said that, wasn’t honesty one of your enduring values? Someone did say that. Honesty: truthful in your communications, truthful in your work. How do you not mislead people? How do you write and design in such a way that you’re respecting the dignity of each and every person?
The question that I will leave you with is this, and I said it earlier: What happens when ethical questions are not asked? Thank you, and I look forward to your questions.
Audience Member: Hi. Thank you so much for coming today. We’re talking about downstream harms, and you teach students at the university ethical principles and things. Just taking an example, you have Alex from Target. He blew up. He was a 17-year-old, all of a sudden famous, because he was out working one day and somebody thought he was cute. His world was turned upside-down.
We were talking about downstream harms. With the millennial generation and people coming into the universities now who have grown up with this, how do you talk about ethics and the impact that the content you own, that you’re sharing with your friends, could potentially have, that downstream harm on somebody? The woman who posted that picture didn’t know it was going to blow up and completely turn this kid’s life upside-down. Can you talk a little bit about that?
Elizabeth: Yeah. That’s a really good question, and a good way of presenting it as well. It’s always a challenge working with kids of any age, getting our kids, our students, to be able to distance themselves a little bit from their technologies. One activity, and they hate me for doing this, is going three days without Facebook, going three days without being online at all. What does that mean?
Indirectly, I’m getting to your question. To me, it’s about getting people to think differently, to shift our paradigms, to think about how overpowering our technologies have become. If you haven’t read this book, “The Winter of Our Disconnect,” it’s such a great book, a wonderful book. It gets us thinking about the moment. This is where we are right now; we’re not thinking about tomorrow. We’re not thinking about yesterday. We’re going to be present.
This sounds a little Kumbaya, I know, but it’s true. Thinking about the present and how important the actions, the decisions that we make right now, in the moment, are. We try to push them toward those. Horror stories don’t work with college kids. We know that. They don’t work. It’s using case studies, but not those extreme ones. It’s working through easy…not easy, but not hard cases, the cases that just get us thinking about potential consequences, whether of our technological actions or our in-person actions.
That, to me, has been helpful. Certainly, we see these hot-button cases in the news. Unfortunately, that’s what we see. There are a zillion other, middle-of-the-road cases that are more appropriate. Does that help?
Christina: Hello. My name is Christina, and I teach at California College of the Arts in the interaction design program. It actually emphasizes both social good and ethics very heavily. I see my students go off and work at Jawbone or Facebook or wherever. They really struggle to keep that alive in the face of everyday business realities. I was wondering if you have ideas about how to help them keep that alive as they move into what are sometimes very harsh worlds, like the ones in your examples.
Elizabeth: Again, a great question. First of all, part of this is that now you all go back to your workplaces and prioritize ethics, practice ethics. That’s a starting point. I’m going to answer this in just a couple of ways. One, part of this is the US, the culture in which we live. We live in a very utilitarian-minded country. We’re not worried about the social good as much as other countries are. We’re faced with that.
What we have to do is make our own little universes, those places of social good, and we can only do it if we get people practicing these principles, buying into it, prioritizing the social good, and devaluing things like competition and that fierceness of the individual. Again, to use IDEO as an example, those are some of the principles that they’ve infused through their workplace. They play out in their practices.
The only way this works, to continue to prioritize the social good, is to make it a priority: remind your staff about it, remind your co-workers about it, and, coming in, create a culture. Some of us are embedded in places where it seems like, “Oh, my God, nothing good will ever happen here.” Little by little, you have to carve out those little places of social good and continue to call attention to them.
Audience Member: Hello, great presentation. Thank you. I’m wondering what your thoughts are on social vigilante justice mobs, if that makes sense. Somebody tweets something that is perceived as racist or inappropriate, which is inexcusable, but now, all of a sudden, they have millions of people attacking them online, they’re doxxed, and they’re afraid to be in their own homes. I’m just wondering what you see for the future of dealing with that kind of thing. Thank you.
Elizabeth: I don’t know if I have a good answer for that. The reality is such that, as I’ve mentioned, there is polarization, divisiveness. To some degree, social media enables that much more readily. There’s no going back at this point with it. I think that part of our challenge as educators is to work through those areas: teaching about disagreement, teaching respectful disagreement, talking about polarization.
I will say that in this country, the US in particular, we don’t understand what freedom of expression truly means. We need to get back to really understanding that balance between our freedom of expression and our rights not to be assaulted, harassed, or hurt. We have these two extremes playing out, and you see it everywhere every day.
I don’t know going forward. I suspect it will continue. As our little kids grow up more and more immersed in social media, it’s our job, whether as parents, guardians, or educators, to make that a priority in talking about discourse. What are these discourses of hate, these discourses of polarization, in these online spaces? And then creating spaces that don’t allow for that. Don’t allow it.
Audience Member: There’s a podcast that I listen to called “99 Percent Invisible.” Let’s give them a round of applause.
Audience Member: It’s called that because it’s mostly about architecture and the idea that there’s so much going on in terms of architecture that 99 percent of it is invisible. They did a story about architects who work on jails.
The ethical boundary, how do you find the ethical boundary? Because we understand how to use space so well, we can use it to multiply the suffering of the inmates.
The scope of work that architects are asked to fulfill oftentimes has requirements that could be interpreted as fundamentally against humanity. Of the list that you had of ways of thinking in terms of ethics, is there one that, in our work, we could start from to find that line?
Elizabeth: Great comment, question. I recently did a talk about prisons and various issues. One of the things I found in my research for that talk is that there are pockets of the field, the space of architecture for social responsibility, architects for social responsibility. There is a growing trend, a pushback against those principles of architecture, of design, that violate human rights.
It takes courage, though. It takes a lot of courage. If you’re fresh out of college and it’s your first job and that’s what you have, you’ve got to find the power of your conviction. That’s a challenging thing in a very competitive, economically driven environment.
I would say, of that list, talking about prisons and inmates is so charged and challenging. I truly believe that the participatory-based approaches, community-based participatory research, participatory action research, have to be employed. They have to be employed by disciplines that traditionally wouldn’t have employed them.
I don’t know if anybody is still in school, in grad school. Think about your methods classes. Think about your research methods and your design methods classes. How many of you are talking about this? How many of you are being taught about anticipatory ethics, about participatory-based research?
There are a number of changes that need to take place before this plays out. It starts at the educational level, where we talk more about social justice and the societal implications of our work. Then it moves into the workplace.
It’s encouraging our students to join architects for social justice and giving them a reason to, or mentoring a student, or mentoring someone new in your office, around these principles. That’s the only systemic way this is going to play out. Thanks.
Facilitator: Thanks again to Elizabeth Buchanan.
Elizabeth: Thank you.