Parvej Sidhu
Access to justice is often conceptualized as a gap requiring a bridge. Artificial intelligence (AI) is helping to bridge another gap as well: the one between the justice system and the tech world. By drawing on the lawyer's knowledge and the software developer's expertise, AI is not only helping legal professionals complete their work faster and more accurately, but also helping members of the public address their legal needs on their own. The ground-breaking Civil Resolution Tribunal in BC is an excellent example of the latter.
I've been learning how to build this kind of AI in Professor Katie Sykes' class, "Designing Legal Expert Systems: Apps for Access to Justice." It's been a welcome exercise in creativity and an exciting introduction to artificial intelligence (made possible by very beginner-friendly software from Neota Logic). It has also, however, made me question my relationship with technology. In particular, I've been thinking about another kind of gap: the one between what we wish technology could do for us and what we actually use it for in our day-to-day lives.
It's not always obvious that our relationship with technology evolves as fast as the technology itself, partly because we rarely make conscious choices about how heavily we're going to rely on it. None of us woke up one morning, for instance, and decided to designate our cell phone as our hand-held computer, GPS, and mobile personal assistant. Most advances in tech, whether in health, communications or artificial intelligence, creep up on us. When we do make choices, they're constrained by what we're offered on the market as consumers. I think this translates into a lot of wasted potential. The carefully curated features of the latest "smart" devices are hardly a response to our cries for help. Many smart products are designed to solve "problems" that don't exist for most of the people on this planet, if for anyone at all. I am reminded of this every time my washing machine decides it needs to lock my clothes inside it and I'm forced to unplug it to win them back.
In the course of solving problems that don't exist, technology also creates problems we've never seen before. Earlier this year, news broke of artificial intelligence that can detect, with considerable accuracy, someone's sexual orientation from their photographs alone. My initial awe quickly gave way to concern about the gross violations of human rights and privacy that would result if this AI were abused. In these murky waters, our relationship with technology devolves further, and we're relegated from consumers to mere subjects.
As consumers or subjects, what can we really do about useless, invasive or unsettling uses of AI? It's clear to me that the engineer-consumer divide in how we interact with tech isn't conducive to socially responsible or responsive innovation. To my mind, challenging this dichotomy is a good place to start, and those of us building "apps for access to justice" have been given the opportunity to do just that. In the legal context, there is enormous potential, and incentive, to harness the power of AI to serve our own needs as well as those of our colleagues, our clients, and the public at large. These are specialized needs, and they will only be met if tomorrow's lawyers experiment as creators and innovators.
Access to justice is a real problem, and real solutions are possible with tools like artificial intelligence. The first step in discovering those solutions is to recognize the role we have to play as creators in control of our tech.