The Future of AI is People-Centered

jenny comiskey
Jun 25, 2021

We are facing a pivotal moment. There is a growing wave of evolving AI capabilities, and yet they are rarely shaped with people in mind. The history of applied AI has yet to unlock profound change in our daily lives, and the stories of failure are not insignificant. When AI solves the wrong problem, is built on faulty data, operates without a clear purpose, is misaligned with people’s needs, creates no room for feedback, ignores context, or oversimplifies nuance, it is likely to fail, and worse, to harm. Some of the hardest problems in the field are about how AI systems interact with people and social systems. Today, considering people’s needs before building something is common practice, and for good reason: when products fail or do harm, it is often because they did not start from an understanding of the people they intend to serve, the communities they may impact, or the wider context in which they are situated. This moment in AI feels very similar to the early days of software, or the shift to mobile. In each of those transformative moments, it was reframing the technology around purpose, context, and what made it great that accelerated change and impact at scale.

When technology is at its best, it amplifies human potential. Focusing on technological advancement in isolation often misses larger opportunities beyond slightly better performance and optimization. Understanding people’s latent needs, and the gaps in the current experience, unleashes opportunities for transformation. We have to start with understanding people and what they are trying to do, then identify where AI fits in and how it can be tuned for impact. This matters not just for the technology, but for the product experience and the outcomes it shapes. It is something we should embrace in every kind of development we do, investment we make, and launch we commit to. We need to inform AI’s scope, role, definition, development, and deployment in partnership with disciplines anchored in understanding how it can shape our world, both negatively and positively. Product designers, researchers, policy makers, data scientists, and product leaders all need to partner in identifying the problems to solve, creating new AI solutions, and testing them. Most importantly, the communities impacted by AI need to be brought into the conversation early and often.

My hope is for a shift in how we invest in AI: pointing our efforts toward AI that understands people, solves the biggest problems facing humanity, and showcases its tremendous potential for positive social impact. Most importantly, AI systems should center on enhancing people’s experience, improving their abilities, putting them in control, and helping them make better decisions, while recognizing their needs, emotions, desires, and what brings them joy. AI systems can be harnessed to superpower human potential. How can we inspire entirely new investments in future applications based on the biggest opportunities and needs inspired by people?

People-Centered AI is one way of changing the practice, culture, and impact of the future. It is a way of ensuring we don’t do harm, that we instead align with positive impact, and that we have more successes than failures. It can inspire potential we haven’t yet considered for how we might apply this powerful enabling force.

Defining People-Centered AI

People-centeredness can include multiple dimensions. It builds on familiar concepts emerging in the field that attempt to bring a human lens to engineering and computer science. Efforts focused on explainability, responsibility, safety, and agency are all responding to the growing need to recognize the part people play in the system, and what they need from it.

While these dimensions are included in the overall consideration of principles and best practice, fundamentally the only way to be people-centered is to truly start from a deep understanding of the needs of all the actors in the system (e.g., individuals, developers, annotators, policy makers, and the communities being served) and to shape the future with their ongoing participation. It requires learning from their input and inviting them into the process. Principles are great, but practices are better, and evidence of change is better still.

“Empiricism, pursued through the empathic observation of people, is the basis for much of the HCAI community’s work. Empiricists question simple dichotomies and complex hierarchies, because these may sometimes limit thinking, and undermine the analysts’ capacity to see important nuances and nonhierarchical relationships. Taking a human-centered approach to AI means evaluating each of your customer touch-points to identify where and how data and intelligence may enable new products and services that improve customer experience by fulfilling previously unmet needs. Ultimately, it’s the value created by the AI in the experience that customers will respond to — not the AI tool itself.” (Ben Shepard)

People-centeredness has a long history in other contexts. It’s not new. It builds on established practices in product development that have emerged over the years. HCD (Human-Centered Design) was championed by Nobel Prize laureate Herbert Simon, written about by Don Norman, taught at Stanford, and made an established practice by the product innovation firm IDEO.

In the 2008 paper “On Human-Machine Symbiosis,” Cooley asserts: “Human centeredness asserts firstly, that we must always put people before machines, however complex or elegant that machine might be, and, secondly, it marvels and delights at the ability and ingenuity of human beings. The Human Centered Systems movement looks sensitively at these forms of science and technology which meet our cultural, historical and societal requirements, and seeks to develop more appropriate forms of technology to meet our long-term aspirations. In the Human Centered System, there exists a symbiotic relation between the human and the machine in which the human being would handle the qualitative subjective judgements and the machine the quantitative elements.”

People-centered AI builds on these concepts and takes them further, considering humans at every stage (on both sides of the experience). It cares more about identifying ways to amplify and enhance human abilities than about replacing them. It considers the entire system, the broader social context, and the data feedback loops. It identifies how unique AI capabilities can align with human capabilities so the two work better together. It inspires the future by looking at opportunities for transformation and impact, beyond just improving the performance or micro-optimizations of the current technology. When AI systems are anchored in, and inspired by, people’s fundamental needs, value is naturally embedded and core to shaping the outcome. Starting from a deep understanding of the opportunities to solve for, we ensure the technology doesn’t focus on achieving its own goals rather than being in service of the end user.

There are fundamental people needs to tackle in any AI-powered experience

AI systems are not static; the relationship is dynamic. People are involved in building them and using them, and both sides are continually evolving, generating new data, and learning from one another. It’s important to frame the intention of an AI system upfront and align it with outcomes, and it’s critical that we include a wide variety of people in informing and shaping it. Most importantly, it is about ensuring that people are empowered, enhanced, and amplified through the application of AI. Starting with people’s needs, perceptions, goals, and expectations as a source of inspiration for what to solve, and how to solve it, helps align fundamental people needs with technical capabilities, ensures we consider the broader context and social systems, and identifies ways to amplify and augment. Currently, because of this gap in understanding, the relationship between people and AI is fraying. People often feel misled, disappointed, and frustrated by the lack of control and transparency, and are concerned about broader negative societal impacts.

Foundational needs for people in their relationship to AI systems include:

Trust: delivering on purpose, value and intent

“I need to understand what this does, why it does it and how I benefit”

AI systems need to be trustworthy. To achieve this, they need a clear purpose, and people need to see and understand the value. People need to feel understood and acknowledged, and to be able to understand the system in turn. This is the foundation and starting point of good relationships. Trust is predicated on the treatment of data and the quality of the results the system provides. AI systems rely on our trust; if we no longer trust the outcome, decades of research and technological advancement are at risk of being meaningless. People need an appropriate level of trust in what the AI system can and cannot do (they also need to know it is powered by AI in the first place). Roles must be clear, and mechanisms must be in place to guard against over-trusting the system, providing resources that help people know when to question, when to override, and how to properly gauge confidence or accuracy.
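As a concrete illustration, a system can expose its own uncertainty and change its behavior around it. The sketch below is my own minimal example (the names and threshold are hypothetical, not from any particular product): it discloses that a result is AI-generated, surfaces a confidence level, and defers to the person when confidence is low.

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.8  # tunable cutoff; below this, ask instead of act

@dataclass
class Prediction:
    label: str
    confidence: float  # the model's (ideally calibrated) probability, 0.0-1.0

def present(prediction: Prediction) -> str:
    """Frame the same output differently depending on confidence, so people
    can gauge how much to trust it and know when to question or override."""
    if prediction.confidence >= CONFIDENCE_THRESHOLD:
        # Act on it, but still disclose that it is AI-generated and how sure we are.
        return (f"Suggested: {prediction.label} "
                f"(AI-generated, {prediction.confidence:.0%} confident) [Override]")
    # Low confidence: defer to the person instead of silently guessing.
    return (f"Not sure -- is this {prediction.label}? "
            f"({prediction.confidence:.0%} confident) [Confirm] [Correct it]")

print(present(Prediction("Invoice", 0.93)))  # high confidence: act, and disclose
print(present(Prediction("Receipt", 0.55)))  # low confidence: ask the person
```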

Understanding the goal and impact of an AI system is the first step in trust building. In the case of one of the most common and familiar uses of ML, personalization, expectations have been raised, and gaps in relevance, understanding, and adaptiveness are less tolerated. Poor inference, inaccurate alignment, and misdirected objectives erode trust not only in the system, but in the overall brand it’s connected with, and in AI as an underlying technology. Increasingly, we see gaps in delivery and misalignment of goals and expectations driving frustration, dissatisfaction, and broken trust. There is an opportunity to better align objectives, taxonomy, and measurement with people’s expectations, and to consider broader human-centric factors (e.g., context, intent, changing interests, cultural nuances).

When AI systems are created, they need to consider how to:

● Establish a clear purpose and deliver on it

● Align with people’s values, goals and expectations (and adapt as they change)

● Respect privacy expectations and data needs

● Create shared understanding (e.g., transparent, explainable)

● Indicate its limitations and the role it is playing (e.g., automation, support, assistance, augmentation)

How do we define value and build trust? Building trust starts with being inspired by and grounding AI in people’s unmet, latent needs. What are we trying to achieve (and why)? What problem or opportunity are we addressing?

● What are people’s needs + what is the intention that serves them?

● What are we solving for? Is AI the right solution?

● What are people’s expectations? What is the role that aligns with them?

● How do we carefully consider what data is missing and the limitations of the model in delivering on their goals, needs and expectations?

Agency: enabling control

“I need to have the right level of control, at the moments that matter, to make it work better for me”

People need the opportunity for agency at all points, and should be seen as participants in the system. They are essential to shaping both the technology and the ongoing data feedback loop. Agency is about allowing people to take active control over aspects of the AI system when they want, and how they want. There is ample opportunity for human-in-the-loop functionality: creating oversight, intervention, and control measures, including the ability to disable AI functionality altogether. The end result is enhanced relevance, reliability, and overall experience. While we hope that machines can infer what we want, they are often wrong, working with limited and inaccurate signals, and we have few ways to explicitly teach them; a sketch of what explicit teaching could look like follows below.
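Here is one possible shape this could take, as a minimal sketch (all names are illustrative assumptions, not a prescribed design): the person can disable the AI entirely, override its output, and every correction is captured as an explicit feedback signal for the loop.

```python
from __future__ import annotations

# Hypothetical names throughout; one possible shape, not a prescribed design.
feedback_log: list[dict] = []  # would feed the retraining/evaluation loop

class SmartReply:
    def __init__(self) -> None:
        self.enabled = True  # people can switch the AI off entirely

    def suggest(self, message: str) -> str | None:
        if not self.enabled:
            return None  # respect the choice to opt out; no silent inference
        return "Sounds good, see you then!"  # stand-in for a real model call

    def record_feedback(self, suggestion: str, final_text: str) -> None:
        # An edit is an explicit teaching signal, far richer than
        # inferring intent from clicks alone.
        feedback_log.append({
            "suggested": suggestion,
            "sent": final_text,
            "accepted": suggestion == final_text,
        })

replies = SmartReply()
suggestion = replies.suggest("Want to meet Friday at 10?")
replies.record_feedback(suggestion, "Sounds good, see you Friday!")  # user edited it
replies.enabled = False  # ...and can disable the feature altogether
```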

When AI systems are created, they need to consider how to:

● Empower people with the right level of control

● Offer mechanisms for feedback and fine-tuning

● Show the impact of change and input

How do we ensure people have agency in the relationship? By enabling control, so that people are empowered to actively shape the technology to better meet their needs, and by offering ways to provide feedback and fine-tuning, people become collaborators rather than recipients.

● What are the right levels of control, visibility, collaboration, feedback, and autonomy?

● How can people be more involved in teaching the machine what they want? How is collaboration structured? How might collaboration, participation, and control introduce new signals and improve outcomes?

● What data is meaningful for them + what is the feedback loop?

● How can controls offer a wider and richer range of data signals to improve the outcomes?

Enrichment: uniquely amplifying human potential

“I need to feel like this makes me better, enhances my experience and minimizes the parts I don’t like”

AI has the potential to fuel future innovation when positioned to extend, amplify, and increase human abilities. This is where AI systems can move beyond basic automation and utility to truly delight. Future differentiators are often related to enhancements and “superpowers” that connect effortless authenticity with new forms of immersion, expression, creativity, and ability. For areas like AR, memory, video, and creativity, people desire immersive, participatory experiences that are also low-effort, easy, accessible, and contextual. The biggest leaps in value for future experiences will be extensions to people’s untapped capability and potential that are currently underserved. Finding this alignment means respecting what people are great at and where they need a boost, or identifying previously unmet needs, untapped contexts, or underserved groups. It also means knowing where technology is best suited to play a role, and where the two collaborating are better together. This is where AI can be situated for high value.

The augmented capabilities provided by the system need to tangibly improve a person’s life. AI should play a variety of roles to extend and enhance; not everything should be automated. Most tasks have some parts that are a good fit for AI, and others that should be left to humans (e.g., the subjective, relational, ethical, emotional, and joyful). Seeing opportunities for collaboration, enhancement, and partnership will unlock more creativity than we have previously explored. Blindly assuming AI should do everything completely, independently, and automatically is a disservice to the true potential of leveraging the best abilities of both. It’s also important to understand the distinct ways to operate in cases of utility vs. social enhancement vs. enrichment. Coordination, administrative, and friction-reducing tasks are an easy win for AI; when it comes to social contexts, emotions, and personal enrichment, meaningful application of the technology takes much more careful consideration.

When AI systems are created, they need to consider how to:

● Amplify potential

● Augment capabilities

● Identify the system’s unique capability and highest value

● Unlock entirely new value, experiences, behaviors, or meaning for a specific need (e.g., AI-enabled reflection, expression, connection, discovery/inspiration, protection, empowerment)

● Transform a domain, set new groundwork, and meaningfully differentiate an experience (e.g., AI disrupting commerce, creativity, conversation, learning, wellness)

How do we identify the opportunities for AI to enhance? AI can superpower potential: beyond assisting with mundane tasks, it can amplify our memory, creativity, conversations, knowledge, and health. But we need to look closely at foundational needs and opportunities to surface the highest-potential intersection of AI capability and people’s potential, aspirations, and desires.

● What are you enhancing, amplifying or augmenting? How? Why?

● How might we amplify capability or enhance the experience?

● What are the best alignments of capabilities across people-machine? What is missing that AI can extend? What are the aspects that bring them joy that we can help increase and not diminish? What aspects is AI uniquely suited to take on in order to improve the overall experience?

Responsibility: benefiting all people and society

“I need to know this was created with, for, and by, people like me”

“I need to know this won’t cause harm, and has positive impact on me and the wider community”

Being “responsible” means looking at the social systems and impacts of technology as part of the process. AI systems need to orient not just around future innovations and opportunities for enhancement, but also explore inspiring new ways of operating responsibly. Because of the role it plays and can play, any AI-powered experience needs to be shaped inclusively, and needs to consider the broader social context and potential system impacts. Vulnerable populations are most at risk if they are not considered and included throughout the development process. How do bias and/or skewed data play a role? How might people be harmed or put at risk? We need to think about how people with diverse backgrounds experience our AI systems, and how those systems influence, represent (or don’t), and impact them. We need to take a broad view of diversity, including inherent characteristics such as age, race, ethnicity, nationality, cultural background, gender, and sexual orientation, and acquired experiences such as education, geographic location, socio-economic status, military experience, and religious beliefs. Technology is not neutral, and cannot be decoupled from the outcomes that are shaped and enabled as part of the system (or embedded in the creation of it). The communities who are impacted need to be engaged in inspiring, informing, and shaping AI systems.
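One small, concrete practice this implies is disaggregated evaluation: checking whether a system’s quality holds up across the groups it serves, rather than only on average. The sketch below is my own minimal illustration (the data shape is hypothetical), not a method prescribed here.

```python
from collections import defaultdict

def error_rates_by_group(records: list) -> dict:
    """records: dicts like {'group': 'A', 'correct': True}."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        errors[r["group"]] += 0 if r["correct"] else 1
    # Large gaps between groups are a signal of skewed data or modeling choices.
    return {group: errors[group] / totals[group] for group in totals}

results = [
    {"group": "A", "correct": True}, {"group": "A", "correct": True},
    {"group": "B", "correct": False}, {"group": "B", "correct": True},
]
print(error_rates_by_group(results))  # {'A': 0.0, 'B': 0.5} -- a gap worth investigating
```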

When AI systems are created, they need to consider how to:

● Be safe, equitable and inclusive

● Ensure representation throughout the process

● Identify opportunity for positive impacts

● Anticipate, prevent, mitigate and protect from harms

● Align around impact and outcomes vs. outputs

How can we shape the outcomes of AI intentionally? How do we ensure we maximize the positive and reduce the negative? Responsible AI needs to connect technology to its outcomes and social systems context, proactively pursue positive impacts, and mitigate negative ones.

● What are the ideal outcomes? How can we ensure we are taking an outside-in, people-centered perspective when defining the focus of our work and anticipating impact?

● How are we considering “all”? How are we ensuring representation and inclusion in all stages of shaping future AI experiences and systems? How can we create opportunity for underserved and marginalized groups (and protect vulnerable groups from harm)? How can we learn from, and inspire future solutions with, the most complex, highest-need contexts?

● How do we consider the positive/negative system effects?

● Are we solving the (right root) problem? Progressing toward goals? Measuring the right things?

Ensuring platforms and practices make innovating responsibly easy

None of this happens without fueling the experience behind the scenes: the work of building, and how we build. People-centeredness also includes deeply understanding the needs of developers and customers in having access to state-of-the-art ML developer experiences, tools, and platforms that support the development of ML/AI. It also means considering the human aspects across the entire system of delivery, including those who shape AI, such as annotators. The experience ML engineers and research scientists have, and what supports them throughout the process, can and should be best in class, enabling high-impact outcomes. Currently, the complexity of the process creates more friction than ease. A people-centered lens on the development experience will help us identify the core underlying issues and create a vision and system for the future.

Having an integrated practice of user research is fuel for being people-centered

People scientists should be working alongside computer scientists. User research is a discipline focused entirely on understanding people’s core needs in order to define future options, evaluate them, and test them to further shape the solution. Working from this basis for great product experiences is not new, and the basics of research are still core to working with AI systems. Doing (user) research early and often helps you understand who you are building for, what they care about, and how and why they use what you build. It focuses development on what matters most, connecting the potential of breakthroughs with the context and needs of the communities we might serve. It helps avoid costly and time-consuming misalignment with impact, and potential harms that might otherwise be overlooked. It inspires the future by observing latent and unmet needs, and it identifies unique foundational opportunities. If we build products without understanding people’s needs, we risk creating things that have no impact (wasted time, wasted effort, and failure to align with a clear purpose and intent), or worse, that have a negative impact because those needs were never considered. User research proactively surfaces needs we should be solving for, identifies new territories, anticipates bad behavior on our platforms that we should be catching but are unaware of, exposes our blind spots, and breaks assumptions when people lead with what they personally experience, believe, or feel (vs. what is inclusive, appropriate, and grounded in evidence).

The good news is that many traditional research tools can help build understanding of people’s needs to inform AI, test how it works for them, and understand reactions to, and interactions with, AI-powered experiences. Ethnographies, focus groups, prototype research, customer surveys, and logs are all still relevant. However, AI systems differ from classical systems in that they are context-aware, personal, and able to learn over time. They also have limited accuracy, data considerations, and unique failure modes. Some of the enabling technology may not exist yet, and when it does, it may be largely “invisible.” These qualities introduce new challenges and opportunities for conducting research. We may lean more heavily on methods that explore future behaviors, understand fundamental first principles, test low-fidelity simulations of experiences, co-design feedback loops, explore failure modes, and probe mental models. User research can inform what to solve for, how to solve for it, and the shape of the model itself. “A well-designed architecture for AI also requires a deeply embedded theory of data, in which data sources and data collection strategies actually mirror the eventual decision strategies of the AI.”
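As one example of testing low-fidelity simulations, a Wizard-of-Oz-style stub can fake an AI feature, including its failure modes, before any model exists. The sketch below is my own illustration, with a made-up scenario and accuracy level, not a protocol from this piece.

```python
import random

SIMULATED_ACCURACY = 0.7  # deliberately imperfect, to study trust and recovery

def wizard_of_oz_caption(true_caption: str, wrong_caption: str) -> str:
    """Return the 'AI' caption a participant sees: a weighted coin flip decides
    whether they experience a correct result or a plausible-but-wrong one."""
    if random.random() < SIMULATED_ACCURACY:
        return true_caption
    # Scripted failure: how do people notice it, react, and correct it?
    return wrong_caption

# One simulated session: the "model" is just this script plus a researcher watching.
print(wizard_of_oz_caption("A dog running on the beach", "A cat on a sofa"))
```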

There are a few ways research practice may be uniquely tuned to AI systems.

Systems and stakeholders needs: Shifting from a traditional view of the “user” to understanding multiple stakeholders’ needs, and the relationships between them

Diverse and inclusive: Ensuring that we take an inclusive, multicultural, diverse approach in the research conducted

Participatory: Engaging people in directly shaping the future system early on through co-design and low-fidelity prototyping

Future oriented: Looking toward future changes, and beyond current products to identify innovation opportunities, new territories and future domains that have yet to be unlocked

Holistic data considerations: Examining data along the entire loop, we can understand needs/gaps, as well as broaden the defined signals

Outcome aligned: Understanding social impacts and outcomes at scale

People-centered practices drive change in shaping the future of AI systems

Some of the most powerful businesses and products are AI-first, and increasingly that means AI being people-first: aligned with value and context, and balancing the pursuit of innovation with responsibility. Spotify, Pinterest, Google, TikTok, Amazon, Apple, Netflix, and many other leading organizations have AI central to their brand, products, and services. Salesforce, Google, Microsoft, and IBM all have practices dedicated to advancing AI innovation and responsibility from the perspective of people’s needs.

Putting this into practice today means tighter integration of a wider range of disciplines and functions: framing the opportunity, imagining the future, building responsibly, defining impact, and learning continuously along the way.

AI is woven into people’s daily lives and is a core part of many companies’ products, services, and experiences. We need to ensure it improves people’s experiences, is equitable and inclusive, and promotes positive impact. We also need to ensure we are addressing the core needs of trust, agency, enrichment, and responsibility in any AI experience. People-centeredness can be a key part of advancing the field in its next stages of growth, and of establishing leadership in AI that sets new standards in innovation and responsibility. We should never stop asking: What is this meant to solve? What opportunity or problem are we addressing? Why is AI the right solution? What is the data strategy? What are the ideal outcomes? These questions might unlock the next set of breakthroughs, fueled by new perspectives, bigger challenges, and ambitious impact on people’s daily experience. We have an opportunity to change the conversation and put people at the heart of how we advance technology; embracing this would be truly transformative.
