jenny comiskey
Aug 4, 2023

We recently hosted a research leadership discussion focused on the changing AI+Research landscape.

Our discussion had two primary parts: research as a means of shaping and directing AI, and opportunities to leverage AI in our own practice.

Our guests included leaders in the field across all types of AI applications, from AI start-ups to large enterprises, on both the consumer and B2B sides:

  • Selena Chan, Grammarly
  • Savina Hawkins, Meta
  • Jess Holbrook, Meta
  • Ellen Kolsto, IBM
  • Arianna McClain, Cruise

Three top highlights

Re-purposing and Guiding AI: There is a tendency to become overwhelmed or intimidated by the technical underpinnings of AI systems, but one of the biggest potential areas for research is going back to fundamentals in understanding people’s expectations, motivations and aspirations. Grounding AI systems in real world context, purpose and meaning holds tremendous value. AI unlocks new potential, and researchers should be inspiring that future.

(Non)Technical Translation: Our role is not only defining what AI should be but shaping the entire development process (e.g., purpose, model, outputs, metrics). More often we are working side by side with engineers, data scientists, and computer scientists to build together. We help translate the “people” part and create guidelines so that both benefits and risks are considered at the outset, as well as throughout the development and refinement of AI-powered experiences.

Maintaining Control and Authorship: The opportunity for AI in research practice is in carefully and selectively applying it to boost, accelerate, or support our work. It’s not handing over tasks, taking shortcuts, putting data quality at risk, or compromising the irreplaceable aspects of meaning-making, but surfacing small, burdensome tasks that can be sped up or complemented by AI tools. The researcher should always be leading the way, using AI in an assistive manner.

Discussion in detail:

Amongst the hype and fury of a rapidly accelerating AI-first future, grounding advancement in what matters to people, and in what uniquely amplifies their potential, presents a big moment for research. How can we ensure we are at the forefront of defining when, how, and why AI is effectively woven into products and businesses? How can we help surface potential risks and harms? How can we help companies reimagine what is possible? How do we partner with AI to augment and boost our own practices?

AI in this conversation took many forms: the latest developments in GenAI and LLMs, the fundamentals of ML systems, incorporating AI into workflows, the tools we can create for developers, innovating with and productizing AI for users, and informing the guardrails and principles that address potential risks. AI takes us all into new territory, both in what we are able to help companies build, how they build it, and how we can leverage it in our own work.

Re-purposing and Guiding AI

“Research is part of the end-to-end development.” “We identify what users want, we improve the end-to-end workflow, we help design the experience between AI models and users.” Arianna McClain

The biggest theme of the discussion was returning to the core value research can offer in shaping AI-powered experiences. The role research can play is substantial, and the biggest part is in the fundamentals: understanding people in their real-world contexts, in all their nuance, and using that insight to inform the purpose, outcomes, and interactions of any AI system is where research can have significant impact. Aligning AI with people’s mental models, expectations, and preferences helps ensure AI systems effectively partner with and support them. Just as with any great product or business, research is key in laying the groundwork for what problem is being solved, what opportunity is pursued, and what value is being developed. AI without a purpose or compass is destined to be an investment not fully realized.

We can, and should, also be shaping AI throughout the entire system and process. A great example is how Arianna’s team is woven through all aspects of development, connecting to rider expectations and supporting engineers in building tools. Her team is building user understanding to define model thresholds, defining the tools developers build with, helping people form the right expectations of AI, and offering ways to provide the inputs and feedback that guide toward better outcomes. Savina also shared an example of how early feedback on a model allowed a pivot to a high-value new direction before further investment. AI is expensive; getting it wrong is hard to walk back from.

There is also an exciting and sometimes under-recognized potential in re-imagining entire experiences and businesses. This is a wider landscape that can propel our roles even further and raise critical questions about what business we are even in. Should we be strategically integrating AI to create new value, and if so, how? Or is the risk too significant? Is it a distraction or a real opportunity for our business or product area? Helping organizations navigate these bigger questions and re-ground in the meaning or purpose of the pursuit is where we can lean in. Amidst the complexity of AI we can get lost, distracted, intimidated, or caught up in the hype narratives. We shouldn’t forget the creative inspiration that research and insights can help fuel, and that AI might be uniquely positioned to solve for. Never forget your unique and magical power of asking why, and of showcasing insights about the experiences real people aspire to have.

“It’s an opportunity for research to embrace invention. Not tech for tech sake or here’s a clear, immediate user need. AI is a big driver of invention. Focusing on aspirations and opportunities — what are people actually aspiring to? What version of their future self are they trying to become?” Jess Holbrook

(Non)Technical Translation

“How do we influence the builders at the start? Are they connected to the human experience and needs? How can we help them develop the right intuition? Building in isolation is the biggest risk” Selena Chan

Working on AI-powered systems often offers opportunities to collaborate in new ways with data scientists, computer scientists, and engineers. Generally, builders of AI systems are looking to make a world-changing positive impact, and there’s “no villain twirling a mustache” seeking to undermine humanity. That said, technical partners are deep in their own areas of expertise, their inputs are limited to other research scientists and those in their field, and their scope of vision can be narrow. Bringing in real-world context and creating shared understanding and language is where the two practices can partner to make something significant.

Clarifying the benefit and value is one aspect of our role in the partnership. Another is creating the tools, practices, and guidelines that begin to operationalize responsibility principles, so that privacy, transparency, and equity are considered and an eye stays on real-world impact and outcomes. How can engineers and product builders ask the right questions from the outset? How can they weigh the tradeoffs of potential impacts? How does it ladder up to bigger metrics? We should be there helping ask these questions alongside them. It is a matter of starting with small steps and progressing into action rather than striving for perfection; that is the reality on the ground to be attentive to.

Jess’ team has made progress in establishing system cards, which took real, challenging work to make happen, and then continued attention to scale and make meaningful. A series of recent ships from the team demonstrates the small, essential steps forward: safe-use guidance for Llama 2, community forums applying deliberative democracy to AI systems, and ads fairness improvements.

“With AI it’s continuous. It’s not one and done. There is constant updating and refining the model.” Ellen Kolsto

Maintaining Authorship and Control

“We’re at an inflection point. Foundational models will be like the internet and every business and product will be changed by this.” Savina Hawkins

Much of the concern about AI is its impact on jobs, potential replacement, and challenges to entire areas of practice. Research is not immune to changes in workflow, but the opportunity is more one of partnership, augmentation, and small moments of acceleration than of replacement. It’s a case of AI supporting and superpowering our work. Our roles will, if anything, be expanded to address more things systemically and strategically. It’s not using AI to take shortcuts, or to do the work for you, but rather surfacing small tasks that can be sped up or complemented by AI tools. For instance, Savina uses AI to cluster and categorize, to boost protocol design (with prompt engineering), or to summarize communications after the work is done. It’s helpful as a tool to test your own thinking, an assistant to check in with. It’s not eliminating the parts that matter most (e.g., real people, real context, deep meaning, learning with and alongside).

Big risks include using it when there is a data privacy risk, when it is not fit for purpose, or when it is overreaching its limitations and capabilities (e.g., synthetic users can be “echo chambers of bias”), as well as the limits and bias inherent in existing, often outdated training data. AI will not surface new data or meaning, nor recognize important and powerful extremes. With LLMs in particular, it takes careful attention to ask the right questions; without proper prompting, LLMs will respond ineffectively, just as the questions we ask in interviews can bias subjects’ responses.

See AI as the Iron Man suit that you fully control, weaving the right elements in the right way. Researchers, and platform providers building for researchers, who embrace the right aspects will be able to expand how we make progress as a field, introduce new ways of advancing what we do well, and ease the things that bog us down.
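To make the "careful prompting" point concrete, here is a minimal sketch of the kind of constrained prompt a researcher might assemble when asking an LLM to categorize interview snippets. Everything here is illustrative: the function name, the wording of the instructions, and the example snippets are hypothetical, and the actual model call is deliberately omitted so the researcher stays in control of what the output is used for.

```python
# Hypothetical sketch: building a constrained categorization prompt for an
# LLM research assistant. The researcher defines the categories up front and
# explicitly instructs the model to say UNSURE rather than guess, to reduce
# the "echo chamber" / fabrication risks discussed above.

def build_categorization_prompt(snippets, categories):
    """Assemble a prompt that asks the model to tag each snippet with
    exactly one researcher-defined category, or UNSURE if none fit."""
    numbered = "\n".join(f"{i + 1}. {s}" for i, s in enumerate(snippets))
    allowed = ", ".join(categories)
    return (
        "You are assisting a UX researcher. Tag each snippet below with "
        f"exactly one category from this list: {allowed}.\n"
        "If no category fits, answer UNSURE rather than guessing.\n"
        "Do not invent quotes or add snippets that are not listed.\n\n"
        f"Snippets:\n{numbered}\n"
    )

prompt = build_categorization_prompt(
    ["I never know when the ride will actually arrive.",
     "The app froze while I was paying."],
    ["expectations", "reliability", "payments"],
)
print(prompt)
```

The design choice worth noting is that the categories come from the researcher, not the model: the LLM accelerates the tagging pass, while authorship of the analytical frame stays with the human.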


jenny comiskey

Humanity + tech. Helping create a people-centered future. Led insights at Stripe, Meta AI, Strava, IDEO and McKinsey.