Analysis of the ethnographic studies revealed two unexpected findings:
First, traditional notions of collaboration are insufficient to describe the level of coordination required for effective AI implementation. Historically, there were clear lines between those who created technology (developers) and those who used it (users). With AI, these lines are blurring: because AI systems can learn and adapt on their own, the processes of creating and using them become intertwined. This blurring of boundaries challenges conventional development practice, in which developers consult domain experts to understand their needs up front. With AI, that exchange becomes fluid and continuous.
Second, the researchers examined the idea of "boundary spanning," in which people from different groups work together to share knowledge. While boundary spanning has often proved beneficial, with AI it is not always effective. In fact, placing intermediaries between developers and users can create additional barriers, especially for AI systems that are complex and difficult to understand. To harness the potential of AI in knowledge work, organizations instead need to empower individuals to develop new collaborative practices directly.
Together with practitioners, the team developed practical recommendations for organizations and launched Managing AIWISEly, a new program designed to train professionals as AI Polymaths, equipped to co-produce data, co-explain, and co-deploy AI.