Two Sessions at FACT 2026
Reflections on qualitative knowledge, AI efficiency pressures, and what gets lost in translation
Two sessions at ACMI’s FACT Symposium in February 2026 revealed a fundamental tension at the heart of cultural institutions navigating technological change. Not the familiar debate about whether to adopt AI, but something more structural: the gap between what we say we value as cultural organisations and what our decision-making pressures actually reward.
The Case for Slowness: A Toolkit for What Can’t Be Quickly Counted
Caitlin McGrane and Jacina Leong from RMIT University’s Museum Digital Social Futures project presented their forthcoming toolkit designed to support cultural organisations in gathering, analysing, and translating qualitative audience engagement data. On the surface, this sounds like standard evaluation methodology. But their framing revealed something more fundamental about what kinds of knowledge get legitimised in cultural institutions.
The toolkit focuses on what they called “informal data” – the conversations happening at front desks, anecdotal feedback shared during public programs, observations of where audiences linger and what they photograph, how they interact with staff and other visitors, social media comments. These informal insights, McGrane and Leong argued, “offer rich and often overlooked information about how people experience, interpret and connect with cultural spaces, including what motivates them to visit such spaces in the first place.”
Their research, developed through interviews with staff from diverse organisations and workshops at previous FACT symposiums, identified a persistent pattern: while organisations collect this rich qualitative information, “it’s often overshadowed by hard data in reporting – not because it lacks value, but because it can be more difficult to translate into shared understanding and action.”
Under conditions of organisational precarity – limited resources, competing demands, staff without specific training in data analysis – this marginalisation accelerates. The knowledge that front-of-house staff accumulate through daily interaction with audiences remains structurally undervalued. As McGrane put it, these staff are “often the closest to audience experience, but least structurally empowered to interpret” what they witness.
The toolkit’s three-part structure – gathering, analysing, and translating qualitative insights – isn’t just methodological. It’s fundamentally about legitimising forms of knowledge that don’t fit neatly into the metrics-driven frameworks that dominate institutional reporting. McGrane was explicit about this: “Across the sector, many organisations are increasingly required to justify cultural value in systems that favor what is easily measurable. When qualitative knowledge isn’t supported structurally, it becomes marginal, regardless of how central it is.”
The toolkit’s premise is that this tacit, experiential, relational knowledge should become central rather than marginal to how organisations understand their impact. Building “evaluative muscle” means recognising front-of-house staff as “knowledge holders” rather than data collectors, and legitimising “analytical work that’s already happening on the ground” rather than treating interpretation as something that only happens in management meetings or external evaluation reports.

Sitting Between Worlds: The Evening Panel
Several hours later, the “No Harm Done #4: GLAM, FOMO & AI” session, co-chaired with Dan Hill, brought together three practitioners navigating AI adoption pressures in very different institutional contexts. The panel composition itself embodied the tensions between organisational logics that the morning toolkit presentation had surfaced.

Sewon Chung, Head of Digital and Content Initiatives at M+ in Hong Kong, brought a career trajectory spanning the Exploratorium in San Francisco and Samsung in Silicon Valley – someone who has navigated both the deeply mission-driven, exploratory, educational culture of science museums and the “results-driven content strategies” of Fortune 500 technology companies. Her presence on the panel raised an implicit question: what knowledge, what practices, what values survive the translation between these profoundly different organisational contexts?

John O’Shea, Creative Director and Co-CEO at the National Videogame Museum in the UK, opened with what he called a provocation: comparing the unchecked adoption of AI in creative industries to the disaster of mad cow disease. The analogy worked because both are fundamentally about losing traceability – what happens when you can no longer track origins and provenance.

O’Shea’s focus on provenance – which he defined as “a secure and transparent trail of ownership and governance, integral to systems as diverse as museum collections management and food security, but currently absent in AI-augmented imagery and code” – created an unexpected conceptual bridge back to the morning’s toolkit presentation.
Provenance isn’t just about tracking the origins of digital assets or training data. It’s about the attribution of knowledge itself: whose insights, from where, interpreted by whom, in what context. The toolkit McGrane and Leong presented is fundamentally about protecting knowledge provenance – ensuring that when front-of-house staff observe how audiences navigate space, those observations don’t just get harvested as decontextualised “data” but remain traceable to specific people in specific institutional contexts interpreting specific experiences.
When AI systems synthesise or generate insights without maintaining these chains of attribution, we lose not just accountability but the contextual richness that makes qualitative knowledge valuable in the first place.
The Efficiency Imperative
Angela Stengel, an independent digital strategist and former Digital Content & Innovation Lead at the ABC, shared a case study that made the tension between efficiency and other values starkly visible. She described a hiring manager who used AI not to screen the traditional pile of 300 CVs, but to take a completely different approach: asking the AI to identify who should be offered the job directly, bypassing conventional recruitment processes entirely for maximum efficiency.

The hiring manager went and had coffee with the AI-identified candidate and hired them. When someone in the session questioned whether this was fair, Stengel’s response was revealing: “Is it their job to be fair to everyone, or is it their job to hire somebody for their company in their most efficient time?”
This wasn’t presented as a recommendation but as an example of the lateral thinking AI enables – doing things differently rather than just speeding up existing processes. But it also crystallised a fundamental question: when efficiency becomes the primary decision-making criterion, what happens to other values that institutions claim to hold?
For cultural organisations, this question isn’t abstract. They operate with explicit missions around access, equity, public value, educational impact – values that often create friction with pure efficiency logics. The toolkit McGrane and Leong presented requires time, interpretive capacity, structural power for frontline staff – precisely what efficiency pressures systematically undermine.
Breakthrough Bingo: Simulating the Pressures
The session concluded with something remarkable: “Breakthrough Bingo,” a competitive game designed by Tobias Revell from Arup that simulated the investment fund pressures driving institutional AI adoption.

The game’s mechanics were elegantly brutal: teams competed as investment funds racing to complete their bingo card of AI “breakthroughs.” But each round, costs doubled while opportunities vanished forever. Players faced an escalating choice: achieve ROI (return on investment), get eliminated as a “FOMO Fund” (adopting AI out of fear of missing out rather than strategic purpose), or survive long enough to reach the “Third AI Winter” (the hypothetical next collapse in AI hype and investment).
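To get a feel for how unforgiving that doubling is, here is a toy sketch of the cost curve. The numbers and function are illustrative assumptions based only on the mechanics as described in the session – they are not Revell’s actual rules.

```python
# Toy sketch of the "Breakthrough Bingo" cost dynamic: each round, the
# cost of pursuing the next "breakthrough" doubles. All parameters are
# illustrative assumptions, not the game's real rules.

def simulate_fund(budget: float, base_cost: float, max_rounds: int) -> int:
    """Return how many rounds a fund survives before costs exceed its budget."""
    survived = 0
    cost = base_cost
    for _ in range(max_rounds):
        if cost > budget:
            break  # eliminated: can no longer afford the next breakthrough
        budget -= cost
        cost *= 2  # costs double every round
        survived += 1
    return survived

# Even a fund starting with 100x the opening cost survives only six rounds.
print(simulate_fund(budget=100.0, base_cost=1.0, max_rounds=20))  # → 6
```

Because costs double, survival time grows only logarithmically with budget – no realistic war chest buys more than a handful of extra rounds, which is the “elegantly brutal” pressure the game compressed into a single session.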
By gamifying these pressures, Revell made visible something usually obscured in institutional AI discourse: the extent to which adoption decisions are driven not by careful alignment with organisational mission and values, but by external investment cycles, competitive anxiety, and the accelerating pace of technological change that makes waiting seem like falling behind.
The game created a space to experience – in compressed, playful form – exactly the conditions that would drive institutions away from the slow, relational, qualitative work the morning’s toolkit was designed to protect.

What Translation Loses
The temporal and structural mismatch between the two sessions was striking. The toolkit requires:
- Time to notice what’s happening in audience interactions
- Time to interpret what those interactions might mean
- Structural power for those who witness experiences but don’t manage strategy
- Legitimacy for knowledge that can’t be quickly quantified
- Organisational culture where “purpose precedes method” rather than methods being adopted because competitors are using them
- Recognition that “qualitative insights only become impactful when they are shared, discussed and productively interpreted”
But “Breakthrough Bingo” simulated the reality most institutions actually face: decisions made at pace, under investment pressures, in conditions of competitive anxiety, where the slow work of building evaluative culture and attending to frontline knowledge becomes an unaffordable luxury.
Neither session suggested cultural institutions should avoid exploring AI tools. Stengel explicitly advocated for “sandbox” experimentation, for being curious about what tools can do before implementing governance frameworks. O’Shea’s provenance critique wasn’t anti-AI but pro-traceability. And the toolkit McGrane and Leong presented could itself potentially be enhanced by computational tools that help identify patterns in qualitative data.
But held together, the two sessions revealed a deeper question: what happens when the tools we adopt to solve capacity problems systematically devalue the kinds of knowledge that require capacity to recognise?
The Provenance Question
Consider the specific forms of knowledge the toolkit aims to legitimise:
- Front-of-house staff noticing how audiences navigate space and what that reveals about wayfinding, accessibility, or emotional responses
- Informal conversations at the admissions desk that reveal what motivated someone to visit today, what they’re hoping to experience, what prior knowledge they’re bringing
- Anecdotal feedback from public programs that shows not just satisfaction levels but the intangible impacts of encountering certain ideas or objects
- Observations of where people linger, what they photograph, how they interact with companions – all the phenomenological richness of embodied experience in cultural space
These are precisely the knowledge forms that don’t fit efficiency logics. They’re contextual, interpretive, relational, emergent. They require what O’Shea called provenance – traceable attribution to specific observers in specific contexts. They resist the kind of aggregation and synthesis that makes “insights” scalable and actionable at institutional level.
Yet they’re also what distinguish cultural institutions from content platforms. They’re the knowledge that emerges from institutions being physical places where embodied humans encounter objects, ideas, and each other, mediated by staff who understand both the collections and the communities they serve.
The Question Both Sessions Left Open
If cultural institutions adopt tools and frameworks developed primarily for Fortune 500 efficiency – tools designed to maximise throughput, minimise friction, optimise conversion, scale insights – whose knowledge survives the translation?
More specifically: when AI is applied to “audience insights,” what happens to the provenance chains that make qualitative knowledge trustworthy and meaningful? When efficiency pressures reward quick synthesis over slow interpretation, which staff voices get amplified and which get filtered out? When investment cycles create FOMO around adoption, how do institutions maintain alignment between technological choices and organisational values?
And perhaps most fundamentally: without provenance – without maintaining secure and transparent trails of whose knowledge, from where, interpreted how – how do we even know what we’ve lost in translation?
The FACT Symposium didn’t answer these questions. But by placing these two sessions in the same day, in the same building, it made the tensions impossible to ignore. The cultural sector’s challenge isn’t whether to engage with AI – that ship has sailed – but whether we can do so in ways that enhance rather than erode the forms of knowledge that make cultural institutions culturally valuable rather than merely efficient.