This blog post discusses an oversaturated subject (AI in higher education). It’s quite lengthy, so I’ve set out the headlines upfront so you can choose whether or not to spend time reading it (or, in the spirit of the subject matter, you could just paste it into a GPT and ask for a succinct summary instead):
- AI is one of the hottest topics in HE at the moment, but the sector's discourse, debate, and strategy planning routinely exclude technicians, despite the enormous opportunities AI offers to improve their productivity and performance.
- The pedagogies of university technicians, and how they develop, use, and role-model engagement with AI technologies within their disciplinary specialisms, are an under-researched but important aspect of student education, experiences, and graduate outcomes.
- Technician roles are among the least susceptible to automation, offering limited opportunity for AI or robotics to offset workforce gaps.
- A fit-for-purpose, institutional AI strategy (inclusive of technician roles and responsibilities) should seek to rebalance admin and education by simplifying and automating routine and repetitive tasks to complement, extend, and improve higher-level student-facing activities (easy to say, hard to do).
- This isn’t a one-size-fits-all situation: at the institutional level, each university needs to understand the technician’s contribution to its mission when developing and updating its AI strategies. What should be automated, and what should be augmented?
- Technicians should be supported in building and configuring their own unique GPT agents, tailored to their context, discipline, facilities, workflows, priorities, and responsibilities.
This blog post explores how these points fit together in the context of the blind spots in recent policy literature.
I recall writing my undergraduate dissertation in the late 1990s and being bemused by the annoying paperclip icon in MS Word that would interject “it looks like you are writing a letter” regardless of whether I was or not. I had recently transitioned from a typewriter, so this, for me, and I suspect others of my vintage, marked the emergence of automated assistance that would improve my writing through enhanced formatting, spelling, syntax, and grammar tools. These capabilities were expanded to include structural text analysis and generative rewriting when Grammarly launched in 2009. A further major step change was the public release of OpenAI’s ChatGPT in 2022, accompanied by the evolution of machine learning, large language models, and generative AI.
My creative practice followed a comparable trajectory; my first ‘proper’ 35mm camera (a fully manual Pentax K1000) had no frills at all. No knowledge meant no photographs, but as the technology advanced, camera manufacturers automated many features (focus, exposure, white balance, flash control, bracketing, motor-drive, and more).
These advances made photography more accessible and easier to learn for beginners, while also extending the possibilities of the medium and modes of engagement for the more capable and highly skilled. I recall the unsettling turbulence of the mid 2000s as much-loved analogue (film) gave way to less-loved digital (pixels). As film became commercially obsolete, and skills that were difficult to learn and applied in the dark became relatively simple to apply in the light, identities were disrupted and reshaped. The evolution of the discipline felt every bit as transformative as the AI-infused camera technologies, post-processing, and generative AI methodologies that are reshaping photography in the present.
AI is now increasingly embedded in the everyday tools we use (and is therefore increasingly hard to opt out of); for photographers, this includes Adobe Firefly, Lightroom, and Photoshop. AI tools are already very good at doing many of the tasks we’ve historically asked students to engage with to demonstrate their learning. Things that would have taken expert practitioners hours in the past, and techniques that used to take weeks or months to teach, can now be replicated by the unskilled in a matter of moments. This phenomenon was termed ‘false mastery’ by the Organisation for Economic Co-operation and Development (OECD) in their report ‘Digital Education Outlook: Exploring Effective uses of Generative AI in Education’.
False mastery is when an impressive outcome masks weak underlying skill and a lack of understanding. In this blog, I argue that through the neglect of the technician voice, institutional AI strategies and corresponding development programmes are themselves at risk of becoming institutional examples of false mastery.
The concept of mastery is important when educating future practitioners. It invites association with the medieval guilds, in which apprentices were taught by experts, to become experts. Education was built on know-how and applied practice: learning to do things well rather than simply talking about them eloquently. Mastery of this nature is particularly relevant for technical staff, whose roles and identities are often defined by their authentic mastery, built over time through a combination of training and experience, often in a domain they are passionate about.
Students, too, often perceive technicians as disciplinary masters, not just in the context of advanced ability and technical skills, but also, increasingly, by observing how they learn, engage, and role-model the use of AI. A textile designer, for example, would approach AI with different needs from those of an illustrator or computer games artist. Framing these needs with domain knowledge, appropriate disciplinary conventions, and semantic precision is critical to effective prompt engineering (giving AI a clear and actionable brief), and to interpreting, challenging, and acting on the resulting outputs. However, this critically important element is not reflected in any AI policy paper or strategy that I have read to date.
Instead, the sector appears preoccupied with assessment, academic integrity, policy, and risk, while also grappling with the ethical questions (privacy, copyright, bias), the sustainability questions (energy, carbon, digital footprints), and the human questions (fear of job loss, fear of being ‘found out’, fear that the craft of learning will be hollowed out). These are understandable priorities, and the new challenges to these pillars of HE help explain why AI has become the most dominant theme in my LinkedIn feed.
The technician-shaped hole in the tsunami of reports, guidance, opinion pieces, and policy papers
Every week seems to bring several new reports or pieces of guidance. Last week brought the Government-backed ‘AI Skills for Life and Work’ summary report, which identified that in the general population, awareness of AI is high, but confidence is low: only 21% of people in work feel confident using AI, and just 17% say they can explain it in detail. I’d expect those figures to be a little higher in the technical HE community, but they serve to show that while AI might appear ubiquitous in policy and ideas, it is, at best, patchy on the ground and in implementation.
This is not to say that the sector is not taking major steps to learn from what is actually happening with AI. Indeed, last week also brought the 2026 AI in Higher Education Symposium Australia and New Zealand, at which educators were invited to share creative, authentic uses of generative AI to improve teaching, learning, assessment, and curriculum. The resulting report contains detailed perspectives from academics and students across 38 presentations, but, like so many others, it fails to recognise, understand, or reflect the contribution, risks, and opportunities of the technician workforce in HE.
Most articles seem to focus on one of two approaches: either ‘clarifying how AI can be used in learning, teaching and assessment’ or ‘automating professional services functions’. In this post, I consider AI from a technician perspective, which spans both, in the spirit of the Burning Glass Institute discussion paper ‘Beyond the Binary: How Automation and Augmentation Are Combining to Reshape Work’. This paper proposes that rather than some roles being automated and others being augmented, both models can be applied within the same role to extend and enhance performance. With the diversity and complexity of their responsibilities, technical roles appear well suited to testing this hypothesis. But in doing so, care must be taken over what is automated, not least because technician jobs were identified as among the least susceptible to automation in another Burning Glass Institute paper, ‘Mind the technician gap: fixing the UK’s hidden labour crisis’.
Comparable perspectives are offered by the European University Association (EUA) in ‘Adopting AI that serves the needs and values of universities’. This report includes a helpful caution: “efficiency is rarely a means for achieving a pedagogic goal”. However, for technicians it can do exactly that. Efficiency can be pedagogical: when you remove the daily friction and repetition of admin systems and processes, you create additional capacity for the human interventions and interactions of practice-based learning and teaching. However, before getting carried away: to achieve these gains, institutions must first understand what should (or should not) be automated, and what should be augmented.
The untapped potential: technicians as educators
Technicians are all too often swept into the catch-all category of ‘non-academics’. A central problem with this is that in many universities, technicians teach a critical and sizable element of the core knowledge and skills that practitioners will need in their professional lives, whether it sits in the ‘academic’ curriculum or not. Acknowledging this point matters because the way institutions design AI strategies depends on what they think technicians are. If they’re seen as passive technical support for equipment and processes, AI becomes a stealthy procurement exercise aimed at delivering institutional efficiency and savings. However, if they’re regarded and developed as practical educators, which, as research increasingly shows, many are, AI becomes a pedagogical collaborator with the potential to improve quality, capability, efficiency, and the student (and staff) experience.
This sounds fine in theory, but to make it real and relatable in practice, I’ll define and delimit the point by conceptualising technicians’ roles and responsibilities using my framework of ‘enabling, supporting and delivering’ learning and teaching. In brief, these can be thought of as:
- Enabling: designing, configuring, and running specialist environments (physical and digital), tools, and materials so learning can happen safely, accessibly, and at the appropriate standard.
- Supporting: the unplanned, reactive student-led encounters, troubleshooting, side-by-side making, supervising, confidence-building, and the social and pastoral activities that characterise studio and workshop learning and community building.
- Delivering: pre-planned teaching, including inductions, demonstrations, progressive tutoring, seminars, lectures, sign-ups, and skills refreshers.
The ‘enable, support, and deliver’ framework represents the foundational infrastructure on which many universities rely to deliver their core business of learning and teaching (whether they know it or not). And, as this blog post argues, when AI strategies are informed by, and reflective of, the critical-yet-understated role technicians play in universities, they can provide genuine and meaningful enhancement to both institutional and individual performance.
We need a better AI narrative for technicians
The HEPI collection ‘AI and the Future of Universities’ (https://www.hepi.ac.uk/reports/right-here-right-now-new-report-on-how-ai-is-transforming-higher-education/) is helpful in this regard because it surfaces many of the aforementioned tensions: AI as opportunity, AI as threat, AI as strategic imperative, AI as assessment disruptor. It calls for AI literacy, guardrails, and institutional readiness, but its professional services framing deserves deeper thought. One essay argues professional services must embrace GenAI for ‘productivity gains’, acknowledging the likelihood of ‘fewer jobs’ through automation. Another suggests identifying high-impact use cases in repetitive, rules-based processes and high-volume queries, with ethical safeguards and auditing to mitigate bias and harm. This seems a coherent and logical approach to some functions, but to apply it, universities need a better understanding of what technicians can do, and which elements of their roles can (or cannot) be made more efficient or produce higher-quality outputs.
In practice-based education, the technician role is not primarily rules-based processing or the completion of spreadsheets. It is about diagnostic and formative assessment, judgement, risk, role-modelling actions and aptitudes, tacit knowledge, embodied knowing, and socialised, relational teaching. Accordingly, technician-focussed approaches to AI should be developed and deployed in support of these uniquely human traits, rather than compromising or homogenising them. We must also recognise AI’s limits in these environments: even the best AI resources (chatbot, video, VR, and whatever comes next, at least for the foreseeable future) cannot smell, feel, hear, or care in real time in the way that technical teaching requires. In my experience, no AI has been able to feel the point at which clay becomes leather-hard, observe the tack and drag of paint, smell the moment a vac former begins to overheat plastics, or sense that a student is about to exceed their competence and do something unsafe.
AI as a collaborator that enables, supports and delivers learning
To return to the framework, ‘enabling’ learning is an important foundation, but elements within it are also often the most ‘behind the scenes’, repetitive, administrative, routine, and (in my experience) least enjoyable aspects of the role. Tasks with the potential for automation include document management and record keeping (maintenance, inventory, email correspondence, health and safety, and so on). AI can also be particularly effective at distilling and summarising the vast volume of information that flows into technicians’ inboxes. Thought of in these terms, the conventional AI discourse is broadly applicable: mundane technician admin is largely comparable to everyone else’s.
In the context of ‘supporting’ students in their learning, AI isn’t usually engaged by technicians in a first encounter, as many problems are presented and solved ‘at the bench’. Issues are often known and foreseeable, and are resolved through push/pull coaching that builds understanding rather than simply providing a solution. But as technical facilities have become more complex and disciplinary boundaries have blurred, it is increasingly common for problems to extend beyond the technician’s knowledge or experience. However, not having the answers in these situations is unproblematic. My research has shown that technicians are particularly well positioned to conduct web-based research effectively and expediently with, and on behalf of, their learners: they know and understand the domain, can ask the right questions of learners about their issue, can evaluate and select the most appropriate AI for the situation, and can articulate and explain technical terms and colloquialisms.
To put this in a theoretical context, AI offers parallels with the concept of ‘scaffolding’ learners within what Vygotsky described as the ‘Zone of Proximal Development’ (explained here: https://www.open.edu/openlearn/languages/understanding-language-and-learning/content-section-6). Proximal in this regard relates to how close a student is to mastering a particular skill or technique: the gap between what they can achieve unaided and what they can accomplish with the assistance of a ‘more knowledgeable other’. And, just as technicians scaffold learners at the limits of their knowledge, AI can provide helpful scaffolding for technicians at the limits of theirs.
These engagements have become increasingly complex. Early AI transactions (a prompt and a response) have given way to agentic workflows in which AI systems have greater autonomy and adaptability, iteratively ‘speaking back’ to the user about a task, learning from these interactions and negotiating outcomes towards stated goals. For the first time, the tools are collaborating with users in ways that may be considered creative, extend the user’s creativity, or at least simulate credible creative outputs.
In the context of ‘delivery’ (technical teaching), once the function of tools has been introduced, much of what technicians teach is embodied, sensory, and judgement-led. Learning is multimodal and tacit: making, testing, iterating, documenting, reflecting, and critiquing. However, we should not forget that AI is a tool, and technicians teach learners how to use tools, overtly but also implicitly through their actions and responses. They can therefore be integral to passing on AI literacy in a general sense, while also modelling discipline-specific approaches to engagement.
AI cannot replace the human traits of technical teaching, but it can assist in a multitude of ways, not least through preparation, lesson planning, and developing materials that reinforce learning. This varies by discipline, but in my own teaching practice, I’ve come to think of AI as a form of prosthesis that I use to extend my knowledge and capabilities rather than substitute for them. In doing so, AI has become a collaborator, reviewing and offering feedback on my lesson plans, exemplars, and activities. Accordingly, AI has become part of the skillset of teaching and, therefore, part of the pedagogy.
The blind spot of ‘createch’
My focus is on the creative arts sector, and I read many papers on how AI is impacting this domain. One article relevant to this blog that caught my eye recently was ‘Demand for Creativity and AI Skills in the Post-ChatGPT Labour Market: Evidence from UK Job Vacancies’ by the Creative Industries Policy and Evidence Centre.
This paper examined the evolving relationship between employer demand for creativity and demand for AI skills in the UK labour market. Its findings, based on 168 million job postings between 2016 and 2024 (pre- and post-public release of ChatGPT), show that labour markets with greater demand for AI skills also tend to exhibit greater demand for creativity. The authors concluded that, rather than replacing creativity with AI, employers are seeking hybrid skill profiles that integrate both skill sets. It also sets out how, for policymakers, the convergence of technology and creativity, sometimes referred to as ‘createch’, is set to become a core driver of economic growth, productivity, and employment in the UK.
There is plenty of focus from the authors on the need for change and the reimagination of curricula accordingly, but little on how this might be done, and nothing at all relating to technicians or their pedagogies. For me, this is another example of the sector-wide blind spot: grand claims about what universities and academics should do, but without the consultation or involvement of many of the technicians who will actually do it. Combining technology and creativity could be considered the raison d’être of creative arts technicians and their pedagogies.
A better approach to realising createch-specific graduate outcomes won’t materialise through policy and strategy alone. These outcomes will be built through practice, as students repeatedly experience the combination of (1) domain knowledge, (2) technical fluency, and (3) creative judgement in studios, workshops, and labs, with real tools and real consequences.
So when institutions talk of ‘embedding AI’ and ‘future-proofing graduates’, I’d argue that they should absolutely draw on external policy and best practice, but they should also look internally and consult their technicians and broader professional services teams about how AI, createch, and other desired practical outcomes look and feel in the technical and applied elements of the disciplines. A recent Arts Council report, ‘AI Technologies and Emerging forms of creative practice’, identifies that it is often practitioners, rather than theorists, who help define what we should be asking about AI and its impacts.
The hyperbole around AI that floods my LinkedIn feed reads well in grandstanding documents and strategies, but not as well as the authentic and measurable results that flow from inclusive and informed approaches (which are conspicuously absent).
This all sounds great, but I’m a technician: how does this help me in my workshop?
Having an AI policy developed and overseen by a multi-stakeholder group inclusive of the technical voice is a valuable asset in leveraging institution-wide benefits. But technicians are often highly practical people working in specialist fields, and for them to gain value from a policy, it must provide real, useful tools that can be applied for tangible benefits.
Which brings me to my final, and most practical, point: personalised GPTs for technical teaching and operations. Around a year ago, I was taught how to create and train my own custom-built GPT by a computer science professor. This relatively short, and surprisingly non-technical, experience has proven transformative in terms of my productivity and confidence in many aspects of my work. While AI can seem intimidating to new users, it’s really not a complex process, and it requires little in the way of prior knowledge.
An emerging theme from the aforementioned 2026 AI conference in Australia and New Zealand was that custom AIs drive learning. The outcomes describe numerous examples of educators building their own custom AIs to help students learn the way they need to: through scaffolding, coaching, reflection, questioning, and more.
So what does a personalised GPT actually do for you? I think of it in the same three-part framing of ‘enabling, supporting, and delivering’ learning, and the real value is that it’s uniquely yours: a GPT is pretty decent by default, but it’s super-powered when you train it on the key documents, templates, and resources that are most relevant to you, your role, and your institution. This can include local SOPs, inventories, health and safety materials, inductions, assessment briefs, lesson plans, emails, and anything else that reflects how your space actually operates on a daily basis.
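For readers who like to see the moving parts, the sketch below shows the programmatic equivalent of that training step. Custom GPTs are normally built through ChatGPT's no-code GPT builder, so none of this is required in practice; this is simply a minimal Python sketch using OpenAI's Assistants API, in which the file names, instructions, and model choice are illustrative assumptions rather than recommendations.

```python
# Minimal sketch (not a recommendation): the programmatic equivalent of
# building a document-trained custom GPT, using OpenAI's Assistants API.
# File names, instructions, and model choice are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Upload the local documents that describe how your space actually runs.
files = [
    client.files.create(file=open(path, "rb"), purpose="assistants")
    for path in ["workshop_sops.pdf", "induction_checklist.pdf", "risk_assessments.pdf"]
]

# Place the documents in a vector store so the assistant can search them.
# (In newer SDK releases this lives at client.vector_stores.create.)
store = client.beta.vector_stores.create(
    name="Workshop knowledge base",
    file_ids=[f.id for f in files],
)

# Create the assistant itself: the programmatic cousin of a custom GPT.
assistant = client.beta.assistants.create(
    name="Workshop Assistant",
    model="gpt-4o",
    instructions=(
        "You are a technical assistant for a university creative-arts workshop. "
        "Answer from the uploaded documents wherever possible, flag anything "
        "safety-critical for a human technician, and say when you are unsure."
    ),
    tools=[{"type": "file_search"}],
    tool_resources={"file_search": {"vector_store_ids": [store.id]}},
)
print(f"Created assistant {assistant.id}")
```

The point is less the specific API than the pattern: a general-purpose model, plus your documents, plus your ground rules.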
Enable (the infrastructure that makes learning possible)
A custom GPT can become your ‘operational assistant’ for specialist environments: drafting risk assessments and COSHH summaries; turning manufacturer manuals into plain-English checklists; generating induction scripts, signage, and competency questions that match your facilities; and producing maintenance logs, parts lists, and procurement justifications that fit your institution’s approaches and approval chains, in minutes rather than hours. Indeed, an hour or so spent training your GPT will save tens of hours later, and will ensure your AI interactions draw on the full capability of the ChatGPT architecture while remaining uniquely tailored to your context, requirements, and style.
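To make one of those tasks concrete, here is a hedged, standalone sketch of the manual-to-checklist idea as a single API call. The file name, prompt wording, and model are all assumptions, and the same request works just as well typed into a custom GPT.

```python
# Standalone sketch: turning a manual excerpt into a plain-English checklist.
# The file name, prompt, and model are invented assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment
manual_excerpt = open("bandsaw_manual_p12.txt").read()  # placeholder file

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "You write plain-English safety checklists for university workshops."},
        {"role": "user",
         "content": f"Turn this manual excerpt into a numbered pre-use checklist:\n\n{manual_excerpt}"},
    ],
)
print(response.choices[0].message.content)
```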
Support (the reactive, relational work at the point of need)
A custom GPT can also act as a triage partner for troubleshooting and ‘what’s gone wrong?’ moments, prompting diagnostic questions and suggesting troubleshooting steps and sequences. A further benefit is that it can do so in a variety of accessible formats (simplified language, step-by-step instructions, translated versions, dyslexia-friendly structure, and more).
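Continuing the earlier sketch, and with the same caveats (the question text is invented, and `client` and `assistant` come from the first code block), a single triage exchange with the assistant might look like this:

```python
# Hedged continuation of the earlier sketch: one triage exchange.
# The question is an invented example; adapt it to your own kit.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="The laser cutter cuts but won't engrave on 3mm ply. What should I check first?",
)
# create_and_poll blocks until the run finishes.
run = client.beta.threads.runs.create_and_poll(
    thread_id=thread.id,
    assistant_id=assistant.id,
)
if run.status == "completed":
    # Messages come back newest-first; print the assistant's reply.
    messages = client.beta.threads.messages.list(thread_id=thread.id)
    print(messages.data[0].content[0].text.value)
```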
Deliver (teaching, learning and assessment in practice-based education)
For delivery, a custom GPT can help you plan demonstrations, write lesson plans that align session outcomes to unit or module descriptors, suggest progressive practical exercises, identify reasonable adjustments to mitigate disadvantage, and create reflection prompts. It can also support formative and summative assessment by helping to translate assessment criteria into teachable moments. You can also build and share custom agents with learners, tailored to your teaching sessions and pre-loaded with your content, learning materials, and locally specific workflows or submission requirements. In doing so, you can help them create their own, promote ethical and sustainable AI practices, and role-model the protection of data, safety, and quality.
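As a concrete illustration of that last point, here is the kind of instructions text a student-facing session agent might be given. Everything below is an assumption invented for this sketch (the discipline, the session, the rules); the useful part is the shape: scaffold rather than solve, stay grounded in your materials, and never route around safety.

```python
# Illustrative instructions for a student-facing session GPT.
# All session, equipment, and module details are invented assumptions;
# swap in your own content, conventions, and safety rules.
SESSION_GPT_INSTRUCTIONS = """
You are a workshop coach for the Week 3 screen-printing induction.

- Scaffold, don't solve: open with a guiding question before offering any answer.
- Ground every response in the uploaded SOPs, risk assessments, and session plan.
- If a question falls outside the uploaded materials, say so and direct the
  student to a technician rather than speculating.
- Never suggest workarounds for safety interlocks, PPE requirements, or
  sign-off steps; these always require a human technician.
- Use plain English, and offer a step-by-step or simplified version of any
  explanation on request.
"""
```

In the GPT builder, text like this goes straight into the ‘Instructions’ field; in the API sketch above, it would be the `instructions` argument.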
Combined, this is how I see ‘createch’ actually being built and realised as part of the wider learning eco-system: creativity, grounded in effective and responsible technical learning, teaching and support.
How Tim Savage Consulting can help: AI Strategy
Contributing the technician voice to an AI strategy in a way that remains credible and authentic to the speakers, while also being articulated in a format that senior academics and decision-makers can take forward, can be challenging, particularly for internal staff who may lack the capacity and/or critical distance. I can help set out the risks and opportunities with a Technical AI Readiness Audit that:
- maps your technical ecosystem (people, spaces, kit, workflows, bottlenecks)
- clarifies what “enable/support/deliver” looks like in your institution (what’s actually done, not just what it says in the job descriptions)
- identifies high-impact, low-risk opportunities to automate routine tasks without eroding safety, craft, or teaching quality
- builds a prioritised roadmap of quick wins and longer-term change priorities.
How Tim Savage Consulting can help: Professional development (AI for Technicians)
I have developed a half-day course that shares the insights I have gained from my work with AI in a creative technical field. It teaches technicians how to reflect on how they enable, support, and deliver learning; to identify what can be automated and what augmented; and to create, train, and share their own unique custom GPTs.
Additional details of this course, and my other training offers, are available here: www.timsavageconsulting.co.uk
If you wish to comment on this blog post, please add your thoughts to the original LinkedIn thread. Available here: https://www.linkedin.com/posts/dr-tim-savage-pfhea-b968782b_theres-a-technician-shaped-hole-in-your-share-7426599850554785792-cN97?utm_source=share&utm_medium=member_desktop&rcm=ACoAAAZbg6IBl-9AP3wGw-BmopG_SAtGrQW1h_U

