The Human at the Top of the Pile: Using AI as a Research and Writing Collaborator
There are two conversations happening about artificial intelligence and creative work, and I find both of them exhausting.
The first is the anxiety spiral: AI is going to replace writers, researchers, educators, artists. The work we’ve spent careers developing is being automated out of existence. The second is the dismissive shrug: it’s just autocomplete, serious people don’t use it, the outputs are shallow and the whole thing is overhyped. Both positions have some truth in them. Neither reflects what’s actually possible when you approach these tools with a clear head and a clear principle.
I’ve been developing a workflow over the past year that sits somewhere more interesting than either of those poles. I want to describe it here (not as a prescription, but as a practitioner’s account of what I’ve learned). I lead theatre tours and deliver lectures for educated adult audiences, and I teach undergraduate university students. I research and write about theatre history and actor training. I’m also an academic with a PhD and a career’s worth of developed material to manage, update, and occasionally rebuild from scratch. AI has become a genuine collaborator in that work. Here’s how, and why it matters that I’m careful about how.
The principle first
Before I describe any workflow, I want to name the principle that governs it, because without the principle the workflow is just a set of tricks.
The human creator stays at the top of the pile.
What I mean by that is simple: AI works on material that I have gathered, assessed, and synthesised. It does not go off and do the research independently and hand me the results. It does not generate content from thin air and ask me to put my name on it. It works with what I bring to it (and what I bring to it has to be rigorously sourced, because the integrity of the output depends entirely on the integrity of the input). This is not a new idea. It’s just good research hygiene, applied to new tools.
The pipeline: gather, synthesise, present
My workflow has three distinct stages, and the sequence matters.
The first stage is gathering. I use Google’s NotebookLM to collect my own and other source materials (production notes, critical writing, historical research, background on playwrights and companies). NotebookLM works with sources you upload, which means it’s working with material you have deliberately chosen and can account for. It’s “grounded” in what you give it. It doesn’t wander the internet generating plausible-sounding content of uncertain provenance. What comes out of it reflects what went in, and what went in is material that I’ve read and assessed as a researcher.
The second stage is synthesis (and this is the stage that is entirely mine). I take what NotebookLM has helped me organise and interrogate, and I write it up: structured documents that represent my analysis, my argument, my understanding of the material. These synthesis documents go into Google Drive. They are the knowledge base I bring into the next stage. They are my thinking, not the machine’s.
The third stage is presentation, and this is where AI enters. I bring my synthesis documents into a platform called Claude. Through back-and-forth “conversation” we work together on the drafting: shaping lecture notes, building structure, finding the right voice for a particular audience, updating existing material. The AI is working from my verified research, not generating its own. It accelerates the labour of assembly and drafting. It does not replace the expertise.
What this looks like in practice
Recently I was preparing two lectures for a Sydney theatre tour (four productions across two companies, and a group of educated adult theatre-lovers who wanted context, not just plot summaries).
The first lecture needed to draw connections across a very diverse programme: a solo Homer adaptation, a new Australian musical based on Miles Franklin’s novel, a two-hander about Bette Davis and Joan Crawford, and a Jez Butterworth chamber drama. I had gathered research notes on each production using NotebookLM, then synthesised those notes into a thematic analysis document (five connecting threads I’d identified across all four plays). That document went into Claude, and we built a lecture from it: thumbnail sketches of each production, the five themes written as flowing essay prose, and a set of discussion questions for participants to carry into the theatre.
The second lecture was a refresh of existing material (a lecture I’ve delivered before on how a play gets made, from the first idea through to opening night). Here the process was different: the machine read the existing lecture, assessed what was working, and suggested what might need updating. We had a conversation about where the gaps were. We then ran targeted web searches to verify current facts (the STC’s new artistic director, changes in the producing landscape post-pandemic, the emerging debate about AI and authorship in theatre) and drafted new passages to drop into the existing document without disturbing the voice or the structure.
In both cases, the thinking was mine. The expertise was mine. The editorial judgment at every step was mine. The voice is mine. What Claude contributed was speed, organisation, and the labour of drafting (which, when you’re preparing for a tour, is not nothing).
Three things I’ve learned
The first thing I’ve learned is that collaboration depends on what you bring to it, like a good relationship. When I arrived with well-structured synthesis documents and clear questions, the output was genuinely useful. When I arrived with vague requests, the output was generic. The old “garbage in, garbage out” still holds: AI reflects back the quality of the human’s preparation. That’s humbling, actually (it means you can’t use these tools to shortcut the hard work of thinking). You can only use them to move faster once the thinking is done.
The second thing is that maintaining your own voice requires active attention. It’s easy to let the draft drift into a kind of confident, well-organised blandness that sounds like an AI wrote it, because it did. I have trained the machine on many of my lectures and writing, so it has a good grasp of the way I write. Beyond that, prompting the machine to write in a particular style (as if speaking aloud, which is my standard test for whether something sounds like a person talking or a document being generated, or in a more academic register) catches the drift quickly. A few rounds of adjustment bring it back to something I’m happy with. Iterative work pays off as the machine refines its responses.
The final thing, and perhaps the most important: transparency about the process is not a disclaimer, it’s an integrity practice. I know what I made, what I brought, and what the AI contributed. That clarity is part of what makes the work mine.
None of this is revolutionary. It’s a workflow, developed by trial and error, that happens to work for the kind of research and writing I do. If you’re a researcher, an educator, or a writer trying to find a principled way into these tools, the starting point is the same as it’s always been: know your sources, do your own thinking, and keep your name on what you actually made.
The AI is a very capable collaborator. It just needs you to be in charge.