AI Companion / Cowriting
With your environment working, the obvious task to start with is co-writing a strategy document with an LLM. A good strategy document isn’t just readable; it presents a clear view of your current challenge and how to address it.
This chapter will cover:
- Using meta prompting to optimize our initial prompt for writing a strategy
- Incrementally building a strategy document by prompting each step with our meta prompted prompt and the outputs of prior steps
- Cleaning up our generated strategy into something well-formatted
- Remembering that the quality of a strategy rests in your thinking, and that quickly generating bad reasoning won’t solve real problems
By the end of the chapter, we’ll have written a reasonably good strategy document in a surprisingly short period of time.
Developing our prompt
To co-write a strategy with an LLM, the core of the work is stepping through each of the five steps of building a strategy: exploration, diagnosis, refinement, policy, and operations. Before we can do that, we need to develop the prompt we’ll use for those sections.
The simplest prompt we could imagine creating is along the lines of:
How should I explore a new engineering strategy that answers:
how do we make software architecture decisions in our company?
This prompt will provide an answer, but my experience is that it will be very general about the process of exploring a strategy theme. That’s useful to some extent, as it reminds us of the strategy process, but it doesn’t really help us co-write our strategy.
From this basic starting point, we can improve this starter prompt in several ways. First, we can develop a problem statement that summarizes the problem our strategy needs to solve. A helpful prompt to create that problem statement is:
I want to write a 2-3 sentence problem statement for
a new engineering strategy. Ask me one question
at a time until you can write a complete statement for me.
This will lead to a series of back and forth questions and answers until the LLM has collected enough information to write its problem statement.
[Screenshot: question-and-answer exchange with Claude.ai to develop a problem statement]
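If you’d rather run this loop outside a chat interface, here’s a minimal sketch of the same question-and-answer loop in Python, assuming the official anthropic SDK and an API key in your environment; the model name and the “done” stopping convention are my own choices, not from the chapter:

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

PROMPT = (
    "I want to write a 2-3 sentence problem statement for a new "
    "engineering strategy. Ask me one question at a time until you "
    "can write a complete statement for me."
)

messages = [{"role": "user", "content": PROMPT}]
while True:
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumption: any capable chat model works
        max_tokens=1024,
        messages=messages,
    )
    reply = response.content[0].text
    print(reply)
    # Answer the model's question; type "done" once it has produced
    # a problem statement you're happy with.
    answer = input("> ")
    if answer.strip().lower() == "done":
        break
    messages.append({"role": "assistant", "content": reply})
    messages.append({"role": "user", "content": answer})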
Excluding the initial prompt asking for the problem statement, I needed to answer three questions to generate this more helpful problem statement:
Our engineering organization lacks a clear, consistent process for
making software architecture decisions, leading to friction between
engineers who feel excluded from decisions versus those who feel
slowed down by lengthy approval processes. This ambiguity around
decision-making authority—particularly when a few highly opinionated
engineers can effectively overrule others' work—is reducing overall
engineering velocity and creating frustration across the team.
Now we can enhance our initial prompt by embedding this problem statement in place of the ad-hoc one-liner from above:
# Problem statement
{text of problem statement}
# Request
How should I explore an engineering strategy to solve the
above problem statement?
This will tend to generate more helpful output, for example giving concrete suggestions about which books to read or what questions to ask peers.
[Screenshot: example of recommended exploration for the strategy]
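If you’re scripting this rather than pasting into Claude.ai, composing the enhanced prompt is simple templating. A sketch, where the helper and file names are hypothetical:

# Hypothetical helper for embedding the problem statement into a request.
def build_prompt(problem_statement: str, request: str) -> str:
    return (
        "# Problem statement\n"
        f"{problem_statement}\n\n"
        "# Request\n"
        f"{request}"
    )

problem_statement = open("problem_statement.md").read()
prompt = build_prompt(
    problem_statement,
    "How should I explore an engineering strategy to solve the "
    "above problem statement?",
)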
That said, the generated text could still be more helpful. Let’s use meta prompting, as discussed in Foundations of collaboration, to improve the output.
[Screenshot: meta prompting to improve the strategy co-writing prompt]
The improved prompt is quite long, so I encourage you to create your own or retrieve it from this Gist. (Here’s a version with the embedded problem statement.) Now that we have an improved prompt, we can get started with the first stage of strategy creation: exploration.
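The meta prompting step itself can also be run programmatically. Here’s a sketch under the same anthropic SDK assumption, using a simplified meta prompt of my own wording rather than the full one from the Gist:

import anthropic

client = anthropic.Anthropic()

# Simplified stand-in for a meta prompt; the real one is much longer.
META_PROMPT = (
    "You are an expert prompt engineer. Rewrite the prompt below so it "
    "produces specific, actionable guidance for co-writing an engineering "
    "strategy. Return only the improved prompt.\n\n"
)

# base_prompt.md is a hypothetical file holding the prompt with the
# embedded problem statement from the previous step.
base_prompt = open("base_prompt.md").read()
response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=2048,
    messages=[{"role": "user", "content": META_PROMPT + base_prompt}],
)
open("optimized_prompt.md", "w").write(response.content[0].text)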
Exploration
The first step is to take our optimized prompt and update it to specify exploration:
{... updated prompt ...}
**Your request:** Write an exploration section to
address the above problem statement.
This generates a surprisingly comprehensive summary, whose full text you can read in this Gist.
[Screenshot: excerpt from the generated strategy exploration section]
It’s important to note that while the exploration is pretty interesting, there are a number of areas where the output sounds reasonable but is, to the best of my knowledge, not particularly accurate. For example, I’m familiar with the general meaning of Amazon’s bar raiser and Amazon’s two-pizza teams concepts, but either these are overloaded terms–which might well be the case–or they are only very abstractly relevant to technical decision making.
As a result, the next step is taking what was written and editing it down to the parts you actually agree with and can vouch for. For the parts I couldn’t vouch for, I did some quick research to confirm or disprove them.
In my edit, I ended up with about half the original content, but the remaining portions are useful. What I particularly appreciate is that it did a fair amount of synthesis across the different approaches, and created a reasonably good framing of the options. Working with LLMs, it’s easy to fall into a sunk cost fallacy, where you accept the output as good enough even though you don’t think it’s that good. Your defense is maintaining the same quality bar you’d impose on a peer, rather than degrading your standards to accept whatever the LLM has generated.
Diagnosis onward
With the exploration completed, the next step is to return to our optimized prompt, additionally including the edited exploration beneath the problem statement. That results in this prompt.
[Screenshot: diagnosis generated from the exploration and optimized prompt]
Altogether, the full diagnosis is a surprisingly good set of factors that would come up when dealing with this problem. This is a good example of how far in-context learning from relevant examples can go in shaping better content. That’s not to say the diagnosis is perfect: once again, it requires a meaningful editing pass to make it accurate to your circumstances rather than the more generalized ones generated by the LLM.
From diagnosis onward, the steps remain the same for policy and operations: copy the edited contents of the prior step into your growing prompt–including the original optimized prompt and all subsequent sections–and ask it to complete the next step. To speed things up a bit, I prompted it to generate both the policy and operations sections in one pass, which it handled reasonably well.
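Mechanically, every stage from exploration onward is the same accumulate-and-generate loop. A sketch of that loop, again assuming the anthropic SDK and hypothetical file names, with the important caveat that you edit each draft by hand before it feeds the next stage:

import anthropic

client = anthropic.Anthropic()
optimized_prompt = open("optimized_prompt.md").read()
sections: list[str] = []  # edited outputs of completed stages

def next_stage(stage: str) -> str:
    """Generate one strategy stage from the prompt plus all prior sections."""
    context = "\n\n".join([optimized_prompt, *sections])
    request = (
        f"**Your request:** Write a {stage} section to address "
        "the above problem statement."
    )
    response = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=4096,
        messages=[{"role": "user", "content": f"{context}\n\n{request}"}],
    )
    return response.content[0].text

for stage in ("exploration", "diagnosis", "policy and operations"):
    draft = next_stage(stage)
    # Edit the draft by hand here; append the edited version, not the
    # raw output, so inaccuracies don't compound into later stages.
    sections.append(draft)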
[Screenshot: policies for improving architectural decision making]
Overall, the policy and operational mechanisms are pretty reasonable. They lean heavily on the sorts of approaches featured in Crafting Engineering Strategy, which is why I believe including this sort of book–from an author whose approach you trust–is what makes this approach a useful one.
Cleaning it up
At this point, we have all the individual sections of our strategy, which I’ve collected into one file. Now we need to clean it up a bit.
To do that, I’m attaching the above file to this prompt within our working project:
Clean up this strategy document (structure it properly
as a strategy writeup, make it read smoothly, remove duplication).
That prompt generated this output, which is remarkably good in my opinion for the amount of time this approach has taken. It’s overall too long, so I took an editing pass to pare things down into something I would actually recommend, which is available for reading.
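For completeness, the cleanup pass looks like this in the same sketch form, assuming the combined sections live in a single markdown file (the file names are mine):

import anthropic

client = anthropic.Anthropic()
draft = open("strategy_draft.md").read()  # all edited sections, concatenated

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=8192,
    messages=[{
        "role": "user",
        "content": (
            "Clean up this strategy document (structure it properly as a "
            "strategy writeup, make it read smoothly, remove duplication)."
            "\n\n" + draft
        ),
    }],
)
open("strategy_clean.md", "w").write(response.content[0].text)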
Summary
In this chapter, we’ve co-written a valuable strategy with an LLM. To do this, we meta prompted an initial prompt into a more useful tool. We then used that optimized prompt to perform each step, supplementing it with the output of each prior step as we completed it. In the end, we used the LLM to clean up our strategy document as well.
The important thing to recognize is that each step of a strategy builds on the prior steps. A great exploration creates a powerful diagnosis. A faulty diagnosis ruins the following policy and operations steps. Just because an LLM can help you write quickly doesn’t mean the output is worth using. However, it’s a powerful brainstorming tool, and–in at least this example–it did a surprisingly good job.
First published at lethain.com/ces-ai-cowriting-with-an-llm/