Understanding Your Auto-Generated Milestones (and How to Make Them Your Own)
Overview
When you create a role play in Brevity, the system automatically generates a set of milestones based on what you describe. For most scenarios, these work well out of the box. This article explains what gets generated, what the Target Milestone is and why it matters, and how to get better results — either by giving the system more to work with upfront, or by refining what was generated after the fact.
What Brevity Generates for You
When you describe a role play — whether that's as simple as "SaaS discovery call" or as detailed as a full scenario description — Brevity uses that input to generate:
- A conversation name and description for your reps
- A set of milestones representing the key steps a rep should move through
- AI Instructions that define how the AI persona will behave during the practice conversation
The milestones are designed to reflect a realistic progression for that type of conversation. A cold call will look different from a discovery call or a renewal conversation. The more context you provide upfront, the more tailored the output.
The generated milestones are a starting point, not a final answer. They're built to be reasonable for the scenario described — but they won't reflect your team's specific language, process, or priorities unless you provide that context.
Understanding the Target Milestone
Every role play has one milestone designated as the Target Milestone. This is the "win" condition for the conversation — the moment that, if reached, means the rep successfully completed the core objective.
A few things to know about it:
- Only one milestone can be the target. It should represent the primary goal of the conversation, not just a step along the way.
- It drives your win rate reporting. Conversations where the rep passes the Target Milestone count as wins in your team's performance data.
- Brevity selects it automatically based on your scenario description, but you can change it if the wrong milestone was designated.
Choosing the right Target Milestone matters. If it's set too early in the conversation (e.g., "Build Rapport"), reps will show high win rates without completing the full conversation. If it's set too late or too strictly, win rates may underreport genuine progress. A good Target Milestone reflects the moment a real manager would say "that was a successful call."
What to Expect From Auto-Generated Milestones
Brevity's generated milestones tend to be well-structured but intentionally general. The system avoids being overly specific to keep milestones flexible across different reps and conversation styles.
In practice, this means:
- The milestone names and sequence are usually solid and need little or no adjustment
- The completion requirements (the note field) are where you're most likely to want to make changes — particularly if your team has a specific way they approach a step, or if reps are not passing milestones you'd expect them to pass
- The Target Milestone is usually correct for standard conversation types, but worth reviewing for more customized scenarios
If the generated milestones feel generic, that's often a reflection of a generic input — see the section below on how to improve that.
Getting Better Results From the Start
The quality of your auto-generated milestones is directly tied to the quality of your scenario description. A few things that make a meaningful difference:
Be specific about the objective. "Discovery call" generates generic discovery milestones. "Discovery call with a mid-market CFO who has an existing vendor and is skeptical about switching" gives the system enough to generate milestones that reflect real friction points.
Include the primary goal. What does a successful rep do in this conversation? Booking a follow-up meeting, uncovering a specific pain point, and presenting a proposal are all different objectives — and they'll produce different milestone sets.
Name the key objections. If your reps regularly face price objections, timing concerns, or "we're already using a competitor" pushback, including those upfront means the system can build milestones around handling them — rather than generating generic objection-handling steps.
Describe the prospect's context. A warm inbound lead behaves differently from a cold outbound prospect. The more the system knows about the prospect's situation and mindset, the more realistic the practice conversation will be.
Refining After Creation
If the generated milestones don't quite fit, you don't need to start over. The most common refinements are:
Updating the note field on one or two milestones — this is the most frequent adjustment and often the only one needed. If a milestone isn't passing the way you'd expect, the note is usually the place to look. See How to Write Milestones Your Reps — and the AI — Will Understand for guidance on what makes a strong note.
Changing the Target Milestone — if the wrong milestone was designated as the win condition, this can be updated without changing anything else.
Adding a milestone — if your team's process includes a step that wasn't generated (e.g., a specific compliance disclosure or a branded framework step), you can add it manually.
Removing a milestone — if a generated milestone doesn't reflect how your team actually runs this conversation, removing it keeps the role play focused.
Avoid over-customizing. The more milestones you add or the more complex you make the notes, the harder it becomes for both reps and the AI to track completion cleanly. If you find yourself writing very long notes or adding many new milestones, it's worth asking whether the original scenario description needs to be more specific instead.