Why we don't grade OKRs weekly
When OKRs are used well, they are a natural part of every conversation that an organization has around its goals. In every cycle, we (1) craft our OKRs, (2) commit to and share our OKRs, (3) check in on and review our OKRs, and (4) at the end of the cycle, grade and reflect on our OKRs.
As John Doerr writes in Measure What Matters, “OKRs are driven by data. They are animated by periodic check-ins, objective grading, and continuous reassessment—all in a spirit of no-judgment accountability.”
In this piece, we dig into the difference between “checking-in” on your OKRs and “grading” them.
Common OKR review questions & standards
Weekly 1:1s are a great space to encourage deeper conversations, solicit feedback, and provide recognition. We like to call our 1:1s our CFRs (Conversations, Feedback, and Recognition). These conversations are more than just your typical status report.
Common CFR questions include: How are your OKRs coming along? Are there any blockers that could stop you from achieving your objectives? What OKRs need to be modified, appended, or eliminated in light of changing priorities?
When these questions are asked and answered far in advance of grading OKRs, we’re informed whether we’re taking the right actions or if we need to refine them.
This “action to get it back on track” is what reviewing is all about, and it is the fundamental difference from grading OKRs. And while CFRs work best when held at least weekly, formal sessions to discuss OKRs should happen on a regular rhythm (e.g., quarterly) during each OKR cycle.
Reviewing OKRs without judgment, in candid conversations free of the fear of penalty, rather than waiting to ask, “Did we succeed or fail?”, inherently changes teams because it emphasizes the real ingredients for success: building skills and helping people deliver. Doing OKRs well is an act of collective commitment to what’s most important.
In reviewing an endangered objective or key result, the focus is on how to support each other moving forward, not on how the OKR was met or not met. The tracking itself is secondary to having open, ongoing meaningful dialogues that set the team up to adapt in this cycle or to perform better in the next cycle. You want to encourage teams to share, especially when an objective is not being met. The more reviews become simply about team learning and less about personal performance, the sooner the team will begin to stretch towards audacious goals.
Grading OKRs spurs new OKRs
Grading OKRs, on the other hand, is a conversation about judging performance on specific objectives — but the judgment is not personal. Unlike the dialogue-oriented reviews, grading OKRs is a binary process: “a goal is met or it’s not.”
This sort of pass or fail methodology provides concise, empirical proof of delivery or non-delivery of goals. If the team has been reviewing OKRs on a regular schedule, it’s very likely the grading process will be quick and without surprises. That’s because the main purpose of grading OKRs is to provide the impetus to craft new OKRs for the next OKR cycle.
After quickly grading OKRs, it’s time to reflect. Great questions include: If we achieved our goal, what added to our success? If we didn’t accomplish it, what barriers did we encounter? And, if we were to rewrite the goal, what would we change?
This whole process can be done in front of a whiteboard with your team or using OKR software.
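For teams that track OKRs in a spreadsheet or script rather than dedicated software, the binary grading described above is simple to express in code. The sketch below is purely illustrative — the data shapes and names are assumptions, not taken from any particular OKR tool:

```python
# Minimal sketch of binary OKR grading: an objective is "met"
# only if every one of its key results was achieved.
# All names and data structures here are illustrative.

def grade_objective(key_results):
    """Return 'met' if every key result was achieved, else 'not met'."""
    return "met" if all(kr["met"] for kr in key_results) else "not met"

okr = {
    "objective": "Improve onboarding experience",
    "key_results": [
        {"description": "Cut time-to-first-value to under 10 minutes", "met": True},
        {"description": "Raise onboarding NPS from 30 to 50", "met": False},
    ],
}

grade = grade_objective(okr["key_results"])
print(f"{okr['objective']}: {grade}")  # one unmet key result fails the objective
```

The pass/fail output is deliberately terse; the value comes from the reflection questions it prompts, not from the grade itself.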
These questions provide the foundation for setting OKRs for the next cycle, and they can stifle sandbagging and encourage more ambition. As Larry Page says in Measure What Matters, “If you set a crazy, ambitious goal and miss it, you’ll still achieve something remarkable.”
Where can I get more information?
What sort of OKR cycle are you on? Are you checking in weekly or at least regularly? Let us know by emailing us here.
Or, if you’re looking for an OKR coach, check this out.