There are good observations and suggestions in "Objectives and Key Results in Software Teams: Challenges, Opportunities and Impact on Development", a paper about OKRs:
"Rather, in order to successfully use OKRs in the software industry, managers need to be properly trained on the framework; learnings need to be shared and consistent; data needs to be accessible; and goals at all levels need to be transparent and clearly communicated."
https://arxiv.org/abs/2311.00236
Mastodon Source 🐘
One observation that really resonated with me is Measurement Cost: the cost of understanding whether the output had the intended effect of supporting the goal. Instrumenting the various layers and getting access to reliable, complete, and timely data is often more expensive than the work itself:
"We saw in the verbatims that sometimes accessing data was so challenging a person would do a mental-math trade-off on whether it was even worth it to instrument the feature with data in the first place."
Mastodon Source 🐘
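To make the measurement cost concrete, here is a minimal, hypothetical sketch of what "instrumenting the feature with data" can look like: the feature emits a structured event tied to the key result it is supposed to move. The function names, event names, and the `okr` tag are illustrative assumptions, not anything from the paper.

```python
# Hypothetical sketch: emit a structured event when the feature is used, so its
# effect on a key result can later be measured. All names are illustrative.
import json
import time


def emit_metric(event_name: str, properties: dict) -> None:
    """Write a structured event; a real system would ship this to a metrics pipeline."""
    record = {"event": event_name, "timestamp": time.time(), **properties}
    print(json.dumps(record))


def export_report(user_id: str, report_format: str) -> None:
    # ... the actual feature work happens here ...
    emit_metric(
        "report_exported",
        {"user_id": user_id, "format": report_format, "okr": "increase_weekly_exports"},
    )


export_report("user-123", "pdf")
```

The few lines of emitting code are the cheap part; the expensive part is the pipeline, storage, and access controls behind them, which is exactly the trade-off the verbatims describe.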
One of the signals positively correlated with OKR effectiveness is Hypothesis Driven Development:
"Key components of our Modern Engineering Score are working with an experimentation mindset, always learning and using a hypothesis. These elements allow one to come up with an idea, experiment, and learn from the results. This is essentially the scientific process which also has failure as a fundamental element - it is how we learn."
Mastodon Source 🐘
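Hypothesis driven development becomes tangible when the hypothesis is written down with an explicit success criterion and checked against experiment data. Below is a minimal sketch, assuming a simple A/B experiment on an activation rate; the two-proportion z-test, the numbers, and the 0.05 threshold are illustrative choices, not the paper's method.

```python
# Hypothetical sketch: state a falsifiable hypothesis ("the new flow lifts
# activation"), run the experiment, and let the data support or reject it.
from math import sqrt, erf


def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Return the one-sided p-value that variant B converts better than variant A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 1 - 0.5 * (1 + erf(z / sqrt(2)))  # 1 - normal CDF


# Hypothesis: the new onboarding flow (B) lifts activation; reject if p >= 0.05.
p_value = two_proportion_z_test(conv_a=480, n_a=5000, conv_b=545, n_b=5000)
print("supported" if p_value < 0.05 else "not supported", round(p_value, 4))
```

Either outcome is a result: a rejected hypothesis is the "failure as a fundamental element" the quote refers to, and it still produces learning.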
From a working-backwards angle, the machinery to detect the impact of inputs (changes shipped to customers) is very valuable, but it can be expensive to build and sustain.
Reminds me of the similar costs and benefits of distributed tracing and context propagation.
Both help teams respond quickly to new information, but neither is typically in scope for any single team's deliverables.
Mastodon Source 🐘
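For the context propagation analogy, here is a minimal sketch of the idea: a correlation id established at the edge follows the request through lower layers without being threaded through every signature. It uses only the Python standard library; the layer and function names are hypothetical, not any particular tracing library's API.

```python
# Hypothetical sketch of context propagation: set a correlation id once at the
# edge, and read it in lower layers so a change's impact can be traced end to end.
import uuid
from contextvars import ContextVar

correlation_id: ContextVar[str] = ContextVar("correlation_id", default="unset")


def handle_request(payload: str) -> None:
    # Edge layer: establish the context for this request once.
    correlation_id.set(str(uuid.uuid4()))
    save_to_storage(payload)


def save_to_storage(payload: str) -> None:
    # Lower layer: the id is available without being passed as an argument.
    print(f"[{correlation_id.get()}] persisting {payload!r}")


handle_request("order:42")
```

Like impact-measurement machinery for OKRs, this plumbing only pays off when it spans layers owned by different teams, which is why it rarely fits inside one team's deliverable.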
This paper also highlights that importing reified processes while bypassing the organizational learning that produced them doesn't guarantee similar results.