Marcel Krčah

Fractional Engineering Lead • Consultations • based in EU

Group Decision-Making: Three Areas to Check for Less Friction

A lot of my attention recently has gone into trying to better understand what causes frustration and loss of engagement in group decision-making and group exploration.

While there are many dynamics at play, there are three that have become clearer to me.

First, criteria. Criteria for decision-making might not be clear, or there might be no intention to clarify the criteria. If that's the case, people tasked with exploration may invest effort, enthusiasm, and energy in things that don't actually add value to the decision. That can be deeply frustrating: people want to contribute, yet see their work effectively go to waste.

Second, authority and hierarchy. There could be someone in the group who has a higher position and is, in reality, the decision-maker. If this person has already made a decision on part of the problem, but communicates the whole process as collaborative and democratic, it can create confusion. People might assume the process is egalitarian and that they can democratically steer the outcome, when in fact they cannot. It seems healthier to acknowledge these roles explicitly, so the group can support the decision-maker by filling in gaps where they actually matter.

Third, consolidation. Many people seem to appreciate consolidation, that is, higher-level summaries of what a group has learned so far. I think this is because people value big-picture clarity. With the big picture, people can better orient themselves in the problem space, see the next steps and also see how their effort contributes to the overall effort.

Some questions one can ask:

  1. Criteria: Are people aligned on the decision criteria? Do people interpret the criteria, and their weights, in the same way?
  2. Decision ownership: Is there an implicit owner of the decision? How is that role recognized within the group? How are decisions and remaining open points communicated back to everyone?
  3. Consolidation: Is learning being consolidated and summarized at a high level?

Debiasing decisions

To increase the chances that your decision leads to success, try identifying at least one honest downside to the approach you're suggesting.

I hang out a lot with people who make decisions. Many base their choices on criteria that are entirely one-sided.

I understand that decisions often need to be made quickly or with limited information; that's just part of the game. But I worry that one-sided decision-making clouds the fuzzy, complex reality we actually live in. That clouding often means extra effort is needed to achieve success. Hell, it can even actively undermine it.

Remember the goal: to learn as much about reality as possible under the given (time) constraints.

Acknowledging at least one honest downside in a proposal can help you see the situation more clearly and help achieve success faster.

one-sided-decisions

Stakeholder mapping

When I face a complex project with many stakeholders, stakeholder mapping has been an indispensable practice over the years.

stakeholder-mapping

I typically map stakeholders in Miro. This example is an anonymized version from a project involving 20+ people across 3 companies.

When mapping, I try to capture the essentials: name, role, company, and any important context. I then cluster names by team and place high management at the top.

The mapping doesn't need to be complete; its purpose is to help me and the team quickly navigate the stakeholder landscape.

Subjects of sentences

A few years ago, I watched a talk by Larry McEnerney on writing. In the talk, he made a point that has stuck with me:

What are the subjects of your sentences? Underline the subjects of your sentences and ask yourself if they match what the reader is interested in.

Does this translate to engineering work as well? When we work, what are the subjects of our daily discussions? Are the subjects things like: ticket, sprint, points, goal, review? Or are they more like: customer, error, decrease, bet?

Could it be that the subjects of our sentences signal what we actually value and what we optimize for in our work?

Going beyond the surface (video)

A simple question can be a gate to something more fundamental.

For example: "Should we estimate with t-shirt sizes or Fibonacci?" Looking beyond the surface, one might start talking about organisational constraints, AI, planning, incentives, and how decisions actually get made. Such a discussion might reveal that estimates aren't really about numbers, but about whom we're trying to serve, and why.

Miro board from the video

Going beyond the surface

To go beyond a problem's surface, you can try the following approach:

  1. Write down the problem or question you're facing
  2. Try to understand the underlying need
  3. Brainstorm approaches to address that need (at least two)
  4. Write down what you think about each approach
  5. If the problem is difficult or complex, introduce more structure: define criteria, then evaluate each approach against those criteria
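To make step 5 more concrete, here's a minimal sketch of a criteria-based evaluation as a weighted score. The criteria, weights, and scores below are entirely made up for illustration; the real work is agreeing on them with the group:

```python
# Hypothetical criteria and their weights (weights sum to 1).
criteria_weights = {"predictability": 0.5, "low overhead": 0.3, "team buy-in": 0.2}

# Hypothetical 1-5 scores for each approach against each criterion.
approaches = {
    "point estimates": {"predictability": 2, "low overhead": 2, "team buy-in": 3},
    "fixed cycles":    {"predictability": 4, "low overhead": 3, "team buy-in": 4},
    "continuous eval": {"predictability": 3, "low overhead": 4, "team buy-in": 4},
}

def weighted_score(scores):
    """Sum each criterion's score multiplied by that criterion's weight."""
    return sum(criteria_weights[c] * s for c, s in scores.items())

# Rank approaches from highest to lowest weighted score.
for name, scores in sorted(approaches.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(scores):.1f}")
```

The point isn't the arithmetic; it's that writing criteria and weights down forces the group to align on what each criterion actually means and how much it matters.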

Example: How should we do estimates: Fibonacci or t-shirt sizing?

Many teams discuss at the level of this question, but we first need to ask something else: What underlying need are we actually trying to address here? It could be something like:

  • Predictability for the business?
  • Engineers aligning on scope and approach?
  • A requirement to follow a given process?
  • Something else we're not seeing yet?

These discussions often feel like stepping into the unknown, which can be uncomfortable. Some people tend to rush through.

So let's say it's predictability, and we choose not to explore that need further. How can we make the work more predictable? Some approaches:

  1. Point estimates: might still be too micro-level to be truly predictable?
  2. Fixed project time (e.g., 6-week cycles like Shape Up): requires the ability to adjust scope accordingly
  3. Continuous evaluation: requires incremental delivery and ongoing alignment with the business
  4. No change, keep doing what we're doing: maybe suitable if there's too much other stuff going on right now?
  5. Something else?

Capturing this thinking and decision-making process in writing or in a diagram can be helpful as well.

When you make a decision after this kind of exploration, you'll likely have a clearer understanding of both the problem you're trying to solve and the solution space. As a result, the choice you make is likely to be stronger.

Overlooking the underlying problem

Here's an example of how overlooking the underlying problem can lead to wasted effort.

Recently, I was involved in an initiative to adopt contract tests, and the conversation went something like this:

Me: We're talking about adopting contract tests. What's the underlying need we're trying to solve?

X: To prevent integration errors after release, and reduce the number of incidents we see afterward.

Me: That sounds great. ...at the same time, I wonder, it seems we don't currently have a clear overview of all incidents happening in the team. Is that right?

X: Yep, we still need to work on that.

Me: Hmm. I'm curious, what did the last few incidents actually look like? What caused them?

Y: We had a severe one caused by a misconfiguration. And another big one was also caused by a misconfiguration.

Z: And there was another major one, that missing regression unit test, you remember?

Me: Yeah. So it sounds like the major incidents we remember weren't caused by service integration issues, but by misconfigurations and missing tests. Would that be right?

X: Yep.

Me: So it seems that if the goal is to lower the number of incidents, adopting contract tests might not actually help at this point.

X: That seems to be the case, yes.

Rather than diving straight into contract tests, the team would probably be better off starting with tracking incidents and understanding their causes. And maybe focusing on misconfiguration issues instead.

contract-tests-exploration

Who owns business value?

Who is truly responsible for delivering business value to the organization: the product owner or the engineering team?

I recently spoke with a product owner about his experiences working with engineers who are also focused on outcomes rather than just delivering requested features. Here's what he shared:

  • "When engineers focus on the outcome, I find myself trusting them more with technical decisions."
  • "Focusing on outcomes opens up new opportunities and ideas. The outcome is the key to having discussions about solutions."
  • "I love discussing outcomes with engineers because they simply have a different perspective than I do."
  • "I can create tickets for all these ideas, and then we'd have to discuss them. But then an engineer looks at it and thinks: I can fix this in half an hour. These everyday low-hanging fruits are really valuable."

The role of a product owner, as we know it, is to serve as the exclusive conduit of business value between IT and the business. But what if that model isn't working as well anymore?

Here's an excerpt from Mark Schwartz's The Art of Business Value on the topic.

art-of-business-value-excerpt

Different customer types

I recently talked with an engineering lead about how to become more customer-centric. They mentioned a lightbulb moment when we zoomed in on different customer types.

Many people see customer-centricity as being about external customers, that is, the people who pay money for the services. But there are other types of customers as well. In our exploration, we found the team was doing work for four different types:

  1. paying customers
  2. internal ops (customer care, finance, etc.)
  3. other engineering teams
  4. the team itself (improving efficiency, speeding up onboarding, etc.)

customer-types

(We included the team itself as well, although it's not technically a customer, because a lot of energy has been put into improving the team's way of working.)

We went further in the exploration:

  • Who's driving value for which customer type?
  • Does the team want to serve all these customer types?
  • Who's responsible for measuring impact for the different customer types?

Understanding who the customer is seems to have given the lead more clarity, particularly about responsibilities, roles, and the team's focus.

From features to outcomes

If you want to go from output to outcome, you can start bottom-up. For something you're working on, try to understand:

  1. What's the intended outcome?
  2. How can success be observed/measured?

output-vs-outcome-examples

After that, mentally let go of the output you were working on. (This detachment can be hard.) Instead, look into the measure and try to identify the best opportunities to achieve success. Choose the best opportunity and execute it. Afterwards, validate whether the opportunity has moved the measure as intended.

Going through these steps, you might run into issues, such as:

  • There are multiple outcomes we are after, which one to choose?
  • How to measure something intangible?
  • What if the opportunity will positively impact one metric, but negatively impact another metric?
  • We currently don't have anything to measure this or see where the opportunities are.
  • The measures are out of our team's control.
  • Is this outcome something we want to focus on right now?

This newly gained lack of clarity is good: you have just turned an unknown unknown into a known unknown.

I have found these steps to be generally applicable, from low-level engineering work to higher-level business decisions.

Measuring inconsistency

Some product issues arise from inconsistencies between two systems, where a single source of truth is hard to reach. For example:

  • Invoice status synced in two cloud systems
  • Code ownership defined in two places
  • Customer data living in two systems

inconsistencies-between-systems

Some inconsistencies are ok, some are not. Without clarity on the required degree of consistency, you might find yourself in confusing situations.

If inconsistencies are not ok, then you might need two things:

  1. A way to measure the degree of inconsistency
  2. A way to ensure the inconsistency stays within an acceptable range

Once you open up to the possibility of measuring inconsistency, you might find that measuring could be straightforward. For example:

  • A SQL query across two dbs that have already been ETLed to the shared data analytics db
  • A script in the CI/CD pipeline scanning for code ownership
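As a rough sketch of the first example, here's what such a consistency check could look like. The table names, columns, and data are hypothetical, and an in-memory SQLite database stands in for the shared analytics db:

```python
import sqlite3

# Hypothetical invoice tables from two systems, assumed to be
# ETLed into one shared analytics database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE invoices_system_a (invoice_id TEXT PRIMARY KEY, status TEXT);
    CREATE TABLE invoices_system_b (invoice_id TEXT PRIMARY KEY, status TEXT);
    INSERT INTO invoices_system_a VALUES ('inv-1', 'paid'), ('inv-2', 'open'), ('inv-3', 'paid');
    INSERT INTO invoices_system_b VALUES ('inv-1', 'paid'), ('inv-2', 'paid'), ('inv-3', 'paid');
""")

# Count invoices whose status differs between the two systems.
mismatches, total = conn.execute("""
    SELECT SUM(a.status != b.status), COUNT(*)
    FROM invoices_system_a a
    JOIN invoices_system_b b USING (invoice_id)
""").fetchone()

inconsistency_rate = mismatches / total
print(f"{mismatches}/{total} inconsistent ({inconsistency_rate:.0%})")

THRESHOLD = 0.05  # hypothetical acceptable range
if inconsistency_rate > THRESHOLD:
    print("Inconsistency above the acceptable range -> investigate")
```

A scheduled job running a query like this is often enough to keep the degree of inconsistency visible over time.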

Having the measurement at hand also helps communicate your technical work to non-technical colleagues, who then have a better overview of the problem.

But again, it could be that inconsistencies are ok. They might have low business impact, or the systems might be slated for replacement.

As with other things, you might benefit from understanding the surrounding technical and business context to choose the right action and the level of consistency that is acceptable in your particular situation.

Measuring intangibles

In the book How to Measure Anything by Doug Hubbard, this section has stuck with me for a long time.

clarification-chain

In engineering, things often seem impossible to measure, especially the soft, intangible things. But if something is valuable, it should be observable in some way.

Since reading the book, I've been asking much more often: how can we observe the success we're after? (Which often opens up the question of what success actually is.) And then trying, incrementally, to find a way to capture that observation in a measurement.

Simple wireframes

When building something new, try starting with an embarrassingly simple UI wireframe. It doesn't have to be perfect; it just needs to capture the bare functional minimum. The goal is to start cross-functional conversations as early as possible and get early feedback from the end user.

simple-wireframe-example

Such wireframes seem especially important if there's no UI/UX designer available, or if the designer doesn't have the capacity to understand the project in depth. The wireframe doesn't have to be started by a product manager or a frontender; a backender can do it just as well.

For the initial wireframe, I like Google Sheets. We can focus on functionality over form, and people are familiar with the tool. Also, we often already have a project-specific sheet, so it's a natural fit.

Cross-functional discussions over the wireframe often open up interesting questions. Sometimes, the questions touch upon strategic choices about customer experience, the frontend stack, or the backend architecture. For example:

  • This task is async on the backend. Do we notify customers async, or do we keep customers waiting for the result to appear on the page? What are the downsides of switching to sync? How are we handling async in other flows?
  • Is our data model on the frontend/backend/db aligned with the domain language concepts we are working towards? Do we have all the data we need on the backend?
  • How will we handle unhappy paths? What will we show the users? How do customer care agents currently handle async unhappy flows? What's their experience? How can we improve that?
  • For internal projects: Would new colleagues understand the process if there's employee turnover?

All in all, wireframes seem like a cheap and fast way to help prevent surprises down the road and increase cross-functional alignment.