As a developer, it’s easy to get lost in the design phase of a project.
Whether you’re a front-end engineer working on visual design or a back-end specialist working on architecture, at some point you must decide when to stop designing the system you’re going to build and start building it.
But how do you know when the time is right? Khris Persaud, a senior data engineer at Pearl Health, uses a process that NASA abides by.
“NASA followed the iterative design process in the 1960s when they designed and built the Mercury flight software,” Persaud said. “It works as a ‘loop,’ where you cycle through different steps until arriving at a solution that sufficiently fits the problem.”
Kelsey Leftwich, a senior software engineer at Mantl, relies on the MoSCoW method.
“The MoSCoW method helps you determine what a product must do, should do, could do and won’t do,” Leftwich said. “This prioritization ensures that everyone involved understands what we are actually building, and maybe more importantly, what we aren’t building for this iteration.”
Both of these processes help to ensure that developers don’t get stuck in the design process.
“Done is better than perfect,” Sager Davidson, VP of engineering at Nayya, said. “Never take the time to get to 100% certainty. Get close to certainty and iterate on your solution with feedback from users.”
How do you know when it’s time to stop designing and begin building and iterating?
Every situation is going to be a little different, so hard-and-fast rules are less helpful than a set of standard tactics for testing your design and setting thresholds for when that testing is done.
If you’re trying to get out a quick solution, you might aim for 50% certainty and validation from a colleague as the threshold for wrapping up design testing. On the other hand, if you have a bit more time until your deadline and your solution needs to be scalable for a large audience, you’ll want to be closer to 90% certain — which will likely require some experimentation in addition to verbal validation.
What’s an example of your design-and-build process in action on a recent project?
We recently tackled an issue of internal efficiency. We had been manually sorting our users into categories to give them access to the correct set of products they are eligible for, but we had a hunch that we could use data and some simple logic to our advantage, saving hundreds of hours of labor.
We knew that the solution would be a combination of using existing data, gathering more data from users, and relying on an algorithm that would make a decision for us. As for the details, we were pretty uncertain before doing more digging.
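A rule-based categorizer like the one described might be sketched as follows. This is an illustrative assumption, not Nayya’s actual logic: the field names (`employment_type`, `tenure_months`), category names, and rules are all invented for the example.

```python
# Hypothetical sketch of rule-based eligibility sorting. All field names,
# categories, and rules here are invented for illustration.

ELIGIBILITY_RULES = [
    # (category, predicate) pairs, checked in priority order
    ("full_suite", lambda u: u["employment_type"] == "full_time" and u["tenure_months"] >= 12),
    ("core_benefits", lambda u: u["employment_type"] == "full_time"),
    ("limited", lambda u: u["employment_type"] == "part_time"),
]

def categorize(user: dict) -> str:
    """Return the first category whose predicate matches, else a default."""
    for category, predicate in ELIGIBILITY_RULES:
        if predicate(user):
            return category
    return "needs_manual_review"  # fall back to the old manual process

print(categorize({"employment_type": "full_time", "tenure_months": 3}))
```

Keeping the rules as data rather than nested conditionals makes it easy for non-engineering stakeholders to review them, and any user no rule matches falls back to the existing manual process rather than being silently miscategorized.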
When did you know it was time to stop designing and start building?
We started by ideating with folks from product, customer success and engineering. Once we agreed on overall data- and user-flows, we created what we call a design document. This document holds acceptance criteria, assumptions we’re making, open questions, database schema adjustments, and even a high-level description of the structure of the code needed to introduce the change.
We then set aside one day to build a quick spike based on the design we had in place. From this effort, we learned that our data model was slightly off, and that part of the code design was going to be inefficient. At this point, the technical solution was relatively clear, stakeholders understood and agreed with the broad strokes of the user flows, and we felt pretty confident.
We broke the work into stories, did some sequencing, and got to work!
Nayya is a benefits experience platform that allows its consumers to choose and use their benefits through personalization.
How do you know when it’s time to stop designing and begin building and iterating?
I have two processes to know when to start building. The first is using MoSCoW analysis with stakeholders to determine what a product must do, should do, could do and won’t do. This prioritization ensures that everyone involved understands what we are actually building and maybe more importantly, what we aren’t building for this iteration.
Once we have prioritized requirements from MoSCoW, the next process is taking each requirement and writing its acceptance criteria. Criteria should be in non-technical language and include reproducible steps. Our acceptance criteria for this requirement should be specific and easy for all stakeholders and contributors to understand.
Once we know what we’re building (MoSCoW) and how we’ll know when we’ve successfully implemented it (acceptance criteria), we can begin work on design and technical deliverables.
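A MoSCoW analysis can be captured as plain data so the whole team can review what is, and isn’t, in scope for an iteration. This is a minimal sketch; the requirements listed are invented examples, not Mantl’s actual backlog.

```python
# Illustrative MoSCoW breakdown as plain data. The requirements themselves
# are invented examples for a hypothetical account-opening dashboard.

moscow = {
    "must": ["Support checking and savings account types"],
    "should": ["Support money-market accounts"],
    "could": ["Let employees customize the task list"],
    "wont": ["Support business accounts (deferred to a later iteration)"],
}

def in_scope(requirement: str) -> bool:
    """A requirement is in scope for this iteration unless it's a 'won't'."""
    return any(requirement in reqs for key, reqs in moscow.items() if key != "wont")
```

Writing the “won’t” list down explicitly is the point: it turns an unspoken assumption about scope into something stakeholders can agree to or push back on.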
What’s an example of your design-and-build process in action on a recent project?
We recently began work on a dashboard where bank employees can view tasks to open a bank account. The stakeholder and development team began by meeting to decide what this product “must do.” We understood it must support multiple products. Then we asked, “What account types should we support?” This question represents our “should do.” It is unreasonable to expect any product to support all variations of non-trivial business processes. “Should do” requirements are valuable to our customers but there is some flexibility in how and when we deliver these features.
It’s tempting to take these requirements and get to work. Resist this temptation! Take the time to say what this iteration “could have” and “won’t have.” These are critical in understanding where we need to pause and take stock of what we’ve completed. When we have our “must haves” and “should haves” completed, it’s a perfect time to hand over the product to user testers. We won’t know if we’ve reached this point if it’s ambiguous.
After defining “could haves” and “won’t haves,” we wrote acceptance criteria. I write acceptance criteria like a QA test, defining the steps a user will take and system output. Now we can begin building with the end in mind.
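An acceptance criterion written as reproducible steps translates naturally into an automated check. In this sketch, `open_account_tasks` is a hypothetical stand-in for whatever system implements the requirement, and the task names are invented:

```python
# Hedged sketch: one acceptance criterion expressed as an executable test.
# open_account_tasks and its task names are hypothetical stand-ins.

def open_account_tasks(account_type: str) -> list[str]:
    # Stub implementation for illustration only.
    tasks = {
        "checking": ["verify identity", "fund account", "order debit card"],
        "savings": ["verify identity", "fund account"],
    }
    return tasks.get(account_type, [])

def test_checking_account_shows_all_onboarding_tasks():
    """Acceptance criterion: when a bank employee views the dashboard for a
    checking account, they see every task required to open that account."""
    tasks = open_account_tasks("checking")
    assert "verify identity" in tasks
    assert "fund account" in tasks
    assert "order debit card" in tasks

test_checking_account_shows_all_onboarding_tasks()
```

Because the criterion names concrete steps and expected output, passing the test is an unambiguous signal that the requirement is done.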
When did you know it was time to stop designing and start building?
We knew it was time to build when we unambiguously understood what we were building, what we weren’t building, and how we’d test it when it was implemented. We had a shared understanding of what our product would be when this iteration was finished. From this state, product development flows forward organically. We are constantly getting accurate feedback because we know what our measuring stick is.
MANTL is an enterprise SaaS company helping traditional financial institutions modernize and grow.
How do you know when it’s time to stop designing and begin building and iterating?
It varies based on the scope of the problem, but for larger projects (those that add core components or span multiple sprints), we use the iterative design process.
In phase one, design, we work with a project manager to collect requirements and eliminate as many unknowns as possible. Once we’ve learned enough, we create a draft design and document it. That documentation serves as our compass through the system’s life cycle. We keep our documentation up to date whenever we make a design change, so the docs always provide a bird’s-eye view of the current approach.
In phase two, implementation, we translate the design into working code.
In phase three, test, we ensure that we’ve met the requirements and that the implementation is doing what we intend. Finally, in the evaluation phase, we review the solution and determine if we gained new information.
What’s an example of your design-and-build process in action on a recent project?
At Pearl Health, we pay our customers. Payments are a critical part of our software platform, and we designed and built our payments system using the iterative design model. Payments are calculated using a data set that we receive from a third party on a quarterly basis. In our first pass through the iterative design model, we designed and shipped a solution that gets a one-time historical snapshot, and then adds new records each quarter.
However, when we got our second data set the following quarter, we saw that the vendor included modifications to historical records! Our original design hadn’t accounted for this. So we went back to iterative design and started a new iteration. This time, we made two changes. First, we revised the data model to store the complete data set, including historical data, every quarter instead of saving only new records. Second, we revised our payment algorithm to use the most recently seen version of each historical record.
We repeated the iterative design model the following quarter when we received the latest data set. We validated that the new design worked as intended.
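The revised approach can be sketched in a few lines, under assumed field names: keep every quarterly snapshot in full, then pay from the most recently received version of each record, so vendor corrections to history win.

```python
# Minimal sketch of the revised data model. Quarter labels, record IDs, and
# amounts are invented; each snapshot is the vendor's complete quarterly file.

snapshots = [
    # (received_quarter, records) — each record_id maps to a payment amount
    ("2023Q1", {"rec-1": 100, "rec-2": 250}),
    ("2023Q2", {"rec-1": 110, "rec-2": 250, "rec-3": 75}),  # rec-1 corrected
]

def latest_records(snapshots):
    """Merge snapshots in quarter order; later versions overwrite earlier ones."""
    merged = {}
    for _quarter, records in sorted(snapshots):
        merged.update(records)
    return merged

print(latest_records(snapshots))  # rec-1 reflects the Q2 correction
```

Storing every snapshot whole costs more space than appending only new records, but it makes historical corrections a non-event: the merge step simply picks up the newest version of each record.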
When did you know it was time to stop designing and start building?
Here are the questions Persaud asks himself to gauge whether his team is ready to move on:
- Are there things we don’t know and will only discover during later phases?
- Can we make assumptions about those unknowns? And if so, what are the risks of those assumptions? Are the risks acceptable?
- Do those assumptions affect the project’s delivery date?
- What’s the cost of waiting until we can learn more?
In the payments process, we knew the designs were incomplete during the first two iterations, but we did not have the luxury of waiting for the data to arrive or building for all potential outcomes. The risk of failing to deliver payments to our customers, or delivering inaccurate payments, was more significant than the effort needed to refactor the system later.
Pearl Health is democratizing access to value in healthcare, providing physicians with the tools to provide great care.