How GlossGenius and Tildei Have Mastered the Art of Fast, Safe and Meaningful AI Releases

Learn how a philosophy of code ownership, progressive rollouts and other practices drive high-quality automation at GlossGenius and Tildei.

Written by Olivia McClure
Published on Feb. 24, 2026
REVIEWED BY
Justine Sullivan | Feb 25, 2026

At GlossGenius, a company that provides software for appointment-based businesses, engineers follow one simple rule when it comes to ensuring fast, safe and meaningful AI releases. 

“Every engineer owns their code from commit to production, with progressive rollouts that catch problems before they reach customers,” Vice President of Engineering Braden Allchin said. 

To gauge the “quality” of their tech stack, Allchin and his peers track observability across the platform and pair it with qualitative data, such as tickets filed by the company’s customer experience team, to fully understand both system and product performance. This practice enables Allchin’s team to continuously improve the company’s platform and prevent similar issues in the future. 

Meanwhile, at marketing software provider Tildei, Lead Software Engineer Austin Bruch and his teammates keep AI releases fast and safe by shipping concentrated, well-tested changes through a multi-environment deployment pipeline that ensures quality and compatibility. 

“Releases must satisfy comprehensive test suites focused on mission-critical features and code paths,” he said. 

So, how does Bruch’s team determine if a release works? He said they pay attention to the ratio of successful deployments compared to those that experience deployment issues, create degraded service or require rollback. 

Taking a well-strategized approach to AI releases has benefited engineers at both GlossGenius and Tildei, enabling them to ship new automation features that have greatly impacted their teams and companies as a whole. Read on to learn more about how Allchin’s and Bruch’s teams tackle fast, safe and meaningful AI releases.  

 

Image of Braden Allchin
Braden Allchin
Vice President of Engineering

The GlossGenius platform is designed to help appointment-based businesses grow their operations and maximize their income, offering features such as appointment scheduling, payment processing and client management. 

 

What’s your rule for fast, safe releases — and what KPI proves it works?

Our rule is simple: Every engineer owns their code from commit to production, with progressive rollouts that catch problems before they reach customers. We develop our products incrementally and always ship to production, but under a feature flag. Every production deployment goes through a canary rollout using Argo Rollouts, starting with a small percentage of traffic. From there, we monitor error rates. If something goes wrong, we roll back immediately. Even if a bug slips through, it affects only a fraction of users while we detect and respond.
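The progressive rollout Allchin describes can be pictured as a simple decision loop: shift a little traffic, watch the error rate, and either continue or roll back. The sketch below is illustrative only. The traffic weights, the 1 percent error threshold and the `monitor_error_rate` hook are assumptions made for the example, not GlossGenius’s actual Argo Rollouts configuration.

```python
# Illustrative sketch of a canary rollout decision loop. The weights,
# threshold and monitoring hook are hypothetical, not a real config.

ERROR_RATE_THRESHOLD = 0.01          # roll back if >1% of canary requests fail
CANARY_STEPS = [5, 25, 50, 100]      # percent of traffic shifted at each step


def run_canary(monitor_error_rate) -> str:
    """Progressively shift traffic, rolling back on elevated errors.

    `monitor_error_rate(weight)` is a hypothetical hook returning the
    observed error rate while `weight` percent of traffic hits the canary.
    """
    for weight in CANARY_STEPS:
        error_rate = monitor_error_rate(weight)
        if error_rate > ERROR_RATE_THRESHOLD:
            return f"rolled back at {weight}% (error rate {error_rate:.1%})"
    return "promoted to 100%"


# Example: a canary that degrades once it takes half the traffic is
# caught at the 50% step, so only a fraction of users ever see the bug.
print(run_canary(lambda weight: 0.002 if weight < 50 else 0.03))
```

In a real Argo Rollouts setup, the equivalent of this loop is declared as canary steps in the Rollout resource and the error-rate check is an analysis run against a metrics provider; the point of the sketch is only the shape of the decision.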

 

“If something goes wrong, we roll back immediately. Even if a bug slips through, it affects only a fraction of users while we detect and respond.”

 

The KPIs that prove this works come from DORA: deployment frequency, lead time for changes, change failure rate and mean time to recovery. We target daily deployments of our shared codebases, but frequency alone can be misleading, since you could deploy constantly and still break things. Change failure rate measures how often deployments cause incidents requiring rollback, which keeps us honest. We also track lead time for changes, the duration from merge to production. This tells us whether our pipeline is getting faster or slower and whether our automation is paying off.
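Two of the DORA metrics Allchin mentions, change failure rate and lead time for changes, reduce to simple arithmetic over deployment records. The record fields and sample data below are hypothetical, made up to show the calculation rather than reflect any real pipeline.

```python
# Minimal sketch of two DORA metrics over hypothetical deployment records.
from datetime import datetime, timedelta

deployments = [
    {"merged": datetime(2026, 2, 1, 9), "deployed": datetime(2026, 2, 1, 11), "caused_incident": False},
    {"merged": datetime(2026, 2, 2, 9), "deployed": datetime(2026, 2, 2, 10), "caused_incident": True},
    {"merged": datetime(2026, 2, 3, 9), "deployed": datetime(2026, 2, 3, 12), "caused_incident": False},
    {"merged": datetime(2026, 2, 4, 9), "deployed": datetime(2026, 2, 4, 10), "caused_incident": False},
]

# Change failure rate: share of deployments that caused an incident.
change_failure_rate = sum(d["caused_incident"] for d in deployments) / len(deployments)

# Lead time for changes: average duration from merge to production.
lead_time = sum(
    (d["deployed"] - d["merged"] for d in deployments), timedelta()
) / len(deployments)

print(f"change failure rate: {change_failure_rate:.0%}")  # 25%
print(f"mean lead time: {lead_time}")                     # 1:45:00
```

Deployment frequency and mean time to recovery follow the same pattern: counts per day over the same records, and incident open-to-resolve durations averaged the same way as lead time.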

 

What standard or metric defines “quality” in your stack?

We don’t have a single number; for us, quality is the discipline of measuring, reviewing and improving constantly. We track observability across the platform, including 4xx/5xx rates, latency, and memory and CPU usage, combined with more qualitative data like tickets filed by our customer experience team to get the full picture of both system and product performance. Every customer issue is an opportunity to improve our product and determine what we could have done differently to prevent such an escalation from happening again.
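The system-side signals Allchin lists, error rates and latency, can be computed from a request log in a few lines. The sample log and the nearest-rank choice of p95 below are assumptions for illustration, not a description of GlossGenius’s observability stack.

```python
# Illustrative calculation of 4xx/5xx rates and p95 latency from a
# made-up request log of (HTTP status, latency in ms) pairs.
request_log = [
    (200, 45), (200, 52), (500, 310), (404, 30), (200, 48),
    (200, 60), (503, 290), (200, 55), (200, 47), (200, 120),
]

total = len(request_log)
rate_4xx = sum(1 for status, _ in request_log if 400 <= status < 500) / total
rate_5xx = sum(1 for status, _ in request_log if status >= 500) / total

# p95 latency via nearest-rank: the value at ceil(0.95 * n) in sorted order.
latencies = sorted(ms for _, ms in request_log)
p95 = latencies[-(-95 * total // 100) - 1]

print(f"4xx rate: {rate_4xx:.0%}, 5xx rate: {rate_5xx:.0%}, p95: {p95}ms")
```

In practice these numbers come from a metrics backend rather than raw logs, but the definitions being charted are the same.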

 

Name one AI/automation that shipped recently and its impact on the business.

A recent one is our AI Growth Analyst, an intelligent business analytics agent we launched to all 100,000 businesses on our platform in December 2025. Our customers are busy professionals who lack time to dig through dashboards. They have questions like, “What were my top services this month?” but should not have to learn complex analytics tools. The AI Growth Analyst lets them ask in plain English and get immediate answers with visualizations, covering metrics across revenue, sales, services, clients and retail. The customer impact has been immediate. Professionals are now engaging with their business data regularly, discovering insights they would not have found manually. 

The deeper impact has been on our engineering capabilities. To ship this, we built an Agent Platform that enables any team to create AI-powered features. What took months for the growth analyst can now be built in weeks. Given that AI systems are non-deterministic, we had to invest in evaluations that give us visibility into agent performance in production. The rollout demonstrated our move-fast philosophy through closed beta and early access before full release. This feature represents our commitment to using AI to genuinely empower small- and medium-sized businesses.

 

 

Related Reading: How These Financial Services Companies Are Modernizing Their Products and Workflows With AI

 

Image of Austin Bruch
Austin Bruch
Lead Software Engineer 

Tildei’s platform is designed to enable marketing teams to build AI agents that automate workflows across their entire marketing stack. 

 

What’s your rule for fast, safe releases — and what KPI proves it works?

My rule is small, frequent releases with concentrated, well-tested changes, going through a multi-environment deployment pipeline to ensure quality and compatibility. Releases must satisfy comprehensive test suites focused on mission-critical features and code paths. The KPI that proves this works is the ratio of successful deployments to those that experience deployment issues, create degraded service or require rollback.

 

“My rule is small, frequent releases with concentrated, well-tested changes, going through a multi-environment deployment pipeline to ensure quality and compatibility.”

 

What standard or metric defines “quality” in your stack?

Checks and tests at every stage of the software development lifecycle. This includes type checking; test coverage (unit, integration and end-to-end); durable execution (Temporal) history testing; code reviews from Claude and Microsoft Copilot, which let engineers focus their reviews on higher-level topics; and multiple human reviews and approvals for release promotion. All of these must pass, with no exceptions.

 

Name one AI/automation that shipped recently and its impact on your team and/or the business.

Recently, we shipped a built-in framework for rapidly defining custom tools for agents to securely engage with external integrations. This has enabled us to quickly integrate with many third-party services to provide value to any customer or prospect, regardless of their existing stack.

 

Responses have been edited for length and clarity. Images provided by Shutterstock and listed companies.