
Speeding up the slowest page in the Vanta app by 7x
Like many fast-growing startups, Vanta had to move quickly and ship a ton of features. At first, things were fine. Pages loaded quickly enough and everything generally just worked. But as Vanta continued to scale up, cracks began showing in assumptions we made about the amount of data the app handled. Soon, we hit a wall trying to launch our largest framework to date and needed to re-examine our engineering and design patterns with performance in mind. By the end, we’d sped up the slowest page in the Vanta app by a remarkable 7x, taking it from “rage-click-until-it-responds” to smooth and snappy.
A new challenger approaches
In February 2023, we planned to launch a framework from the National Institute of Standards and Technology called NIST 800-53 on Vanta. It would be the 20th framework on the platform, so we thought we had ironed out all the kinks in the launch process by then. How different could this one be?
As it turns out, very. NIST 800-53 was unlike any framework we had launched before, and we severely underestimated its size. In terms of requirement count, NIST 800-53 was three times bigger than the next-biggest framework on the platform, FedRAMP, and a whopping 16 times bigger than the most popular framework, SOC 2.

The framework detail page, one of the most important in the app, used to take a second or two to load in our dev environment but now timed out after 30 seconds with NIST 800-53. As the go-to destination for customers to see all their security commitments, that page is especially critical during audits where one status change can put success at risk. On the infrastructure side, the team also worried about cascading failures caused by overloaded web servers. Clearly, we had to make drastic changes before rolling it out.
To get NIST 800-53 launch-ready, we focused on the low-hanging fruit first. Analyzing the API requests, we identified inefficient query patterns and applied our own advice to streamline them. We also moved status calculations from the browser to the server, resulting in fewer bytes sent over the wire.
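As a sketch of the second fix: instead of shipping every underlying result to the browser and computing statuses there, the server can roll results up into one small status per requirement. The types and names below are hypothetical, not Vanta's actual data model:

```typescript
// Hypothetical shapes; Vanta's real data model is more involved.
type TestOutcome = "pass" | "fail" | "not_applicable";

interface ControlResult {
  controlId: string;
  outcome: TestOutcome;
}

type RequirementStatus = "complete" | "incomplete" | "not_applicable";

// Roll per-control results up into a single status on the server, so the
// browser receives one small enum per requirement instead of every
// underlying result.
function summarizeRequirement(results: ControlResult[]): RequirementStatus {
  const applicable = results.filter((r) => r.outcome !== "not_applicable");
  if (applicable.length === 0) return "not_applicable";
  return applicable.every((r) => r.outcome === "pass")
    ? "complete"
    : "incomplete";
}
```

Only the computed status needs to cross the wire, which is what shrinks the payload.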
Fortunately, these targeted fixes made the page performant enough to meet the launch deadline (and, more importantly, not take down the servers in the process). Loading the page was still painful, though, so we planned to revisit it and make foundational improvements once there was more breathing room in the product roadmap.
Measuring page performance with Datadog
Over the next year, Vanta’s customer base continued to grow. With more established companies signing up every day, app performance shot up to the top of our priorities list. As a first step, we looked for a dead-simple way to measure current performance levels, eventually settling on Datadog’s loading_time metric. It wasn’t perfect (far from it in fact) but did check a few important boxes:
- Ease of setup: it came mostly out-of-the-box with our Datadog integration
- Customer focus: it measured how users actually experienced the app (compared to a metric like server response time)
- Debuggability: engineers could easily find session replays for slow-loading pages
After collecting initial measurements, we found that the framework detail page was one of the slowest in the app. It routinely took over 8 seconds to load and, for supersized frameworks like NIST 800-53, up to 20 seconds. Anyone demoing NIST 800-53 would feel beads of sweat forming as the loading screen kept spinning and spinning and... spinning.
We tried more backend optimizations but none really moved the needle on page load times. The bottleneck had to be elsewhere in the stack.
The bottleneck is coming from inside the browser
Once we changed course and turned our attention to the frontend, the problem immediately jumped out at us: we’re putting way too much content on the page.

Each requirement (e.g. “AC-1 Policy and Procedures”) displays a table listing what the customer is doing to meet it. NIST 800-53 has over 1000 requirements so we slap 1000s of tables on the page and call it a day. No wonder the browser locks up trying to render it all.
This also checked out in Datadog, which frequently registered the page load event after several mysterious “Long Tasks,” likely caused by React rendering a ton of content.
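For context on those “Long Task” entries: the browser's Long Tasks API reports any main-thread task that runs longer than 50 ms, and metrics like Total Blocking Time count only the portion past that threshold. Here is a minimal sketch of that tally over entry-like records; in a real page they would come from a PerformanceObserver watching the `longtask` entry type:

```typescript
// Minimal shape of an entry from the browser's Long Tasks API.
interface TaskEntry {
  name: string;
  startTime: number; // ms since navigation start
  duration: number; // ms the main thread was busy
}

// Total Blocking Time: for each task longer than 50 ms, count the
// portion of its duration past the 50 ms threshold.
function totalBlockingTime(entries: TaskEntry[]): number {
  return entries
    .filter((e) => e.duration > 50)
    .reduce((sum, e) => sum + (e.duration - 50), 0);
}

// In a real page, entries would arrive via:
//   new PerformanceObserver((list) => { /* tally list.getEntries() */ })
//     .observe({ type: "longtask", buffered: true });
```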

The general strategies we’ve used for other pages with lots of data wouldn’t cut it here. Incremental loading doesn’t work because users don’t navigate the page linearly from top to bottom. Virtualization proved difficult to implement because the tables had variable heights; we tried twice before giving up because the code got too gnarly and unmaintainable. Both ideas were dead-ends.
Now it was time to bring out some new tools: the React Profiler and a useWhyDidYouUpdate hook.
Down the rabbit hole of React rendering
After recording a page load, the React Profiler revealed something stunning: we rendered the entire page not once, not twice, but three times on initial load.

The render at 3.6s kicks off once the browser finishes loading framework data from the server, but what about the two later ones? Using the useWhyDidYouUpdate hook, we tracked down the root causes of those extra renders:
- We send a follow-up API request after the initial render to populate a dropdown but updating the dropdown re-renders the entire page (oops).
- Hydrating state from the browser’s local storage triggers an unnecessary navigation event (double oops).
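The heart of a useWhyDidYouUpdate-style hook is a diff between the previous render's props (stashed in a ref) and the current ones. Sketched as a plain function, with names of our own choosing:

```typescript
// Return the prop names whose values changed identity between renders.
// In the real hook, `prev` comes from a useRef holding last render's
// props, and the changed keys are logged to the console.
function changedKeys(
  prev: Record<string, unknown>,
  next: Record<string, unknown>
): string[] {
  const keys = Array.from(
    new Set(Object.keys(prev).concat(Object.keys(next)))
  );
  return keys.filter((k) => !Object.is(prev[k], next[k]));
}
```

Temporarily wrapping a suspect component with this kind of logging points straight at the prop that triggers a render cascade.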
A carefully placed React.memo, plus tweaking the local storage logic to skip the unnecessary navigation event, brought the p95 page load from 7 seconds down to 5.5. A nice win, but we needed more.
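For reference, React.memo's default bail-out is a shallow comparison of props: the memoized component skips re-rendering only when every top-level prop keeps the same identity. A sketch of that comparison, not React's actual source:

```typescript
// Shallow prop comparison along the lines of React.memo's default check:
// bail out of re-rendering only when every top-level prop is unchanged.
function shallowEqual(
  prev: Record<string, unknown>,
  next: Record<string, unknown>
): boolean {
  const prevKeys = Object.keys(prev);
  if (prevKeys.length !== Object.keys(next).length) return false;
  return prevKeys.every(
    (k) =>
      Object.prototype.hasOwnProperty.call(next, k) &&
      Object.is(prev[k], next[k])
  );
}
```

This is also why memoization only helps when parents pass stable props; a freshly created object or callback on every render defeats the check.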

We’d cut out the extra renders but now had to tackle that expensive initial render. There was no avoiding it—the only way to make the page feel snappy was by fundamentally changing the UX. Working with the design team and our expert front-end engineers, we aligned on a promising solution but weren’t sure how much it would improve the page load time. There was only one way to find out.
We hacked together a prototype where framework requirements loaded in a “collapsed” state by default and gave it a test drive. Now that we weren’t loading 1000s of tables on the page, all the unending loading animations were gone. Even with NIST 800-53, the content popped up on the page almost instantaneously. The difference was night and day.
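The collapsed-by-default idea boils down to a cheap render plan: collapsed requirements mount a lightweight summary row, and only explicitly expanded ones mount their expensive tables. A simplified sketch (types are illustrative, not Vanta's actual components):

```typescript
// Illustrative types; the real page renders React components.
interface Requirement {
  id: string;
  expanded: boolean;
}

type Row =
  | { kind: "summary"; id: string } // cheap one-line row
  | { kind: "table"; id: string }; // expensive evidence table

// Collapsed-by-default: only requirements the user has expanded mount
// their expensive tables; everything else gets a summary row.
function renderPlan(requirements: Requirement[]): Row[] {
  return requirements.map((r) =>
    r.expanded
      ? { kind: "table" as const, id: r.id }
      : { kind: "summary" as const, id: r.id }
  );
}
```

With 1000+ requirements and a handful expanded at a time, the browser mounts a handful of tables instead of thousands.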
Here’s a side-by-side comparison of the before and after:

The new experience felt like a step-change improvement, but would it hold up with real users and real data? We committed to polishing up the rough edges and shipping it to production so we could measure its impact. A few iteration cycles and design tweaks later, the new experience was ready to go live. We turned on the feature flag and eagerly waited for the data to come in.

After we collected a week’s worth of data, the impact was clear as day: a 60-70% improvement in page load times across the board. The vast majority of users saw the page finish loading much faster than in the old experience. As a bonus, all interactions on the page (clicking buttons, typing in the search input) got noticeably snappier now that there weren’t 1000s of tables weighing them down.

A/B test results with half of users on the new experience and half on the old
Lessons and takeaways
At the beginning of our journey, back when loading NIST 800-53 timed out in dev, we couldn’t possibly have imagined the huge performance improvements we’ve made since. From these experiences, we learned (and lived through!) a few valuable lessons:
- Performance is best as a team sport: We found our most impactful improvements when designers, product people, and engineers of all stripes worked together. Engineering-only solutions might have made some progress on page performance but certainly not the step-change improvement we ended up making. And bringing everyone along for the ride changed performance from just an engineering problem to everyone’s problem.
- Be prepared to chase performance bottlenecks across the stack: Optimizations can be found everywhere; no line of code should be safe from scrutiny. That means not letting fear hold you back from digging into unfamiliar parts of the codebase or unfamiliar layers of the stack. If we hadn’t questioned our assumptions about the backend being the bottleneck and pushed through the discomfort of learning to debug frontend performance, we’d still be stuck with three expensive renders on page load.
- Don’t underestimate the power of prototypes: It took a day or two to prototype the new experience but it was worth it. We’d all been used to waiting forever for the page to load and seeing the content pop in so quickly galvanized the team to ship that experience to customers. The first demo blew everyone away and the feeling of momentum it gave us was incredibly energizing.
Performance work is never truly done, but every little contribution makes the user experience smoother and snappier. Even recently, we found another optimization in our database query patterns that further dropped the page load time. But long-term, the keys to app performance are to stay curious, work together, and never stop pushing for better.
Interested in joining us on this journey? We’re hiring!




