CMP / Cookie Banner and web performance: comparison of 11 tools

Eroan Boyer

Consent Management Platforms and Cookie Banners, hereafter referred to as CMPs, can have a significant impact on web page performance. Several Core Web Vitals metrics can be degraded depending on the technical choices made by their publishers:

  • Increased TBT and FID due to JavaScript execution and DOM manipulation at initial page load.
  • Increased INP due to DOM manipulation, JavaScript event listeners and asynchronous data transfers.
  • Degraded LCP, when the final LCP candidate becomes an element of the CMP at the expense of site-specific UI elements.

When choosing a tool, it’s essential to consider the impact of CMPs on performance independently of their ability to bring you into compliance with the various regulations (GDPR, ePR, CCPA, LGPD, CNIL guidelines…) or their price. That’s why we’ve drawn up this comparison.

Why did Agence Web Performance make this comparison?

As part of our audit, optimization and web performance support services, CMPs are almost always a focus in their own right. We are regularly called upon by our customers (fr) to provide recommendations on the choice of the best cookie management tool to ensure optimal performance.

And yet, none of the existing comparisons provide any relevant answers: most of them simply repeat the arguments put forward by one player or another. Or, when they do adopt a technical approach, they include only two or three points of comparison, completely missing certain critical factors that a web performance expert simply can’t ignore.

Google search for "best CMP performance".
Between affiliated content, self-promotion and AI-generated lists, it’s hard to find real information to compare CMPs.

With this CMP performance comparison, our aim is to provide as many people as possible with an objective comparison based on concrete elements. This requires several prerequisites:

  • Agence Web Performance is not affiliated or partnered with any CMP: we have no financial interest in promoting one tool over another.
  • For maximum transparency, we have taken care to detail our methodology in the following section.

What methodology is used and how are scores calculated?

The methodology deployed for this comparison is based on the one used for our performance audit and optimization services. Given the specificity of CMP tools, only 12 checkpoints have been selected, as opposed to the usual hundred or so.

In order to obtain a percentage performance score for each tool, the various components were prioritized according to their impact on overall performance. This distribution is probably the most “debatable” aspect of our work: assigning a weight to a criterion in an overall score necessarily requires a decision to be made, which we did after careful consideration.

Here are some additional details on the methodology used:

  • The tests were carried out on the tag as supplied by the CMPs, not on integrations via Tag Managers. They therefore reflect performance with a standard installation, as recommended by the publishers. The scores therefore do not reflect the maximum potential of each tool, particularly in the JavaScript component, where all tools can be loaded asynchronously.
  • Tests were carried out under Google Chrome 113 with a standard configuration. This choice is justified by the browser’s popularity against its competitors Firefox, Safari and Edge.
  • The tests were carried out from a French IP address, enabling the CMPs to geolocate us in France, thus triggering the corresponding features and applying the French translations.

The 12 checkpoints

Each checkpoint is associated with a percentage. For each checkpoint, we have taken care to share the scale used to calculate the scores for the various tools. This ensures maximum transparency as to how the final scores are calculated.

Criteria weights out of 100
Adding up the 12 scores gives a score out of 100.

The list is sorted in descending order, from the most impacting criterion (a quarter of the score) to the least impacting (just 2%).
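
As an illustration, the aggregation described above can be sketched in a few lines of Python. The weights below are hypothetical placeholders chosen only to match the stated bounds (25% for the heaviest criterion, 2% for the lightest), not the actual distribution shown in the chart.

```python
# Sketch of the weighted aggregation described above. The weights are
# hypothetical placeholders (only the 25% max and 2% min come from the
# article), not the actual distribution used in the comparison.
WEIGHTS = {
    "async_js": 25, "sync_js": 15, "css": 10, "cdn": 8,
    "cache_policy": 8, "dom_volume": 8, "http_protocols": 6,
    "xhr_exchanges": 6, "dom_insertion": 5, "images": 4,
    "distinct_domains": 3, "compression": 2,
}
assert sum(WEIGHTS.values()) == 100  # the 12 weights add up to 100

def overall_score(ratios):
    """ratios maps each checkpoint to a score between 0 and 1."""
    return sum(WEIGHTS[name] * ratios.get(name, 0.0) for name in WEIGHTS)

# A tool scoring perfectly on every checkpoint reaches 100:
print(overall_score({name: 1.0 for name in WEIGHTS}))  # 100.0
```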

Asynchronous JavaScript volume


The volume of JavaScript executed by a CMP is one of the key factors in its performance. As asynchronous loading was the norm in the panel tested at the end of May 2023, it was given higher priority than synchronous loading, even though the latter obviously degrades performance more.

As this weight increases, two bottlenecks come into play:

  • On the network side: the resource takes longer to download.
  • On the user’s device: script execution takes longer and consumes more CPU.

Until download and execution have taken place, the CMP cannot be displayed. We have evaluated the file weight in kilobytes with the original compression, Gzip or Brotli.

Incidentally, no CMP recommends the use of defer, which would offer an even higher level of performance than async (see this article (fr), which details the differences). Scripts would no longer be able to block page rendering, significantly improving overall performance.

Scale: 0 / <= 20 / <= 40 / <= 60 / <= 80 / <= 100 / <= 120 / <= 140 / <= 160 / <= 180 / > 180
Checkpoint weight as a function of data volume expressed in kilobytes (compressed)

Synchronous JavaScript volume


The same logic that applies to the main script also applies to any JavaScript called synchronously (via a <script> tag without async or defer attributes), with the difference that the latter blocks rendering on download and execution. Their impact on loading time performance metrics (FCP, LCP, Speed Index…) and on Blocking Time is therefore far greater.

Fortunately, very few CMPs provide synchronous code. On the other hand, one of them requires the synchronous code supplied to be integrated at the very top of the <head>, which is the most penalizing scenario for loading times.

Scale: 0 / <= 10 / <= 20 / <= 30 / <= 40 / <= 50 / <= 60 / <= 70 / <= 80 / <= 90 / > 90
Checkpoint weight as a function of data volume expressed in kilobytes (compressed)

CSS weight


All the CMPs tested adopt the same approach when it comes to formatting: CSS is injected via JavaScript into an inline <style> tag. This is indeed the most appropriate solution, and the use of external stylesheets or style="…" attributes on DOM elements would have resulted in a lower score.

We have therefore only used the criterion of weight, comparing the volume of CSS generated by each CMP. We could have gone further by evaluating the number of CSS selectors and their complexity, but here again there seems to be a form of consensus: CSS selectors are generally found in the form #identifier-cmp .classe-cmp.

Scale: 0 / <= 10 / <= 20 / <= 30 / <= 40 / <= 50 / <= 60 / <= 70 / <= 80 / <= 90 / > 90
Checkpoint weight as a function of data volume expressed in kilobytes (uncompressed)

Use of a CDN


For a script as critical as a CMP, inevitably hosted on a third-party domain, the use of a CDN makes perfect sense. With rare exceptions, CMPs comply with this requirement, relying on the network infrastructures of players such as Akamai, Cloudflare, CloudFront, Google or BunnyCDN.

Loading scripts from a simple web server running NGINX or Apache is highly penalizing for this factor. Performance will fluctuate much more depending on the geolocation of users, degrading the quality of their browsing experience if they connect to the server at a great distance.

Checkpoint weight according to presence or absence of the criterion (pro-rata)

Browser cache policy


A long browser cache lifetime is a relevant lever for improving CMP performance. If the cache lifetime is too short, users will have to re-download the script regularly, adding connection delays (DNS resolution / TCP connection / TLS negotiation) and download latencies.

A good half of the players in this comparison have opted for a Cache-Control HTTP header of 3600 seconds, in other words one hour. Others are limited to a few tens of minutes, while the most daring go as far as a week. Naturally, it’s the latter who get the highest scores on this aspect.

Scale: >= 2624400 / >= 874800 / >= 291600 / >= 97200 / >= 32400 / >= 10800 / >= 3600 / >= 2700 / >= 1800 / >= 900
Checkpoint weight as a function of browser cache lifetime in seconds
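
To make the criterion concrete, here is a minimal Python sketch that extracts the max-age directive from a Cache-Control header, the value this checkpoint scores:

```python
import re

def max_age_seconds(cache_control):
    """Return the max-age directive of a Cache-Control header in seconds, or None."""
    match = re.search(r"max-age=(\d+)", cache_control)
    return int(match.group(1)) if match else None

# One hour, the value chosen by about half of the CMPs tested:
print(max_age_seconds("public, max-age=3600, immutable"))  # 3600
# No cacheability at all, the worst case for this checkpoint:
print(max_age_seconds("no-store"))  # None
```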

DOM volume


Injecting DOM into a page via JavaScript is never trivial in terms of performance. The larger and deeper the DOM manipulated, the poorer the performance. We therefore counted the number of DOM nodes present in the initial interface of each CMP. This number can increase with the opening of additional panes or windows.

The result is generally consistent with expectations: the panel ranges from 40 to 200 nodes. To take things a step further, we could have analyzed how the DOM is injected via JavaScript, but this choice was not made due to its complexity.

Scale: <= 30 / <= 40 / <= 50 / <= 60 / <= 70 / <= 80 / <= 90 / <= 100 / <= 110 / <= 120 / > 120
Checkpoint weight as a function of the number of DOM nodes injected
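
The counting itself is straightforward. As a rough sketch, Python’s standard html.parser module can tally element nodes in a markup fragment (the banner below is invented for illustration, not taken from any of the tools tested):

```python
from html.parser import HTMLParser

class NodeCounter(HTMLParser):
    """Counts element nodes in an HTML fragment (a rough proxy for DOM size)."""
    def __init__(self):
        super().__init__()
        self.count = 0

    def handle_starttag(self, tag, attrs):
        self.count += 1

def count_nodes(html):
    counter = NodeCounter()
    counter.feed(html)
    return counter.count

# Hypothetical, minimal consent banner markup:
banner = "<div id='cmp'><p>We use cookies</p><button>OK</button><button>Deny</button></div>"
print(count_nodes(banner))  # 4
```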

HTTP Protocols


Using modern HTTP protocols ensures that you can take full advantage of improvements in HTTP header compression and parallelization of multiple downloads. For the purposes of this comparison, HTTP/2 is the norm, with a bonus for resources served via HTTP/3 and, conversely, a penalty for those sent via HTTP/1.1.

Checkpoint weight based on protocol use (pro-rata)

JSON and XHR exchanges


By their very nature, CMPs are designed to exchange data between the user’s browser and the tool’s servers. As these exchanges can vary greatly depending on interactions with the interface, we have chosen to measure data exchanges occurring on initial loading, before any validation or denial of consent.

While most players are comfortable with just a few kilobytes, some download tens of kilobytes of translation or configuration files, which then have to be interpreted via JavaScript to display the localized French interface. This penalizes overall performance.

Scale: 0 / <= 2.5 / <= 5 / <= 7.5 / <= 10 / <= 12.5 / <= 15 / <= 17.5 / <= 20 / <= 22.5 / > 22.5
Checkpoint weight as a function of data volume expressed in kilobytes (compressed)
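
The volume at stake is easy to reason about. A hedged Python sketch with a made-up translation bundle of the kind some CMPs fetch at load time (keys and text are invented for illustration):

```python
import gzip
import json

# Hypothetical translation bundle fetched as JSON at initial load:
translations = {f"consent.purpose.{i}.label": "Texte de consentement" for i in range(300)}
raw = json.dumps(translations).encode("utf-8")
compressed = gzip.compress(raw)

# A few hundred entries already weigh well over 10 kB uncompressed:
print(f"raw: {len(raw) / 1024:.1f} kB, gzip: {len(compressed) / 1024:.1f} kB")
```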

DOM insertion method


In the absence of an analysis of how the DOM is injected into pages, we opted for a simpler criterion with an equally significant impact: the use of Shadow DOM, also known as DOM encapsulation, vs. a more traditional injection into the document DOM.

This modern method, used by a handful of CMPs, ensures a much higher level of performance. As a result, it has a significant impact on the scores for this component.

Shadow DOM / Classic
Checkpoint weight according to the method used

Image weight


Few CMPs display images in the default interface, which is an excellent point. For those that do, we’ve downgraded the score by taking into account their total weight, and therefore indirectly their number.

It’s worth noting, however, that for most image-based CMPs, SVG is the predominant format. Vector-based and compressible, SVG is the most appropriate choice compared with JPEG, PNG and, even more so, GIF.

Scale: 0 / <= 10 / <= 20 / <= 30 / <= 40 / <= 50 / <= 60 / <= 70 / <= 80 / <= 90 / > 90
Checkpoint weight as a function of data volume expressed in kilobytes (compressed)

Number of distinct domains


Each connection to a new domain generates additional latency, since it is necessary to go through the DNS resolution, initial connection and TLS negotiation phases. This translates into between 500 milliseconds and one second on a mobile connection. Loading scripts from different domains therefore mechanically delays the display of the CMP.

CMPs that have adopted a centralized approach around a single domain accordingly obtain the maximum score. Those using two, three or four domains see their score negatively impacted. The average across the 11 tools tested was 2.5 domains.

Scale: 1 / 2 / 3 / 4 / 5 / >= 6
Checkpoint weight as a function of the number of distinct domains called up
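
Counting this factor is trivial once you have the list of resource URLs requested by the CMP. A minimal Python sketch (the URLs below are invented for illustration):

```python
from urllib.parse import urlparse

def distinct_domains(urls):
    """Number of unique hostnames a set of resource URLs connects to."""
    return len({urlparse(url).hostname for url in urls})

# Hypothetical resource URLs of a CMP spread over two domains:
resources = [
    "https://cdn.example-cmp.com/loader.js",
    "https://cdn.example-cmp.com/banner.js",
    "https://api.example-cmp.com/consent/config.json",
]
print(distinct_domains(resources))  # 2
```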

Compression algorithms


The difference in weight between a Gzip-compressed file and a Brotli-compressed file can be as much as 25%, which has a direct impact on download times. Although the two most important checkpoints already include the impact of compression (weight of asynchronous and synchronous JavaScript), we have chosen to include this choice as an additional criterion.

Since most CMPs use Gzip, we thought it logical to highlight the players who have deployed the more modern, higher-performance Brotli algorithm.

Checkpoint weight based on algorithm use (pro-rata)
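
The gain is easy to observe on any repetitive text asset. A minimal Python sketch using the standard gzip module (Brotli typically shaves a further chunk off the gzip size, but requires the third-party brotli package, so only gzip is shown here; the payload is invented for illustration):

```python
import gzip

# A deliberately repetitive JS-like payload, invented for illustration:
payload = ("function showBanner(){document.body.appendChild(banner);}\n" * 200).encode("utf-8")
compressed = gzip.compress(payload, compresslevel=9)

# Repetitive text compresses extremely well, hence the impact on download time:
print(f"raw: {len(payload)} B, gzip: {len(compressed)} B "
      f"({len(compressed) / len(payload):.1%} of original)")
```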

Factors not retained

While 12 checkpoints were selected, we considered integrating others to obtain even more exhaustive performance scores. Here are those that seemed interesting at first glance, but were not retained for lack of relevance.


Font loading

We expected to come across at least one poor performer using fonts hosted on a remote server, but this was not the case. All the CMPs in this comparison display their texts using CSS inheritance mechanisms (the ideal solution), or with “safe fonts” available in all browsers, such as Arial or Helvetica.

Even if this renders obsolete a checkpoint with a potentially major impact, it’s excellent news for the performance of the tools being compared.

Repaint/reflows at interaction

We also expected to observe undesired repaint and/or reflow behavior when elements such as buttons, toggles, etc. were hovered over or clicked. This was not the case on any of the CMPs tested: all take care to inject their DOM as a direct child of body, eliminating any risk of style recalculation that would go up the DOM tree.

This is also an excellent point, as these problems are very common and can have a negative impact on page interactivity, especially in terms of the INP metric.

How were the compared CMPs chosen?

We have decided to include 11 tools in our comparison. To compile this list, we used several complementary sources:

  • The tools used by our customers, whose wide-ranging profile logically reflects the variety of what can be found more widely on French sites.
  • The Core Web Vitals Technology Report published by HTTP Archive, reflecting the variety of tools implemented on hundreds of thousands of sites worldwide.
  • Multiple comparisons of CMPs to identify those considered to be essential.

If a major player is missing, don’t hesitate to let us know in the comments, so that we can incorporate it later as part of an update.

Which are the best-performing CMPs (results)?

Performance scores by CMP
Aggregated scores from the 12 checkpoints

We now move on to a detailed analysis of the CMP rankings based on our 12-criteria scoring grid. The list is sorted from the best to the worst performers.

Osano, the great winner


The Cookie Consent tool developed by Osano is clearly the most surprising entry in this CMP performance comparison. Its top score is not due to a particularly performance-oriented approach, but rather to its all-round technical mastery: unlike its competitors, Osano makes no costly missteps on any checkpoint.

Combined with a number of clever choices that are still too few and far between, such as a high browser cache lifetime (one day) and the consistent use of Brotli compression, this is enough to make Osano stand out from the crowd. Finally, it’s worth noting that Osano doesn’t take advantage of the Shadow DOM to inject its CMP, a cutting-edge technique used by only two of its competitors in this comparison.

CookieYes, Didomi, Iubenda and Quantcast Choice, the challengers


Several CMPs share the second step of the performance podium, with very similar scores that wouldn’t justify an individual ranking: a reduction of just a few kilobytes on the JavaScript side would be enough to make their positions change. This quartet is nonetheless interesting insofar as it offers a wider choice for equivalent levels of performance.

The four tools demonstrate a high level of technical maturity on most subjects, but generally fall short on a critical point that costs them crucial points. CookieYes misses the mark when it comes to DOM volume, injecting almost 140 nodes when competitors are averaging 84.

On Didomi’s side, it’s the volume of CSS that is the biggest concern, with a <style> tag that incorporates 80 kB of CSS (uncompressed). Quantcast Choice multiplies JSON data exchanges (83 kB). Iubenda, finally, displays a more multi-factorial profile, with an accumulation of small point losses across several key factors.

In any case, all of them can be considered with peace of mind: the installation of their CMP will not be synonymous with a major degradation in the performance of your pages.

Axeptio, One Trust and UserCentrics, the outsiders


A second group of CMPs stands out in the ranking, again with very similar scores, making them interchangeable for the purposes of this comparison. The profile of these players is broadly similar, but their technical mastery of front-end issues is uneven. Axeptio and UserCentrics, for example, are the only two CMPs in the ranking to take advantage of the Shadow DOM. And yet, they make unforgivable mistakes in other key areas.

Axeptio, for example, loads three images, for a total weight of 51.6 kB. Meanwhile, One Trust and UserCentrics load a substantial volume of asynchronous JavaScript: 124.1 kB for the former and 206 kB for the latter. This raises legitimate questions about the content of these files.

Cookie Law, Cookiebot and Sirdata, the losers


As with any ranking, there will be winners and losers, and the latter group includes three players whose tools revealed technical weaknesses. As their scores were once again fairly close, it seemed obvious to us that they should be grouped together.

Cookie Law is penalized by a high DOM volume (208 nodes) and multiple data exchanges (42.4 kB). Cookiebot provides 34 kB of synchronous integration code, injects 209 nodes into the DOM and sends a cache-control header lasting just 11 minutes (the shortest in the ranking).

Finally, Sirdata provides a partially blocking integration code and, even more importantly, only partially uses HTTP/2: it is the only player in the ranking to load 146 kB of resources from a domain not served by a CDN and still using the old HTTP/1.1 protocol. An unforgivable failing that weighs heavily on its final score.

Ranking conclusion

This comparison of CMP web performance is not intended to push you towards a particular player without further discussion, nor to point the finger at any potential “bad apples”. Our hope is that it will help you in your decision-making process, in addition to other criteria specific to your organization.

But our greatest hope is to see the players mentioned react by taking ownership of the performance issues presented and upgrading their tools. As CMPs and cookie banners by their very nature impact web performance and user experience, every effort made by publishers will benefit the entire web.

We’d be delighted to hear from you, and answer any questions you may have: feel free to post a comment below, or send us an email via the contact form. We’ll be sure to get back to you with our expertise as front-end web performance specialists.

1 reaction on « CMP / Cookie Banner and web performance: comparison of 11 tools »

  1. Year-end update based on the Agency’s findings over the last few months, using RUM data on a large sample of sites. What’s interesting is that the CMPs are different from those initially analyzed.

    CMPs are ranked by INP at the 75th percentile, from least to most impactful:

    🥇 UserCentrics (26 ms)
    🥈 CookiePro (49 ms)
    🥉 CookieLaw (58 ms)
    🟩 CookieYes (60 ms)
    🟩 (171 ms)
    🟩 Funding Choices (200 ms)
    🟧 CookieBot (241 ms)

    CookieLaw is in a good position here, which was far from the case in our original analysis. This can be explained in a number of ways, the most obvious being that CMPs regularly update their scripts, particularly in view of the inclusion of INP as an official metric within Core Web Vitals.

