User-Centric Focus of Metrics
In 2020, Google released Lighthouse 6.0, an updated version of its automated auditing tool, which measures the quality of web pages based on how users experience page loading. This approach is officially called User-Centric Performance Metrics.
Loading Speed vs. User Perception: The Difference

Prior to 2021, the primary metric for measuring loading speed was the "load" event, which fired the moment all page resources finished loading. While useful, this metric said nothing about what the user actually saw and experienced during loading.
What's the issue? For clarity, consider two websites loaded simultaneously. Both take the same time to load completely, yet users perceive the second site as faster. Why? Because during loading, users could already see content on the second site, while the first provided no visual feedback.
Key elements of website loading, as defined by Google:
- How quickly the user sees the initial loading begin;
- How fast content becomes visible and readable;
- The time between loading start and when the user can interact with the page;
- Whether layout shifts occur, how large they are, and if they cause accidental clicks.
Google's Performance Metrics

Google developed a comprehensive system of metrics based on these key loading elements:
- TTFB (Time to First Byte): Measures the time from request initiation until the browser receives the first byte of the page.
- FCP (First Contentful Paint): Records the time from when loading starts to when any content appears on the screen.
- LCP (Largest Contentful Paint): Measures the time from when loading starts to when the largest visible element (image or text block) is fully rendered.
- CLS (Cumulative Layout Shift): Quantifies unexpected layout shifts during and after loading, providing a cumulative score.
- FID (First Input Delay): Measures the time between a user's first interaction (click, tap) and the moment the browser is actually able to begin processing that interaction.
- TBT (Total Blocking Time): Calculates the total time between FCP and TTI when the main thread is blocked by long tasks, affecting interactivity.
- TTI (Time to Interactive): Measures how long it takes for the page to become fully interactive.
- Speed Index: A visual metric that measures how quickly content is visually displayed during loading.
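To make the TBT definition above concrete, here is a minimal sketch of the calculation. The 50 ms "long task" threshold is Google's published definition; the task list and the FCP/TTI timestamps are hypothetical values for illustration:

```typescript
// Total Blocking Time: for each main-thread task longer than 50 ms that
// runs between FCP and TTI, the portion beyond 50 ms counts as "blocking".
interface Task {
  start: number;    // ms since navigation start (hypothetical)
  duration: number; // ms
}

function totalBlockingTime(tasks: Task[], fcp: number, tti: number): number {
  return tasks
    .filter((t) => t.start >= fcp && t.start + t.duration <= tti)
    .reduce((sum, t) => sum + Math.max(0, t.duration - 50), 0);
}

// Hypothetical main-thread tasks between FCP (1000 ms) and TTI (5000 ms):
const sampleTasks: Task[] = [
  { start: 1200, duration: 30 },  // under 50 ms, contributes nothing
  { start: 2000, duration: 120 }, // contributes 120 - 50 = 70 ms
  { start: 3500, duration: 200 }, // contributes 200 - 50 = 150 ms
];

console.log(totalBlockingTime(sampleTasks, 1000, 5000)); // 220
```

In a real browser the task list would come from a `PerformanceObserver` watching `longtask` entries; the sketch only shows the arithmetic.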
Among these, three metrics are used in Google's ranking algorithm and are grouped under Core Web Vitals:
- LCP (Largest Contentful Paint)
- FID (First Input Delay)
- CLS (Cumulative Layout Shift)
However, loading as a process should be evaluated holistically. Optimizing only Web Vitals is not sufficient.
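Each Core Web Vital is scored against Google's published "good" / "poor" thresholds. The sketch below encodes those thresholds as they stood while FID was a Core Web Vital (LCP and FID in milliseconds, CLS unitless); the function and variable names are illustrative:

```typescript
// Google's published thresholds: values at or below the first number rate
// "good", values above the second rate "poor", anything between is
// "needs-improvement".
type Rating = "good" | "needs-improvement" | "poor";
type Vital = "LCP" | "FID" | "CLS";

const THRESHOLDS: Record<Vital, [number, number]> = {
  LCP: [2500, 4000], // ms
  FID: [100, 300],   // ms
  CLS: [0.1, 0.25],  // unitless layout-shift score
};

function rateVital(metric: Vital, value: number): Rating {
  const [good, poor] = THRESHOLDS[metric];
  if (value <= good) return "good";
  if (value <= poor) return "needs-improvement";
  return "poor";
}

console.log(rateVital("LCP", 2100)); // "good"
console.log(rateVital("CLS", 0.3));  // "poor"
```

This is the same bucketing PageSpeed Insights displays as green / amber / red.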
Website Loading: How to Measure Data Correctly

The simplest and most effective way to measure performance is using Google's Lighthouse or PageSpeed Insights reports.
Important! High scores don't necessarily mean fast loading in real-world conditions, and poor scores can coexist with good user-perceived speed. This is because, for ranking, Google relies on real user data (field data) rather than lab test scores.
Consider two examples:
Example 1: A website with excellent mobile performance scores but focused on legacy PC games. Most users access it from older PCs with outdated browsers. Despite high mobile scores, real-world desktop performance is poor, leading to low Google field metrics.
Example 2: A website with subpar mobile performance and low loading speed scores, but primarily used by desktop users who experience fast response times. Google assigns it above-average field performance ratings.
These examples show the difference between Lab (theoretical) and Field (real-world) data.
Lab Data – Theoretical (Controlled Environment) Metrics
Sources like Lighthouse, PageSpeed Insights, and GTmetrix generate results from tests of a single URL at a time. Advantages:
- Allow tracking of individual URLs;
- Useful for measuring performance before and after optimization on test environments;
- Provide actionable recommendations for improvement.
Disadvantages:
- Cannot track page groups or provide comprehensive site-wide statistics;
- Don't reflect real user experiences accurately (though this can be mitigated with field data integration);
- Sometimes provide irrelevant recommendations that don't impact real performance.
Field Data – Real-World (User Experience) Metrics

Field Data reflects actual user experiences with the highest accuracy. However, it cannot analyze individual URLs or page groups in the same detail as lab tools, and Google's aggregated dataset (the Chrome UX Report) is updated monthly, on the second Tuesday.
Alternatively, field data can be collected directly in the browser via performance APIs (for example, with the web-vitals JavaScript library) and sent to Google Analytics or another endpoint as Web Vitals events. Collected this way, the data supports analysis of individual URLs, page groups, and overall site performance; it can be viewed daily, weekly, or monthly, and makes it possible to isolate both positive and negative user experiences.
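As a sketch of what such an event might look like, the function below turns a Web Vitals metric object into an analytics payload. The `name`/`value`/`id` fields mirror what the web-vitals library passes to its callbacks; the event name, endpoint, and extra fields are illustrative assumptions:

```typescript
// Sketch: build an analytics event body from a Web Vitals metric object.
// In the browser, the web-vitals library's onLCP/onCLS/onFID callbacks
// supply objects shaped like this; everything else here is illustrative.
interface VitalsMetric {
  name: string;  // e.g. "LCP", "CLS", "FID"
  value: number; // ms, or a unitless score for CLS
  id: string;    // unique id for this page load
}

function buildVitalsEvent(metric: VitalsMetric, page: string): string {
  return JSON.stringify({
    event: "web_vitals",           // hypothetical event name
    metric: metric.name,
    // CLS is a small decimal; scale it so all values are integers.
    value: Math.round(metric.name === "CLS" ? metric.value * 1000 : metric.value),
    id: metric.id,
    page,
  });
}

// In the browser this body would go to navigator.sendBeacon or a GA event;
// here we only build it:
console.log(buildVitalsEvent({ name: "CLS", value: 0.08, id: "v1-abc" }, "/home"));
```

Batching or beaconing the events on page unload is the usual delivery choice, since metrics like CLS only finalize late in the page lifecycle.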
Drawbacks:
- Requires a developer, or setup through Google Tag Manager, to implement;
- Generates a large volume of raw data that offers little immediate value without further analysis.
Effectiveness of a Combined Approach
Since neither measurement method is perfect, combining data sources yields the best results:
- Use Lab data for development and optimization testing;
- Use Field data to track real user performance;
- For large-scale projects (regional, device-specific, or time-based analysis), implement Web Vitals events.
Experts agree that during development and even at the planning stage, the focus should be on the user experience, not just scores from testing tools. This user-centric approach is exactly what Google's search algorithm rewards.
Summary:
- Google began using Core Web Vitals as ranking factors in June 2021, with the rollout of the Page Experience update.
- These metrics prioritize user perception over raw loading speed.
- Two data collection methods exist: theoretical (lab) and practical (field), representing users accessing the site from various devices. Combining both methods delivers optimal results.
- Perform optimizations with Google's user-centric approach in mind.