_posts/2024-11-26-designing-and-evolving-a-new-performance-score.md
8 additions & 2 deletions
@@ -114,6 +114,8 @@ while the other doesn’t, yet their values are near identical!
 </tbody>
 </table>

+{% include cross-sell.html %}
+
 I wanted to make sure that any score I designed was sympathetic to both
 scenarios.
@@ -200,7 +202,7 @@ absurd! INP is measured in **hundreds of milliseconds**, LCP is measured in
 inordinate weighting to INP.

 <figure>
-  <img src="{{ site.cloudinary }}/wp-content/uploads/2024/11/new-metric-01.png" alt="Google Sheets screenshot showing three domains whose Core Web Vitals scores have been summed, leading to completely inappropriate scoring outcomes." width="1500" height="194" loading="lazy">
+  <img src="{{ site.cloudinary }}/wp-content/uploads/2024/11/new-metric-01.png" alt="A spreadsheet comparing three fictional websites—www.foo.com, www.bar.com, and www.baz.com—using various performance metrics: LCP (Largest Contentful Paint), INP (Interaction to Next Paint), and CLS (Cumulative Layout Shift). The table includes two scoring columns: ‘Ordinal Score’ (higher is better) and ‘New Metric’ (higher is worse), with colour-coded highlights (green and red) to indicate performance levels. The metrics have been summed, leading to completely inappropriate scoring outcomes." width="1500" height="194" loading="lazy">
 <figcaption>A naive summing approach awards the lowest score to our highest
 performer and the highest score to our middlemost. This is completely
 useless.</figcaption>
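The hunk above is about naive summing of mixed-unit metrics. A minimal sketch of why that fails, using purely invented figures (none of these numbers come from the post's spreadsheet screenshots):

```python
# Hypothetical sites and metric values, for illustration only.
sites = {
    "www.foo.com": {"lcp_s": 2.4, "inp_ms": 200, "cls": 0.10},
    "www.bar.com": {"lcp_s": 4.0, "inp_ms": 500, "cls": 0.25},
    "www.baz.com": {"lcp_s": 6.1, "inp_ms": 120, "cls": 0.05},
}

# Summing mixed-unit metrics: INP, measured in hundreds of milliseconds,
# swamps LCP (seconds) and CLS (unitless), so the "score" is effectively
# just INP with a small amount of noise added.
naive = {site: m["lcp_s"] + m["inp_ms"] + m["cls"] for site, m in sites.items()}
```

Whatever the other metrics do, the ranking produced by `naive` is the INP ranking, which is the "inordinate weighting to INP" the diff describes.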
@@ -217,7 +219,7 @@ why don’t we try normalising them?
 Let’s convert our INP into seconds:

 <figure>
-  <img src="{{ site.cloudinary }}/wp-content/uploads/2024/11/new-metric-02.png" alt="Google Sheets screenshot showing similar summing as before, only this time with quasi-normalised inputs." width="1500" height="194" loading="lazy">
+  <img src="{{ site.cloudinary }}/wp-content/uploads/2024/11/new-metric-02.png" alt="Google Sheets screenshot showing similar summing as before, only this time with quasi-normalised inputs leading to marginally better outcomes." width="1500" height="194" loading="lazy">
 <figcaption>This is marginally better—we’re now attributing the best to the
 best, but we’re now awarding the worst to the middle.</figcaption>
 </figure>
@@ -238,6 +240,8 @@ quite narrow (i.e. we’re unlikely to compare a 1.5s LCP to a 1500s LCP), we ca
 probably use the simplest: rescaling, or [**min-max
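The final hunk is cut off mid-sentence, but it names the technique: rescaling, or min-max normalisation. A minimal sketch of that idea, again with made-up LCP values rather than anything from the post:

```python
def min_max(value, lo, hi):
    """Rescale a value into [0, 1] given the observed minimum and maximum."""
    return (value - lo) / (hi - lo)

# Hypothetical LCP values in seconds, for illustration only.
lcps = [2.4, 4.0, 6.1]
scaled = [min_max(v, min(lcps), max(lcps)) for v in lcps]
# The best observed LCP maps to 0.0 and the worst to 1.0, so metrics
# measured in wildly different units all land on the same comparable scale.
```

Applying the same transform to INP and CLS would put all three metrics on a common 0–1 scale before any summing or weighting.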