Commit f962257

Add cross sells to post
1 parent 4ce89bb commit f962257

1 file changed: 8 additions & 2 deletions

File tree:

_posts/2024-11-26-designing-and-evolving-a-new-performance-score.md
@@ -114,6 +114,8 @@ while the other doesn’t, yet their values are near identical!
   </tbody>
 </table>
 
+{% include cross-sell.html %}
+
 I wanted to make sure that any score I designed was sympathetic to both
 scenarios.
 
@@ -200,7 +202,7 @@ absurd! INP is measured in **hundreds of milliseconds**, LCP is measured in
 inordinate weighting to INP.
 
 <figure>
-  <img src="{{ site.cloudinary }}/wp-content/uploads/2024/11/new-metric-01.png" alt="Google Sheets screenshot showing three domains whose Core Web Vitals scores have been summed, leading to completely inappropriate scoring outcomes." width="1500" height="194" loading="lazy">
+  <img src="{{ site.cloudinary }}/wp-content/uploads/2024/11/new-metric-01.png" alt="A spreadsheet comparing three fictional websites—www.foo.com, www.bar.com, and www.baz.com—using various performance metrics: LCP (Largest Contentful Paint), INP (Interaction to Next Paint), and CLS (Cumulative Layout Shift). The table includes two scoring columns: ‘Ordinal Score’ (higher is better) and ‘New Metric’ (higher is worse), with colour-coded highlights (green and red) to indicate performance levels. The metrics have been summed, leading to completely inappropriate scoring outcomes." width="1500" height="194" loading="lazy">
 <figcaption>A naive summing approach awards the lowest score to our highest
 performer and the highest score to our middlemost. This is completely
 useless.</figcaption>
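The units mismatch this hunk describes is easy to demonstrate: summing raw Core Web Vitals lets INP’s millisecond scale swamp the other metrics. A minimal sketch, using illustrative figures rather than values from the post’s spreadsheets:

```python
# Illustrative raw Core Web Vitals for one hypothetical site -- not real data.
lcp = 2.5    # Largest Contentful Paint, in seconds
inp = 200    # Interaction to Next Paint, in milliseconds
cls = 0.1    # Cumulative Layout Shift, unitless

# Naively summing mixed units gives INP near-total control of the score.
naive_score = lcp + inp + cls
inp_share = inp / naive_score

print(f"{naive_score:.1f}")      # INP dominates purely because of its units
print(f"{inp_share:.0%} of the score comes from INP alone")
```

This is why the post moves on to normalising the inputs before combining them.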
@@ -217,7 +219,7 @@ why don’t we try normalising them?
 Let’s convert our INP into seconds:
 
 <figure>
-  <img src="{{ site.cloudinary }}/wp-content/uploads/2024/11/new-metric-02.png" alt="Google Sheets screenshot showing similar summing as before, only this time with quasi-normalised inputs." width="1500" height="194" loading="lazy">
+  <img src="{{ site.cloudinary }}/wp-content/uploads/2024/11/new-metric-02.png" alt="Google Sheets screenshot showing similar summing as before, only this time with quasi-normalised inputs leading to marginally better outcomes." width="1500" height="194" loading="lazy">
 <figcaption>This is marginally better—we’re now attributing the best to the
 best, but we’re now awarding the worst to the middle.</figcaption>
 </figure>
@@ -238,6 +240,8 @@ quite narrow (i.e. we’re unlikely to compare a 1.5s LCP to a 1500s LCP), we can
 probably use the simplest: rescaling, or [**min-max
 normalisation**](https://en.wikipedia.org/wiki/Feature_scaling#Rescaling_(min-max_normalization)).
 
+{% include cross-sell.html %}
+
 Min-max normalisation takes a range of data points and plots them in the correct
 relative positions on a simple 0–1 scale. It doesn’t distribute them evenly—it
 distributes them accurately.
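The min-max normalisation this hunk describes can be sketched in a few lines of Python. The function is a minimal illustration of the technique, and the metric values below are hypothetical, not taken from the post’s spreadsheets:

```python
def min_max_normalise(values):
    """Rescale values onto a 0-1 scale: the minimum maps to 0, the
    maximum to 1, and everything else lands at its correct relative
    position in between -- accurately, not evenly, distributed."""
    lo, hi = min(values), max(values)
    if lo == hi:
        # No spread to rescale; every value sits at the same point.
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

# Illustrative INP values (ms) for three sites -- not real data.
print(min_max_normalise([100, 250, 400]))  # → [0.0, 0.5, 1.0]
```

Because every metric now lives on the same 0–1 scale, summing (or weighting) them no longer lets one metric’s units dominate the others.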
@@ -356,6 +360,8 @@ resilience:
 this place.</figcaption>
 </figure>
 
+{% include cross-sell.html %}
+
 The ordinal score correctly counts up passingness, and the New Score,
 separately, gives us an accurate reflection of each site’s standing in the
 cohort. While this looks like a much better summary of the sites in question,
