A 14kb page can load much faster than a 15kb page (2022)

(endtimes.dev)

Comments

susam 19 July 2025
I just checked my home page [1] and it has a compressed transfer size of 7.0 kB.

  /            2.7 kB
  main.css     2.5 kB
  favicon.png  1.8 kB
  -------------------
  Total        7.0 kB
Not bad, I think! I generate the blog listing on the home page (as well as the rest of my website) with my own static site generator, written in Common Lisp [2]. On a limited number of mathematical posts [3], I use KaTeX with client-side rendering. On such pages, KaTeX adds a whopping 347.5 kB!

  katex.min.css              23.6 kB
  katex.min.js              277.0 kB
  auto-render.min.js          3.7 kB
  KaTeX_Main-Regular.woff2   26.5 kB
  KaTeX_Main-Italic.woff2    16.7 kB
  ----------------------------------
  Total Additional          347.5 kB
Perhaps I should consider KaTeX server-side rendering someday! This has been a little passion project of mine since my university dorm room days. All of the HTML content, the common HTML template (for a consistent layout across pages), and the CSS are entirely handwritten. Also, I tend to be conservative about what I include on each page, which helps keep them small.

[1] https://susam.net/

[2] https://github.com/susam/susam.net/blob/main/site.lisp

[3] https://susam.net/tag/mathematics.html

crawshaw 19 July 2025
If you want to have fun with this: the initial window (IW) is determined by the sender. So you can configure your server to the right number of packets for your website. It would look something like:

    ip route change default via <gw> dev <if> initcwnd 20 initrwnd 20
A web search suggests CDNs now start at 30 packets for the initial window, so you get about 45 kB there.
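Sanity-checking those numbers, assuming a typical 1460-byte MSS (a 1500-byte Ethernet MTU minus 40 bytes of IP/TCP headers) — the exact figure varies with the path MTU:

```python
# First-window capacity for a given initial congestion window (initcwnd),
# assuming a typical 1460-byte MSS. Illustrative only.
MSS = 1460

def first_window_bytes(initcwnd: int) -> int:
    """Bytes a sender can push before waiting for the first ACKs."""
    return initcwnd * MSS

print(first_window_bytes(10))  # 14600 -> the classic ~14 kB figure
print(first_window_bytes(30))  # 43800 -> roughly the ~45 kB quoted for CDNs
```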
tonymet 19 July 2025
Software developers should be more aware of the media layer. I appreciate the author’s post about 3G/5G reliability and latency. Radio almost always retries, and with most HTTP your packets need to arrive in order.

A single REST request is only truly a single packet if the request and response are both < 1400 bytes. Any more than that and your “single” request is now multiple packets in each direction. Any one of them may need a retry, and they all need to arrive in order for the UI to update.

For practical experiments, try Chrome DevTools in 3G mode with some packet loss; you can see even “small” optimizations improving UI responsiveness dramatically.

This is one of the most compelling reasons to make APIs and UIs as small as possible.
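Back-of-the-envelope for that 1400-byte figure (the exact payload per packet depends on MTU and header overhead, so treat these numbers as illustrative):

```python
import math

# Approximate usable payload per packet: a 1500-byte MTU minus roughly
# 100 bytes of IP/TCP/TLS framing, as the figure above assumes.
PAYLOAD_PER_PACKET = 1400

def packets_needed(size_bytes: int) -> int:
    """How many packets a message of this size spreads across."""
    return math.ceil(size_bytes / PAYLOAD_PER_PACKET)

print(packets_needed(1200))    # 1  -> truly a single-packet exchange
print(packets_needed(50_000))  # 36 -> 36 chances for loss or reordering
```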

GavinAnderegg 19 July 2025
14kB is a stretch goal, though trying to stick to the first 10 packets is a cool idea. A project I like that focuses on page size is 512kb.club [1] which is like a golf score for your site’s page size. My site [2] came in just over 71k when I measured before getting added (for all assets). This project also introduced me to Cloudflare Radar [3] which includes a great tool for site analysis/page sizing, but is mainly a general dashboard for the internet.

[1] https://512kb.club/

[2] https://anderegg.ca/

[3] https://radar.cloudflare.com/

tgv 19 July 2025
This could be another reason: https://blog.cloudflare.com/russian-internet-users-are-unabl...

> ... analysis [by Cloudflare] suggests that the throttling [by Russian ISPs] allows Internet users to load only the first 16 KB of any web asset, rendering most web navigation impossible.

9dev 19 July 2025
The overlap of people who don’t know what TCP Slow Start is and those who should care about their website loading a few milliseconds faster is incredibly small. A startup should focus on, well, starting up, not performance; a corporation large enough to optimise speed at that level will have a team of experienced SREs who know which details to obsess over.
firecall 19 July 2025
Damn... I'm at 17.2KB for my home page! (not including dependencies)

FWIW I optimised the heck out of my personal homepage and got 100/100 for all Lighthouse scores. Which I had not previously thought possible LOL

Built in Rails too!

It's absolutely worth optimising your site though. It just is such a pleasing experience when a page loads without any perceptible lag!

hackerman_fi 19 July 2025
The article has IMO two flawed arguments:

1. There is math for how long it takes to send even one packet over a satellite connection (~1600 ms). It’s a weak argument for the 14 kB rule, since there is no comparison with a larger website. Ten packets won’t necessarily take 16 seconds.

2. There is a mention that images on a webpage are included in this 14 kB rule. In what case are images inlined into a page’s initial load? If this is a special case and 99.9% of images don’t follow it, it should be mentioned at the very least.
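A quick illustrative model of point 1, assuming a 10-packet initial window that doubles each round trip (real stacks vary, so the figures are rough):

```python
# Slow-start sketch: packets sent per round trip double (initcwnd, 2x, 4x, ...),
# so 10 packets over a ~1600 ms round trip do NOT take 10 x 1600 ms.
def round_trips(total_packets: int, initcwnd: int = 10) -> int:
    """Round trips needed to deliver total_packets under doubling windows."""
    sent, cwnd, rtts = 0, initcwnd, 0
    while sent < total_packets:
        sent += cwnd
        cwnd *= 2
        rtts += 1
    return rtts

RTT_MS = 1600  # the satellite figure from the article
for packets in (10, 20, 100):
    print(packets, "packets ->", round_trips(packets) * RTT_MS, "ms")
# 10 packets fit in the first window (1600 ms); even 100 take ~6400 ms,
# nowhere near 16 seconds.
```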

Alifatisk 19 July 2025
I agree with the sentiment here. The thing is, I’ve noticed that the newer generations use frameworks like Next.js as the default for building simple static websites. That’s their bare-bones starting point. The era of plain HTML + CSS (and maybe a sprinkle of JS) feels like it’s fading away, sadly.
simgt 19 July 2025
Aside from latency, reducing resource consumption to the minimum required should always be a concern if we intend to have a sustainable future. The environmental impact of our networks is not negligible. Given the snarky comments here, we clearly have a long way to go.

EDIT: some replies missed my point. I am not claiming this particular optimization is the holy grail, only that I’d have liked the added benefit of reduced energy consumption to be mentioned.

ksec 19 July 2025
Missing 2021 in the title.

I know it is not the exact topic, but sometimes I think we don’t need the fastest response time so much as a consistent response time. Like every single page within the site being fully rendered in exactly 1 s. Nothing more, nothing less.

the_precipitate 19 July 2025
And you do know that the .exe format is wasteful; a .com file actually saves quite a few bytes if you can limit your executable’s size to less than 0xFF00 bytes (man, I am old).
tomhow 19 July 2025
Discussed at the time:

A 14kb page can load much faster than a 15kb page - https://news.ycombinator.com/item?id=32587740 - Aug 2022 (343 comments)

mikl 19 July 2025
How relevant is this now, if you have a modern server that supports HTTP/3?

HTTP/3 uses UDP rather than TCP, so TCP slow start should not apply at all.

xg15 19 July 2025
> Also HTTPS requires two additional round trips before it can do the first one — which gets us up to 1836ms!

Doesn't this sort of undo the entire point of the article?

If the idea was to serve the entire web page in the first roundtrip, wouldn't you have lost the moment TLS is used? Not only does the TLS handshake send lots of stuff (including the certificate) that will likely get you over the 14kb boundary before you even get the chance to send a byte of your actual content - but the handshake also includes multiple request/response exchanges between client and server, so it would require additional roundtrips even if it stayed below the 14kb boundary.

So the article's advice only holds for unencrypted plain-TCP connections, which no one would want to use today.

The advice might be useful again if you use QUIC/HTTP3, because that one ditches both TLS and TCP and provides the features from both in its own thing. But then, you'd have to look up first how congestion control and bandwidth estimation works in HTTP3 and if 14kb is still the right threshold.
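A toy round-trip budget along those lines (the handshake counts and the 600 ms RTT are illustrative assumptions, not measurements):

```python
# Time to first byte of the HTTP response, counting handshake round trips
# plus one final round trip for the request/response itself.
def time_to_first_byte_ms(rtt_ms: int, handshake_rtts: int) -> int:
    return (handshake_rtts + 1) * rtt_ms

RTT = 600  # satellite-ish round trip, illustrative
print(time_to_first_byte_ms(RTT, 3))  # TCP + TLS 1.2 (1 + 2 RTTs): 2400 ms
print(time_to_first_byte_ms(RTT, 2))  # TCP + TLS 1.3 (1 + 1 RTTs): 1800 ms
print(time_to_first_byte_ms(RTT, 1))  # QUIC: transport + TLS in one: 1200 ms
```

The point being that on a high-latency link the handshakes, not the 14 kB payload, dominate time to first byte.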

youngtaff 19 July 2025
It’s not really relevant in 2025…

The HTTPS negotiation is going to consume the initial round trips, which should start increasing the size of the window.

Modern CDNs start with larger initial windows and also pace the packets onto the network to reduce the chances of congestion.

There’s also a question as to how relevant the 14 kB rule has ever been… HTML renders progressively, so as long as there’s some meaningful content in the early packets, the overall size is less important.

LAC-Tech 19 July 2025
This looks like such an interesting article, but it’s completely ruined by the fact that every sentence is its own paragraph.

I swear I am not just trying to be a dick here. If I didn't think it had great content I wouldn't have commented. But I feel like I'm reading a LinkedIn post. Please join some of those sentences up into paragraphs!

justmarc 19 July 2025
Does anyone have examples of tiny, yet aesthetically pleasing websites or pages?

Would love it if someone kept a list.

gammalost 19 July 2025
If you care about reducing the amount of back and forth, then just use QUIC.
zelphirkalt 19 July 2025
My plain HTML alone is 10kB and it is mostly text. I don't think this is achievable for most sites, even the ones limiting themselves to only CSS and HTML, like mine.
mikae1 19 July 2025
> Once you lose the autoplaying videos, the popups, the cookies, the cookie consent banners, the social network buttons, the tracking scripts, javascript and css frameworks, and all the other junk nobody likes — you're probably there.

How about a single image? I suppose a lot of people (visitors and webmasters) like to have an image or two on the page.

paales2 19 July 2025
Or maybe we shouldn’t. A good experience doesn’t have to load in under 50 ms; it is fine for it to take a second. 5G is common, and people with slower connections accept longer waiting times. Optimizing is good, but fixating isn’t.
smartmic 19 July 2025
If I understood correctly, the rule is dependent on web server features and/or configuration. In that case, an overview of web servers which have or have not implemented the slow start algorithm would be interesting.
maxlin 19 July 2025
The geostationary satellite example, while interesting, is kinda obsolete in the age of Starlink
palata 19 July 2025
Fortunately, most websites include megabytes of bullshit, so it's not remotely a concern for them :D.
zevv 19 July 2025
And now try to load the same website over HTTPS
nottorp 19 July 2025
So how bad is it when you add https?
eviks 19 July 2025
Has this theory been tested?
adastra22 19 July 2025
The linked page is 35kB.
moomoo11 19 July 2025
I’d care about this if I were selling in India or Africa.

If I’m selling to cash cows in America or Europe it’s not an issue at all.

As long as you have >10 Mbps download across 90% of users, I think it’s better to think about making money. Besides, if you don’t know that lazy loading exists in 2025, fire yourself lol.

austin-cheney 19 July 2025
It seems the better solution is to not use HTTP server software that employs this slow start concept.

Using my own server software I was able to produce a complex single page app that resembled an operating system graphical user interface and achieve full state restoration as fast as 80ms from localhost page request according to the Chrome performance tab.