localtunnel gives you a public URL for any port on your machine in one command. No account, no config file, no VPN — run npx localtunnel --port 3000 and you get https://some-slug.localtunnel.me pointing straight at your dev server.
Why I starred it
Webhook testing is the use case that sends people here. You need a public URL for a Stripe event, a Twilio SMS callback, or a GitHub webhook — and you don't want to deploy a staging server just to see if your handler works. localtunnel solves that without credentials or a daemon.
What made me look closer than usual: the entire client is four files and ~500 lines of JavaScript. No dependency on ngrok's binary, no auth token, no account wall. That simplicity is a design choice worth understanding.
How it works
The entry point in localtunnel.js is a thin adapter that normalizes the overloaded function signature (it accepts (port), (port, options), or (options)) and delivers a Tunnel instance through either a Promise or a callback; the dual API exists explicitly for backward compatibility.
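The shape of that adapter can be sketched like this. The helper names (normalizeArgs, openTunnel) are illustrative stand-ins, not the actual localtunnel.js internals:

```javascript
// Sketch of the dual API surface: normalize (port), (port, options), or
// (options) into one options object, then support both callback and
// Promise callers. Names here are assumptions, not the real internals.
function normalizeArgs(arg1, arg2, arg3) {
  if (typeof arg1 === 'object') {
    // (options [, callback])
    return { options: arg1, callback: arg2 };
  }
  // (port [, options] [, callback])
  const options = typeof arg2 === 'object' ? { ...arg2, port: arg1 } : { port: arg1 };
  const callback = typeof arg2 === 'function' ? arg2 : arg3;
  return { options, callback };
}

// Stand-in for the real tunnel setup; "opens" a fake tunnel immediately.
function openTunnel(options, cb) {
  cb(null, { url: 'https://example.localtunnel.me', port: options.port });
}

function localtunnel(arg1, arg2, arg3) {
  const { options, callback } = normalizeArgs(arg1, arg2, arg3);
  if (callback) return openTunnel(options, callback); // legacy callback style
  return new Promise((resolve, reject) =>
    openTunnel(options, (err, t) => (err ? reject(err) : resolve(t)))
  );
}
```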
lib/Tunnel.js does the handshake. On open(), it sends a GET to the server (default https://localtunnel.me) at either /?new or /<subdomain>:
// lib/Tunnel.js
const uri = baseUri + (assignedDomain || '?new');
axios.get(uri, params).then(res => {
  cb(null, getInfo(res.data));
});
The server responds with { id, ip, port, url, max_conn_count }. That port isn't 443 — it's a dedicated TCP port the server opened for this tunnel. The client then opens max_conn_count raw TCP sockets directly to that port. No HTTP, no TLS on the relay socket itself — just a plain net.connect.
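To make the handoff concrete, here is a hypothetical mapping from the registration response to what the socket pool needs. The field names come from the response shown above; the function itself is a sketch, not Tunnel.js verbatim:

```javascript
// Hypothetical sketch: turn the registration response body into the info
// the relay-socket pool consumes. Field names match the documented
// response; the mapping is illustrative, not the real Tunnel.js code.
function tunnelInfo(body) {
  const { id, ip, port, url, max_conn_count } = body;
  return {
    name: id,
    url,
    remoteIp: ip,
    remotePort: port,              // dedicated TCP port, not 443
    maxConn: max_conn_count || 1,  // public server defaults to 1
  };
}
```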
The real work is in lib/TunnelCluster.js. Each open() call establishes one relay socket to the server's IP/port, waits for the server to signal an incoming request, then opens a corresponding TCP connection to your local port and pipes them together:
// lib/TunnelCluster.js
local.once('connect', () => {
  remote.resume();
  let stream = remote;
  if (opt.local_host) {
    stream = remote.pipe(new HeaderHostTransformer({ host: opt.local_host }));
  }
  stream.pipe(local).pipe(remote);
});
remote.pause() before the local side connects is the detail that matters: the relay socket stops emitting data until the local connection is ready, so the front of an incoming request can't be consumed and dropped before the pipe is in place.
When a tunnel socket closes, the cluster emits dead and Tunnel.js immediately opens a replacement. This is how lt survives local server restarts: it's not reconnecting the same socket, it's cycling through replacements from a pool.
lib/HeaderHostTransformer.js is a 25-line Transform stream that rewrites the Host header on the first HTTP chunk only, then becomes a passthrough. One regex replace on the first call, this.replaced = true, done:
data.toString().replace(/(\r\n[Hh]ost: )\S+/, (match, $1) => {
  this.replaced = true;
  return $1 + this.host;
})
The pool size is set by the server's max_conn_count. The client opens that many sockets upfront and always keeps the pool at that count, replacing any that die. Under high request rates this matters — with max_conn: 1 (the server default), each incoming request must wait for the prior one to finish before a new relay socket is established.
Using it
# One-shot without installing
npx localtunnel --port 3000
# your url is: https://fuzzy-bear-42.localtunnel.me
# Request a fixed subdomain (availability not guaranteed)
lt --port 3000 --subdomain my-project
# Proxy to a local HTTPS server
lt --port 443 --local-https --allow-invalid-cert
# Print each incoming request
lt --port 3000 --print-requests
As a Node.js module, useful for test harnesses:
const localtunnel = require('localtunnel');
const tunnel = await localtunnel({ port: 3000 });
console.log(tunnel.url); // https://xxx.localtunnel.me
tunnel.on('request', ({ method, path }) => {
  console.log(method, path);
});
// later
tunnel.close();
Rough edges
The relay server at localtunnel.me is a public shared resource with no SLA. It goes down, it throttles, it has connection caps. The --host flag lets you point at a self-hosted server (localtunnel/server) — that's the only real path to reliability.
The max_conn_count default of 1 from the public server means concurrent requests queue up behind a single relay socket. The client code handles higher counts correctly, but you don't get them from the public server. Self-host if you need more.
Test coverage (localtunnel.spec.js) makes live requests against localtunnel.me — there's one integration test file, no unit tests for the stream piping logic or the reconnect behavior. If the public server is down, your tests fail. That's a tradeoff you accept.
Dependencies are minimal: axios for the registration GET, debug for logging, openurl and yargs for the CLI. No native modules, no binary bundling. The last meaningful code change was August 2022; recent commits have been README fixes. The project is stable in the "not much left to change" sense.
Bottom line
If you need a quick public URL for local webhook testing and you trust a shared relay server for development-only use, this works and it's frictionless. For anything that needs reliability or a stable URL, self-host the server and point --host at it — the client is solid, the public relay is the variable.
