Unix Timestamps: The Hidden Clock Inside Every Codebase
You're reading through a database table and you find a column called created_at with the value 1713830400. Or you're debugging a log file and every event has a timestamp like 1713867823. These are Unix timestamps — and once you understand what they are, you start seeing them everywhere.
What Unix time actually is
Unix time (also called POSIX time or epoch time) is a simple count of the number of seconds that have elapsed since January 1, 1970, 00:00:00 UTC. That date and time is called the Unix epoch — the reference point from which all Unix time is measured.
Right now, as you read this, the Unix timestamp is somewhere in the low 1.7 billion range. It's just a number. A large integer representing seconds since a fixed point in time.
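Because a timestamp is just an integer, reading and producing one takes a single standard-library call. A minimal Python sketch:

```python
import time
from datetime import datetime, timezone

# Current Unix time: seconds elapsed since 1970-01-01 00:00:00 UTC
now = time.time()  # a float with fractional seconds, e.g. 1713191400.123
print(int(now))

# Timestamp 0 is the epoch itself
epoch = datetime.fromtimestamp(0, tz=timezone.utc)
print(epoch.isoformat())  # 1970-01-01T00:00:00+00:00
```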
Why 1970? It's largely historical. When the Unix operating system was being developed in the late 1960s and early 1970s, its developers needed a simple reference point, and January 1, 1970 was a recent, convenient round date that kept the counts manageable.
Why every system uses epoch time
Storing time as a single integer has major advantages over storing it as a formatted string:
No timezone ambiguity. A timestamp like "2024-04-15 14:30:00" is ambiguous — is that UTC? Local time? Which timezone? A Unix timestamp of 1713191400 always means exactly one point in time, worldwide. Timezone conversion is done at display time, not storage time.
Simple arithmetic. "How many seconds between these two events?" is a subtraction. "Was this event within the last 24 hours?" is a comparison against now - 86400. Doing the same with formatted date strings requires parsing, normalization, and careful handling of edge cases.
Compact storage. A 32-bit integer takes 4 bytes. A formatted datetime string like "2024-04-15T14:30:00Z" takes 20 bytes as ASCII. This also has knock-on effects for IDs — one reason UUID v7 embeds a timestamp prefix rather than using a separate field. At millions of rows, this adds up.
Language-agnostic. Every programming language and database system understands Unix timestamps. Formatted dates require agreement on format, locale, and timezone handling.
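The arithmetic and comparison points above can be sketched in a few lines of Python (the helper names here are illustrative, not a standard API):

```python
DAY = 86_400  # seconds in 24 hours

def seconds_between(a: int, b: int) -> int:
    """Elapsed time between two events is plain integer subtraction."""
    return abs(b - a)

def within_last_day(ts: int, now: int) -> bool:
    """Was the event in the 24 hours up to `now`? A single comparison."""
    return now - DAY <= ts <= now

# 2024-04-15 14:30:00 UTC and one hour later
print(seconds_between(1713191400, 1713195000))        # 3600
print(within_last_day(1713191400, 1713191400 + 3600)) # True
```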
Seconds, milliseconds, microseconds
This is where it gets confusing. Unix time is defined in seconds, but many modern systems use milliseconds (thousandths of a second) or even microseconds (millionths) for higher precision.
- Seconds: 10 digits, currently ~1.7 billion. Example: 1713191400
- Milliseconds: 13 digits. Example: 1713191400000
- Microseconds: 16 digits. Example: 1713191400000000
JavaScript's Date.now() and most JavaScript Date operations use milliseconds. Python's time.time() and most Unix system calls use seconds (with fractional seconds available). Database systems vary by implementation.
When a timestamp converts to a date in early 1970 or implausibly far in the future, it's almost certainly a precision mismatch: treating a second-precision timestamp as milliseconds puts you a few weeks after the epoch, and treating a millisecond timestamp as seconds puts you tens of thousands of years out. A 13-digit timestamp divided by 1000 gives the correct second-precision value.
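One defensive pattern is to guess the unit from the digit count before converting. This is a heuristic, not a standard — the thresholds below assume present-day magnitudes and will misjudge very old or far-future values:

```python
def to_seconds(ts: int) -> float:
    """Normalize a raw Unix timestamp to seconds by digit count.

    Heuristic assumption: 10 digits = seconds, 13 = milliseconds,
    16 = microseconds, which holds for contemporary timestamps.
    """
    digits = len(str(abs(ts)))
    if digits >= 16:
        return ts / 1_000_000   # microseconds
    if digits >= 13:
        return ts / 1_000       # milliseconds
    return float(ts)            # already seconds

print(to_seconds(1713191400))        # 1713191400.0
print(to_seconds(1713191400000))     # 1713191400.0
print(to_seconds(1713191400000000))  # 1713191400.0
```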
ISO 8601 and RFC 3339
When timestamps are serialized as strings (in APIs, logs, and config files), two formats are dominant:
ISO 8601: 2024-04-15T14:30:00Z — the T separates date from time, Z means UTC. Offset versions look like 2024-04-15T16:30:00+02:00.
RFC 3339: A stricter profile of ISO 8601 used in internet protocols such as Atom feeds and many JSON APIs. Nearly identical to ISO 8601 in practice.
Both are unambiguous, timezone-aware, and sortable lexicographically (alphabetical sort gives chronological order if the timezone is consistent). For any API you're building, prefer ISO 8601 over custom date formats.
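Converting between Unix timestamps and ISO 8601 strings takes one call in most languages; in Python, for example, the standard datetime module handles both directions (note that fromisoformat only accepts a trailing Z from Python 3.11 onward):

```python
from datetime import datetime, timezone

ts = 1713191400  # 2024-04-15 14:30:00 UTC

# Timestamp -> ISO 8601 string (timezone applied at display time)
iso = datetime.fromtimestamp(ts, tz=timezone.utc).isoformat()
print(iso)  # 2024-04-15T14:30:00+00:00

# ISO 8601 string with an offset -> timestamp
parsed = datetime.fromisoformat("2024-04-15T16:30:00+02:00")
print(int(parsed.timestamp()))  # 1713191400 — same instant, different offset
```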
Leap seconds: the edge case almost everyone ignores
Unix time technically doesn't count leap seconds. When a leap second is added (as happens occasionally to keep UTC aligned with Earth's rotation), Unix time either repeats a second or uses a "smeared" approach where the extra second is spread across the surrounding hours. This means Unix time is not strictly a count of elapsed SI seconds, and its difference from an atomic timescale like TAI grows with each leap second.
For almost all applications, this doesn't matter. For GPS systems, financial transaction systems, and scientific applications requiring precise time, it does. Most developers can ignore leap seconds entirely, but it's useful to know they exist when someone inevitably asks why a one-second discrepancy appeared in time-sensitive logs.
The Year 2038 problem
Unix time stored in a signed 32-bit integer overflows on January 19, 2038 at 03:14:07 UTC. The stored value wraps from the maximum positive 32-bit integer to the most negative, which most systems would interpret as December 13, 1901.
This is a real problem for embedded systems, legacy POSIX environments, and any system still using 32-bit time_t. Modern 64-bit systems are unaffected — a signed 64-bit integer won't overflow for approximately 292 billion years. The systems at risk are things like older industrial controllers, automotive systems, and legacy infrastructure that hasn't been updated or replaced since the 1990s or early 2000s.
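The wrap can be demonstrated without waiting for 2038 by reinterpreting the bit pattern of a 32-bit integer; a Python sketch, using struct to simulate C's signed 32-bit representation:

```python
import struct
from datetime import datetime, timedelta, timezone

MAX_I32 = 2**31 - 1  # 2147483647: the last second a signed 32-bit time_t can hold
print(datetime.fromtimestamp(MAX_I32, tz=timezone.utc))  # 2038-01-19 03:14:07+00:00

# One second later, the bit pattern wraps to the most negative value.
# Pack max+1 as an unsigned 32-bit int, then reinterpret it as signed.
wrapped = struct.unpack("<i", struct.pack("<I", MAX_I32 + 1))[0]
print(wrapped)  # -2147483648

# Interpreted as seconds before the epoch, that lands in December 1901.
epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
print(epoch + timedelta(seconds=wrapped))  # 1901-12-13 20:45:52+00:00
```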
Converting timestamps in practice
When you encounter a raw timestamp in a log, database dump, or API response, the Timestamp Converter handles the conversion instantly — paste a Unix timestamp in seconds or milliseconds and get the corresponding human-readable date, or enter a date to get the timestamp back.