You missed microseconds() :-)
Though such calls might seem to make the interface complete, they offer no capability that cannot be achieved with the existing interface: tick_count::interval_t::seconds() returns a double, and a simple multiplication gives you the value in whatever units you suggest.
There's also a misperception you may introduce with such an interface, concerning precision versus accuracy. Just because the machine has the precision to record numbers representing such small intervals doesn't mean it records them to that accuracy. Note this quote from the Systems Programming Manual, Part 2:
For Pentium 4 processors, Intel Xeon processors (family [0FH], models [03H and higher]); for Intel Core Solo and Intel Core Duo processors (family [06H], model [0EH]); for the Intel Xeon processor 5100 series and Intel Core 2 Duo processors (family [06H], model [0FH]); for Intel Core 2 and Intel Xeon processors (family
[06H], DisplayModel [17H]); for Intel Atom processors (family [06H], DisplayModel [1CH]): the time-stamp counter increments at a constant rate. That rate may be set by the maximum core-clock to bus-clock ratio of the
processor or may be set by the maximum resolved frequency at which the processor is booted. The maximum resolved frequency may differ from the maximum qualified frequency of the processor, see Section 18.20.5 for more detail.
So even if your machine has a 2.4 GHz clock that makes one CPU clock cycle roughly 417 picoseconds, that doesn't mean the counter advances a picosecond at a time; it ticks at most once per cycle, and the rate it ticks at may not match the advertised clock. Offering an interface that derives nanoseconds suggests that you can actually measure in nanoseconds, which is misleading.