We have experimented with changing the rate and looking at what kinds of errors resulted. In our experience, 1000 samples per second was the most dependable rate. When we slowed the rate by several orders of magnitude, the data for infrequently executed code disappeared from the results (though the hotspots remained, with the same relative percentages). When we sped the rate up by several orders of magnitude, the data simply became very unreliable. It's best to leave the rate at 1000 samples per second unless you have a compelling reason to change it.
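To make the tradeoff concrete, here is a minimal sketch of the kind of timer-driven sampling being described, assuming a POSIX system where `SIGPROF` is delivered on CPU time. The function names (`profile`, `_sample`, `hot`) and the counter-based bookkeeping are ours for illustration, not the actual profiler's implementation; lowering `hz` here would starve rarely executed functions of samples, just as the text describes.

```python
import collections
import signal

# Histogram of which function was executing when each sample fired.
samples = collections.Counter()

def _sample(signum, frame):
    # 'frame' is the stack frame that the timer interrupted;
    # record the name of the function it was executing.
    samples[frame.f_code.co_name] += 1

def profile(fn, hz=1000):
    """Run fn while sampling the interrupted frame hz times per CPU second."""
    signal.signal(signal.SIGPROF, _sample)
    interval = 1.0 / hz
    signal.setitimer(signal.ITIMER_PROF, interval, interval)
    try:
        fn()
    finally:
        # Disarm the timer so sampling stops with the profiled call.
        signal.setitimer(signal.ITIMER_PROF, 0, 0)
    return samples

def hot():
    # A deliberately CPU-bound loop to act as the hotspot.
    total = 0
    for i in range(2_000_000):
        total += i * i
    return total

counts = profile(hot, hz=1000)
```

After the run, `counts` holds per-function sample tallies; with a single dominant loop like this, nearly all samples land in `hot`, while a function that runs for well under one sampling interval may collect no samples at all.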
We wrote up the results in the June 2004 issue of Dr. Dobb's Journal ("A Heisenberg Compensator for Measuring Software Performance").