Python: time.sleep(n) would sleep much longer than n seconds?

0 votes
403 views

I'm writing some scripts in Python, and I've found that sometimes when I use time.sleep(1) to sleep for 1 second, it actually sleeps for 9 or 10 seconds, or even longer.

In the Python documentation, I saw this:

time.sleep(secs)
....
"Also, the suspension time may be longer than requested by an arbitrary amount because of the scheduling of other activity in the system."

So, my question is: under what conditions will time.sleep suspend for longer than expected?

posted Jun 18, 2014 by anonymous


1 Answer

+1 vote

There are several reasons the sleep time is always described as "at least" that long. Firstly, your process will always sleep for some multiple of the kernel's internal task-switching resolution. That doesn't explain why time.sleep(1) would sleep for nine seconds, but if you try this, you'll see that effect at work:

for i in range(1000): time.sleep(0.001)

In theory, that should be the same as time.sleep(1), right? Well, no. On my systems it sleeps for at least 2-3 seconds, which suggests a minimum of 2-3 milliseconds per sleep call. And it could easily sleep for a lot longer than that.
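You can measure that per-call overhead yourself; a quick sketch (the exact timings will vary with your OS and system load):

```python
import time

# Time 1000 one-millisecond sleeps. The gap between the requested total
# (1.0s) and the measured total is the scheduler's per-wakeup overhead.
start = time.time()
for _ in range(1000):
    time.sleep(0.001)
elapsed = time.time() - start

print("requested: 1.000s, actual: %.3fs" % elapsed)
```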

answer Jun 18, 2014 by Sidharth
Similar Questions
+2 votes

As per my understanding, a receive system call blocks: the code will not go ahead until it receives a response from the client. My question is: how do I set a timeout on that? I want my code to wait only a few seconds and then give up.
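One common approach (a sketch, not the poster's code) is socket.settimeout(), which makes a blocking recv() raise socket.timeout after the given interval. Here a local socket pair stands in for a real client:

```python
import socket

# Make a blocking recv() give up after a short timeout instead of
# waiting forever. A local loopback connection simulates a silent peer.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
server.listen(1)

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())
conn, _ = server.accept()

conn.settimeout(2.0)               # recv() now raises socket.timeout after 2s
try:
    conn.recv(1024)                # the peer never sends, so this times out
    timed_out = False
except socket.timeout:
    timed_out = True

for s in (conn, client, server):
    s.close()

print("timed out:", timed_out)
```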

+5 votes

Using 1/3 as an example,

 >>> 1./3
0.3333333333333333
 >>> print "%.50f" % (1./3)
0.33333333333333331482961625624739099293947219848633
 >>> print "%.50f" % (10./3)
3.33333333333333348136306995002087205648422241210938
 >>> print "%.50f" % (100./3)
33.33333333333333570180911920033395290374755859375000

which seems to mean that the real (at least default) precision is limited to "double", i.e. about 16 significant digits (with rounding error). Is there a way to increase the real precision, preferably as the default?
For instance, UBasic uses a "Words for fractionals", f, "Point(f)" system, where Point(f) sets the decimal display precision, .1^int(ln(65536^f)/ln(10)), with the last few digits usually garbage.
Using "90*(pi/180)*180/pi" as an example to highlight the rounding error (4 = UBasic's f default value):

 Point(2)=.1^09: 89.999999306
 Point(3)=.1^14: 89.9999999999944
 Point(4)=.1^19: 89.9999999999999998772
 Point(5)=.1^24: 89.999999999999999999999217
 Point(7)=.1^33: 89.999999999999999999999999999999823
 Point(10)=.1^48: 89.999999999999999999999999999999999999999999997686
 Point(11)=.1^52: 89.9999999999999999999999999999999999999999999999999632

If not in the core program, is there a higher decimal precision module that can be added?
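The standard library's decimal module provides user-settable precision, which covers most of what is asked here; a minimal sketch (mpmath and gmpy2 are third-party alternatives for faster or more general arbitrary precision):

```python
from decimal import Decimal, getcontext

# Raise the working precision to 50 significant digits, roughly
# analogous to UBasic's Point(f) setting.
getcontext().prec = 50
third = Decimal(1) / Decimal(3)
print(third)
```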

0 votes

Previously, we found that our Python scripts consume too much memory, so I used Python's resource module to restrict RLIMIT_AS's soft and hard limits to 200 MB.
On RHEL 5.3 this works fine, but on CentOS 6.2 with Python 2.6.6 it reports a memory error (exceeding 200 MB). I then tested with a very small script, and the result was not what I expected: it still uses too much memory on my CentOS 6.2 Python.
I can understand that a 64-bit machine will occupy more virtual memory than a 32-bit one, because some types are wider. But I don't know why they differ so greatly (6 MB vs. 180 MB). Or is this because Python 2.6 on CentOS 6.2 uses a different memory allocator than Python's default one? Could you kindly give me some clues?
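For anyone reproducing this, one way to observe RLIMIT_AS without destabilizing the current process is to apply the limit to a child interpreter via preexec_fn (a sketch; the 200 MB cap and 300 MB allocation are illustrative figures, not the poster's exact numbers):

```python
import resource
import subprocess
import sys

# Cap the child's address space (virtual memory) at 200 MB, then try to
# allocate ~300 MB inside it. preexec_fn runs in the child only, so the
# parent process is unaffected. Note RLIMIT_AS limits *virtual* memory:
# on 64-bit systems the interpreter may reserve large mappings at
# startup, which is one reason such limits trip earlier than expected.
LIMIT = 200 * 1024 * 1024

def cap_address_space():
    resource.setrlimit(resource.RLIMIT_AS, (LIMIT, LIMIT))

proc = subprocess.run(
    [sys.executable, "-c", "x = bytearray(300 * 1024 * 1024)"],
    preexec_fn=cap_address_space,
    stderr=subprocess.DEVNULL,
)
print("child exit code:", proc.returncode)  # nonzero: the cap was enforced
```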

+2 votes

I need to search through a directory of text files for a string. Here is a short program I made in the past to search through a single text file for a line of text.

How can I modify the code to search through a directory of files that have different filenames, but the same extension?

fname = raw_input("Enter file name: ") #"*.txt"
fh = open(fname)
biglst = []
for line in fh:
    line = line.rstrip()
    biglst += line.split()
final = []
for out in biglst:
    if out not in final:
        final.append(out)
final.sort()
print(final)
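A sketch of the asked-for extension using the standard glob module (the throwaway directory and its file names are placeholders created just for the demonstration; replace `folder` with your own path):

```python
import glob
import os
import tempfile

# Create a stand-in directory with two .txt files to search.
folder = tempfile.mkdtemp()
for name, text in [("a.txt", "red green\n"), ("b.txt", "green blue\n")]:
    with open(os.path.join(folder, name), "w") as fh:
        fh.write(text)

# Collect the unique words from every .txt file in the directory.
# A set replaces the original's "if out not in final" de-duplication.
words = set()
for fname in glob.glob(os.path.join(folder, "*.txt")):
    with open(fname) as fh:
        for line in fh:
            words.update(line.split())

print(sorted(words))   # ['blue', 'green', 'red']
```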
0 votes

I'm somewhat confused working with @staticmethods. My logger and configuration methods are called n times, although I make only one call;
n is the number of classes that import the logger and configuration classes from the subfolder mymodule. What might my mistake be?

### __init__.py ###

from mymodule.MyLogger import MyLogger
from mymodule.MyConfig import MyConfig

##### my_test.py ##########
from mymodule import MyConfig,MyLogger

#Both methods are static
key,logfile,loglevel = MyConfig().get_config('Logging')
log = MyLogger.set_logger(key,logfile,loglevel)
log.critical(time.time())

#Output
2013-05-21 17:20:37,192 - my_test - 17 - CRITICAL - **********.19
2013-05-21 17:20:37,192 - my_test - 17 - CRITICAL - **********.19
2013-05-21 17:20:37,192 - my_test - 17 - CRITICAL - **********.19
2013-05-21 17:20:37,192 - my_test - 17 - CRITICAL - **********.19
...
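A common cause of this symptom is that the logger setup attaches a new handler on every call, so each record is emitted once per accumulated handler. A hypothetical guarded set_logger (names assumed, not the poster's actual code) avoids that:

```python
import logging

# Sketch: a set_logger that is safe to call many times. It attaches a
# handler only on the first call for a given name, so each record is
# emitted exactly once no matter how many modules run the setup.
def set_logger(name, logfile=None, level=logging.DEBUG):
    log = logging.getLogger(name)      # same name -> same logger object
    if not log.handlers:               # already configured? do nothing
        handler = logging.FileHandler(logfile) if logfile else logging.StreamHandler()
        handler.setFormatter(logging.Formatter(
            "%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(message)s"))
        log.addHandler(handler)
        log.setLevel(level)
    return log

log = set_logger("my_test")
set_logger("my_test")                  # second call adds no second handler
log.critical("logged once")
```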