
math - How can I accurately measure small time differences in Python? - Stack Overflow


I'm working on a project in which I'm analyzing how quickly Newton's method finds a zero of a function from different starting points. I've written the program below, and everything seems to work except the time measurement. Because the method converges quickly, the elapsed time is very small, and I suspect the timer's resolution rounds it down to zero, so the output is nearly always zero. Is there any way to measure this more accurately?

P.S.: I'm quite new to coding, so I'm sorry if my code is a bit of a mess, but I hope you can help me.

My code:

import math
import time

#enter function
def f(x):
    return x**2 - 4

#enter derivative
def fderiv(x):
    return 2*x

#Newton's method
def Newton(x):
    return (x - f(x)/fderiv(x))

#enter zero
zero = 2

#change starting point
for n in range(-100,100):
    start = zero + n/100
    #apply Newton on starting point
    current = start
    starttime = time.time()
    difference = abs(current - zero)
    try:
        while difference > 0.00001:
            current = Newton(current)
            difference = abs(current - zero)
        elapsed_time = time.time()-starttime
        print(str(start), ": elapsed time is: %.10f" % (elapsed_time))
    except:
        print(str(start), ": Error")

I've tried using some different methods, like datetime.fromtimestamp(), time.time_ns() or process_time(), but none of them seem to be working. The output is still mostly zeroes. Any ideas on how to fix it?


asked Feb 3 at 16:15 by 3005736
  • The standard answer would likely be stackoverflow/a/38319607/218663, but you also might want to check out the timeit module – JonSG Commented Feb 3 at 16:39
  • Analysis of numerical algorithms is typically phrased in terms of numbers of iterations and numbers of arithmetic operations and function calls per iteration -- these are quantities which can be determined by studying the program and counting iterations; this approach factors out the dependence on the speed and other characteristics of the cpu. Hope this helps. – Robert Dodier Commented Feb 3 at 17:14
  • This question is similar to: How can I get millisecond and microsecond-resolution timestamps in Python?. If you believe it’s different, please edit the question, make it clear how it’s different and/or how the answers on that question are not helpful for your problem. – Anerdw Commented Feb 4 at 0:55
  • This is an XY problem. You'll get some pretty serious diminishing returns from trying to increase your timer's precision. Instead, do a large number of iterations so the precision doesn't matter as much. timeit is a good choice for this, as JonSG suggested. – Anerdw Commented Feb 4 at 0:56
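The timeit approach suggested in the comments can be sketched as follows. This is a minimal example, not the asker's exact setup: it wraps the question's Newton loop in a helper (the name `newton_run` is invented here) and times many repetitions, so the timer's resolution stops mattering and only the per-run average is reported.

```python
import timeit

def f(x):
    return x**2 - 4

def fderiv(x):
    return 2*x

def newton_run(start, zero=2, tol=1e-5):
    # Iterate Newton's method until within tol of the known zero.
    current = start
    while abs(current - zero) > tol:
        current = current - f(current)/fderiv(current)
    return current

# Time 10,000 runs in one go; timeit returns the TOTAL time,
# so divide by the repetition count for a per-run average.
total = timeit.timeit(lambda: newton_run(1.5), number=10_000)
print(f"average per run: {total/10_000:.3e} s")
```

Averaging over many runs also smooths out scheduler noise, which a single sub-microsecond measurement cannot do.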

1 Answer 1


For high-resolution timing, use time.perf_counter(). Unlike time.time(), it uses the highest-resolution clock available on the platform and is monotonic, so very small intervals are not rounded away.
