Problem description
Can anyone explain why I always get a time of 0 from the code below? I just want a millisecond timer to calculate the delay between sending and receiving data from a socket but no matter what I try, I always get a result of 0...I even tried microseconds just in case my system was executing it in less than 1ms.
printf("#: ");
bzero(buffer,256);
fgets(buffer,255,stdin);
struct timeval start, end;
unsigned long mtime, seconds, useconds;
gettimeofday(&start, NULL);
n = write(clientSocket,buffer,strlen(buffer));
if (n < 0)
{
error("Error: Unable to write to socket!\n");
}
bzero(buffer,256);
n = read(clientSocket,buffer,255);
gettimeofday(&end, NULL);
seconds = end.tv_sec - start.tv_sec;
useconds = end.tv_usec - start.tv_usec;
mtime = ((seconds) * 1000 + useconds/1000.0) + 0.5;
if (n < 0)
{
error("Error: Unable to read from socket!\n");
}
printf("%s\n",buffer);
printf("Delay: %lu microseconds\n", useconds);
Recommended answer
Assuming your result is in mtime: mtime is an integer, but you calculate the elapsed time with floating-point values, so if
((seconds) * 1000 + useconds/1000.0) + 0.5
evaluates to less than 1.0, converting it to an integer truncates it to 0.
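For illustration (a standalone snippet, not part of the question's code), storing a double smaller than 1.0 in an unsigned long discards the fractional part:

#include <stdio.h>

int main(void)
{
    double elapsed = 0.3 + 0.5;    /* same shape as the expression above, still below 1.0 */
    unsigned long mtime = elapsed; /* the conversion truncates toward zero */
    printf("%lu\n", mtime);        /* prints 0 */
    return 0;
}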
Simply change the type of mtime to float, or, if you can keep the result in microseconds, use
((seconds) * 1000000 + useconds) + 500
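Below is a minimal, self-contained sketch of the same fix; usleep() merely stands in for the write()/read() round trip from the question. It keeps the microsecond difference in integer arithmetic and the millisecond value in a double, so neither collapses to 0:

#include <stdio.h>
#include <unistd.h>
#include <sys/time.h>

int main(void)
{
    struct timeval start, end;

    gettimeofday(&start, NULL);
    usleep(2500);                  /* stand-in for the socket round trip (~2.5 ms) */
    gettimeofday(&end, NULL);

    /* Full difference in microseconds: pure integer arithmetic, never truncated to 0. */
    long usec = (end.tv_sec - start.tv_sec) * 1000000L
              + (end.tv_usec - start.tv_usec);

    /* Milliseconds kept in a double instead of an unsigned long. */
    double msec = (end.tv_sec - start.tv_sec) * 1000.0
                + (end.tv_usec - start.tv_usec) / 1000.0;

    printf("Delay: %ld microseconds (%.3f ms)\n", usec, msec);
    return 0;
}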