Problem description
I have the following code:
char fname[255] = {0};
snprintf(fname, 255, "%s_test_no.%d.txt", baseLocation, i);
vs
std::string fname = baseLocation + "_test_no." + std::to_string(i) + ".txt";
Which one performs better? Does the second one involve temporary creation? Is there a better way to do this?
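For reference, a minimal, compilable sketch of the two variants being compared (the value of i and the contents of baseLocation are placeholders I picked; note that each operator+ in the string variant produces an intermediate std::string):

#include <cstdio>
#include <string>

int main()
{
    std::string baseLocation = "baseLocation"; // placeholder value
    int i = 42;                                // placeholder value

    // Variant 1: fixed-size char buffer filled by snprintf
    char fname[255] = {0};
    std::snprintf(fname, sizeof(fname), "%s_test_no.%d.txt", baseLocation.c_str(), i);

    // Variant 2: std::string concatenation; each operator+ builds an intermediate string
    std::string fname2 = baseLocation + "_test_no." + std::to_string(i) + ".txt";

    std::printf("%s\n%s\n", fname, fname2.c_str());
    return 0;
}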
Answer
Let's run the numbers. The code (I used PAPI timers):
#include <iostream>
#include <string>
#include <stdio.h>
#include "papi.h"
#include <vector>
#include <cmath>

#define TRIALS 10000000

// Simple microsecond timer backed by PAPI's real-time counter.
class Clock
{
public:
    typedef long_long time;
    time start;
    Clock() : start(now()) {}
    void restart() { start = now(); }
    time usec() const { return now() - start; }
    time now() const { return PAPI_get_real_usec(); }
};

int main()
{
    int eventSet = PAPI_NULL;
    PAPI_library_init(PAPI_VER_CURRENT);
    if (PAPI_create_eventset(&eventSet) != PAPI_OK)
    {
        std::cerr << "Failed to initialize PAPI event" << std::endl;
        return 1;
    }

    Clock clock;
    std::vector<long_long> usecs;
    const char* baseLocation = "baseLocation";
    //std::string baseLocation = "baseLocation";
    char fname[255] = {};

    // Time each formatting call individually.
    for (int i = 0; i < TRIALS; ++i)
    {
        clock.restart();
        snprintf(fname, 255, "%s_test_no.%d.txt", baseLocation, i);
        //std::string fname = baseLocation + "_test_no." + std::to_string(i) + ".txt";
        usecs.push_back(clock.usec());
    }

    // Mean of the per-iteration timings.
    long_long sum = 0;
    for (auto vecIter = usecs.begin(); vecIter != usecs.end(); ++vecIter)
    {
        sum += *vecIter;
    }
    double average = static_cast<double>(sum) / static_cast<double>(TRIALS);
    std::cout << "Average: " << average << " microseconds" << std::endl;

    // Variance, standard deviation, and a 95% confidence interval.
    double variance = 0;
    for (auto vecIter = usecs.begin(); vecIter != usecs.end(); ++vecIter)
    {
        variance += (*vecIter - average) * (*vecIter - average);
    }
    variance /= static_cast<double>(TRIALS);
    std::cout << "Variance: " << variance << " microseconds" << std::endl;
    std::cout << "Std. deviation: " << sqrt(variance) << " microseconds" << std::endl;
    double CI = 1.96 * sqrt(variance) / sqrt(static_cast<double>(TRIALS));
    std::cout << "95% CI: " << average - CI << " usecs to " << average + CI << " usecs" << std::endl;
}
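If you don't have PAPI installed, a rough std::chrono-based stand-in for the Clock class above should work for this purpose (this substitution is mine, assuming steady_clock's microsecond resolution is adequate; the rest of the benchmark stays unchanged):

#include <chrono>

// Drop-in replacement for the PAPI-backed Clock, reporting microseconds.
class Clock
{
public:
    typedef long long time;
    time start;
    Clock() : start(now()) {}
    void restart() { start = now(); }
    time usec() const { return now() - start; }
    time now() const
    {
        return std::chrono::duration_cast<std::chrono::microseconds>(
                   std::chrono::steady_clock::now().time_since_epoch()).count();
    }
};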
Play with the comments to switch between one approach and the other. 10 million iterations of each method on my machine gave:
Using the char array:
Average: 0.240861 microseconds
Variance: 0.196387 microseconds
Std. deviation: 0.443156 microseconds
95% CI: 0.240586 usecs to 0.241136 usecs
Using the string approach:
Average: 0.365933 microseconds
Variance: 0.323581 microseconds
Std. deviation: 0.568842 microseconds
95% CI: 0.365581 usecs to 0.366286 usecs
So at least on MY machine, with MY code and MY compiler settings, character arrays give a 34% speedup over strings, using the formula

(string time - char array time) / string time

which expresses the difference between the approaches as a percentage of the string time alone. My original percentage was also correct; I had used the character array approach as the reference point instead, which shows a 52% slowdown when moving to strings, but I found that framing misleading.
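Plugging in the averages measured above (my arithmetic, as a sanity check):

speedup relative to string:      (0.365933 - 0.240861) / 0.365933 ≈ 0.342 → ~34% faster
slowdown relative to char array: (0.365933 - 0.240861) / 0.240861 ≈ 0.519 → ~52% slower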
I'll take any and all comments on how I did this wrong :)
Using the string approach:
Average: 0.338876 microseconds
Variance: 0.853823 microseconds
Std. deviation: 0.924026 microseconds
95% CI: 0.338303 usecs to 0.339449 usecs
Using the character array:
Average: 0.239083 microseconds
Variance: 0.193538 microseconds
Std. deviation: 0.439929 microseconds
95% CI: 0.238811 usecs to 0.239356 usecs
So the character array approach remains significantly faster, although less so: in these tests it was about 29% faster.