This post looks into why StorageFile can be up to 50 times slower than IsolatedStorageFile, and what to do about it.

Problem description

I was just benchmarking multiple algorithms to find the fastest way to load all data in my app when I discovered that the WP7 version of my app running on my Lumia 920 loads the data 2 times as fast as the WP8 version running on the same device. I then wrote the following independent code to test the performance of StorageFile from WP8 and IsolatedStorageFile from WP7. To clarify the title, here are the preliminary benchmark results I got reading 50 files of 20 kB and 100 kB (for the code, see below).

After doing benchmarks for a few hours today and getting some interesting results, let me rephrase my questions:

1. Why is await StreamReader.ReadToEndAsync() consistently slower in every benchmark than the non-async method StreamReader.ReadToEnd()? (This might already be answered in a comment from Neil Turner.)

2. There seems to be a big overhead when opening a file with StorageFile, but only when it is opened on the UI thread. (See the difference in loading times between methods 1 and 3 or between 5 and 6, where 3 and 6 are about 10 times faster than the equivalent UI-thread method.)

3. Are there any other ways to read the files that might be faster?

Update 3

Well, with this update I added 10 more algorithms and reran every algorithm with every previously used file size and number of files.
This time each algorithm was run 10 times, so the raw data in the Excel file is an average of these runs. As there are now 18 algorithms, each tested with 4 file sizes (1 kB, 20 kB, 100 kB, 1 MB) for 50, 100, and 200 files each (18 * 4 * 3 = 216 combinations), there were a total of 2160 benchmark runs, taking a total of 95 minutes (raw running time).

Added benchmarks 25, 26, 27 and the ReadStorageFile method. I had to remove some text because the post had over 30,000 characters, which is apparently the maximum. I updated the Excel file with new data, a new structure, comparisons and new graphs.

The code:

```csharp
public async Task b1LoadDataStorageFileAsync()
{
    StorageFolder data = await ApplicationData.Current.LocalFolder.GetFolderAsync("benchmarks");
    data = await data.GetFolderAsync("samplefiles");
    //b1
    for (int i = 0; i < filepaths.Count; i++)
    {
        StorageFile f = await data.GetFileAsync(filepaths[i]);
        using (var stream = await f.OpenStreamForReadAsync())
        using (StreamReader r = new StreamReader(stream))
        {
            filecontent = await r.ReadToEndAsync();
        }
    }
}

public async Task b2LoadDataIsolatedStorage()
{
    using (var store = IsolatedStorageFile.GetUserStoreForApplication())
    {
        for (int i = 0; i < filepaths.Count; i++)
        {
            using (var stream = new IsolatedStorageFileStream("/benchmarks/samplefiles/" + filepaths[i], FileMode.Open, store))
            using (StreamReader r = new StreamReader(stream))
            {
                filecontent = r.ReadToEnd();
            }
        }
    }
    await TaskEx.Delay(0);
}

public async Task b3LoadDataStorageFileAsyncThread()
{
    StorageFolder data = await ApplicationData.Current.LocalFolder.GetFolderAsync("benchmarks");
    data = await data.GetFolderAsync("samplefiles");
    await await Task.Factory.StartNew(async () =>
    {
        for (int i = 0; i < filepaths.Count; i++)
        {
            StorageFile f = await data.GetFileAsync(filepaths[i]);
            using (var stream = await f.OpenStreamForReadAsync())
            using (StreamReader r = new StreamReader(stream))
            {
                filecontent = await r.ReadToEndAsync();
            }
        }
    });
}

public async Task b4LoadDataStorageFileThread()
{
    StorageFolder data = await ApplicationData.Current.LocalFolder.GetFolderAsync("benchmarks");
    data = await data.GetFolderAsync("samplefiles");
    await await Task.Factory.StartNew(async () =>
    {
        for (int i = 0; i < filepaths.Count; i++)
        {
            StorageFile f = await data.GetFileAsync(filepaths[i]);
            using (var stream = await f.OpenStreamForReadAsync())
            using (StreamReader r = new StreamReader(stream))
            {
                filecontent = r.ReadToEnd();
            }
        }
    });
}

public async Task b5LoadDataStorageFile()
{
    StorageFolder data = await ApplicationData.Current.LocalFolder.GetFolderAsync("benchmarks");
    data = await data.GetFolderAsync("samplefiles");
    //b5
    for (int i = 0; i < filepaths.Count; i++)
    {
        StorageFile f = await data.GetFileAsync(filepaths[i]);
        using (var stream = await f.OpenStreamForReadAsync())
        using (StreamReader r = new StreamReader(stream))
        {
            filecontent = r.ReadToEnd();
        }
    }
}

public async Task b6LoadDataIsolatedStorageThread()
{
    using (var store = IsolatedStorageFile.GetUserStoreForApplication())
    {
        await Task.Factory.StartNew(() =>
        {
            for (int i = 0; i < filepaths.Count; i++)
            {
                using (var stream = new IsolatedStorageFileStream("/benchmarks/samplefiles/" + filepaths[i], FileMode.Open, store))
                using (StreamReader r = new StreamReader(stream))
                {
                    filecontent = r.ReadToEnd();
                }
            }
        });
    }
}

public async Task b7LoadDataIsolatedStorageAsync()
{
    using (var store = IsolatedStorageFile.GetUserStoreForApplication())
    {
        for (int i = 0; i < filepaths.Count; i++)
        {
            using (var stream = new IsolatedStorageFileStream("/benchmarks/samplefiles/" + filepaths[i], FileMode.Open, store))
            using (StreamReader r = new StreamReader(stream))
            {
                filecontent = await r.ReadToEndAsync();
            }
        }
    }
}

public async Task b8LoadDataIsolatedStorageAsyncThread()
{
    using (var store = IsolatedStorageFile.GetUserStoreForApplication())
    {
        await await Task.Factory.StartNew(async () =>
        {
            for (int i = 0; i < filepaths.Count; i++)
            {
                using (var stream = new IsolatedStorageFileStream("/benchmarks/samplefiles/" + filepaths[i], FileMode.Open, store))
                using (StreamReader r = new StreamReader(stream))
                {
                    filecontent = await r.ReadToEndAsync();
                }
            }
        });
    }
}

public async Task b9LoadDataStorageFileAsyncMy9()
{
    StorageFolder data = await ApplicationData.Current.LocalFolder.GetFolderAsync("benchmarks");
    data = await data.GetFolderAsync("samplefiles");
    for (int i = 0; i < filepaths.Count; i++)
    {
        StorageFile f = await data.GetFileAsync(filepaths[i]);
        using (var stream = await f.OpenStreamForReadAsync())
        using (StreamReader r = new StreamReader(stream))
        {
            filecontent = await Task.Factory.StartNew<String>(() => { return r.ReadToEnd(); });
        }
    }
}

public async Task b10LoadDataIsolatedStorageAsyncMy10()
{
    using (var store = IsolatedStorageFile.GetUserStoreForApplication())
    {
        //b10
        for (int i = 0; i < filepaths.Count; i++)
        {
            using (var stream = new IsolatedStorageFileStream("/benchmarks/samplefiles/" + filepaths[i], FileMode.Open, store))
            using (StreamReader r = new StreamReader(stream))
            {
                filecontent = await Task.Factory.StartNew<String>(() => { return r.ReadToEnd(); });
            }
        }
    }
}

public async Task b11LoadDataStorageFileAsyncMy11()
{
    StorageFolder data = await ApplicationData.Current.LocalFolder.GetFolderAsync("benchmarks");
    data = await data.GetFolderAsync("samplefiles");
    for (int i = 0; i < filepaths.Count; i++)
    {
        await await Task.Factory.StartNew(async () =>
        {
            StorageFile f = await data.GetFileAsync(filepaths[i]);
            using (var stream = await f.OpenStreamForReadAsync())
            using (StreamReader r = new StreamReader(stream))
            {
                filecontent = r.ReadToEnd();
            }
        });
    }
}

public async Task b12LoadDataIsolatedStorageMy12()
{
    using (var store = IsolatedStorageFile.GetUserStoreForApplication())
    {
        for (int i = 0; i < filepaths.Count; i++)
        {
            await Task.Factory.StartNew(() =>
            {
                using (var stream = new IsolatedStorageFileStream("/benchmarks/samplefiles/" + filepaths[i], FileMode.Open, store))
                using (StreamReader r = new StreamReader(stream))
                {
                    filecontent = r.ReadToEnd();
                }
            });
        }
    }
}

public async Task b13LoadDataStorageFileParallel13()
{
    StorageFolder data = await ApplicationData.Current.LocalFolder.GetFolderAsync("benchmarks");
    data = await data.GetFolderAsync("samplefiles");
    List<Task> tasks = new List<Task>();
    for (int i = 0; i < filepaths.Count; i++)
    {
        int index = i;
        var task = await Task.Factory.StartNew(async () =>
        {
            StorageFile f = await data.GetFileAsync(filepaths[index]);
            using (var stream = await f.OpenStreamForReadAsync())
            using (StreamReader r = new StreamReader(stream))
            {
                String content = r.ReadToEnd();
                if (content.Length == 0)
                {
                    // only here so "content" is used and the read is not
                    // removed by compiler optimization; should never be called
                    ShowNotificationText(content);
                }
            }
        });
        tasks.Add(task);
    }
    await TaskEx.WhenAll(tasks);
}

public async Task b14LoadDataIsolatedStorageParallel14()
{
    List<Task> tasks = new List<Task>();
    using (var store = IsolatedStorageFile.GetUserStoreForApplication())
    {
        for (int i = 0; i < filepaths.Count; i++)
        {
            int index = i;
            var t = Task.Factory.StartNew(() =>
            {
                using (var stream = new IsolatedStorageFileStream("/benchmarks/samplefiles/" + filepaths[index], FileMode.Open, store))
                using (StreamReader r = new StreamReader(stream))
                {
                    String content = r.ReadToEnd();
                    if (content.Length == 0)
                    {
                        ShowNotificationText(content); // should never be called
                    }
                }
            });
            tasks.Add(t);
        }
        await TaskEx.WhenAll(tasks);
    }
}

public async Task b15LoadDataStorageFileParallelThread15()
{
    StorageFolder data = await ApplicationData.Current.LocalFolder.GetFolderAsync("benchmarks");
    data = await data.GetFolderAsync("samplefiles");
    await await Task.Factory.StartNew(async () =>
    {
        List<Task> tasks = new List<Task>();
        for (int i = 0; i < filepaths.Count; i++)
        {
            int index = i;
            var task = await Task.Factory.StartNew(async () =>
            {
                StorageFile f = await data.GetFileAsync(filepaths[index]);
                using (var stream = await f.OpenStreamForReadAsync())
                using (StreamReader r = new StreamReader(stream))
                {
                    String content = r.ReadToEnd();
                    if (content.Length == 0)
                    {
                        ShowNotificationText(content); // should never be called
                    }
                }
            });
            tasks.Add(task);
        }
        await TaskEx.WhenAll(tasks);
    });
}

public async Task b16LoadDataIsolatedStorageParallelThread16()
{
    await await Task.Factory.StartNew(async () =>
    {
        List<Task> tasks = new List<Task>();
        using (var store = IsolatedStorageFile.GetUserStoreForApplication())
        {
            for (int i = 0; i < filepaths.Count; i++)
            {
                int index = i;
                var t = Task.Factory.StartNew(() =>
                {
                    using (var stream = new IsolatedStorageFileStream("/benchmarks/samplefiles/" + filepaths[index], FileMode.Open, store))
                    using (StreamReader r = new StreamReader(stream))
                    {
                        String content = r.ReadToEnd();
                        if (content.Length == 0)
                        {
                            ShowNotificationText(content); // should never be called
                        }
                    }
                });
                tasks.Add(t);
            }
            await TaskEx.WhenAll(tasks);
        }
    });
}

public async Task b17LoadDataStorageFileParallel17()
{
    StorageFolder data = await ApplicationData.Current.LocalFolder.GetFolderAsync("benchmarks");
    data = await data.GetFolderAsync("samplefiles");
    List<Task<Task>> tasks = new List<Task<Task>>();
    for (int i = 0; i < filepaths.Count; i++)
    {
        int index = i;
        var task = Task.Factory.StartNew<Task>(async () =>
        {
            StorageFile f = await data.GetFileAsync(filepaths[index]);
            using (var stream = await f.OpenStreamForReadAsync())
            using (StreamReader r = new StreamReader(stream))
            {
                String content = r.ReadToEnd();
                if (content.Length == 0)
                {
                    ShowNotificationText(content); // should never be called
                }
            }
        });
        tasks.Add(task);
    }
    await TaskEx.WhenAll(tasks);
    List<Task> tasks2 = new List<Task>();
    foreach (var item in tasks)
    {
        tasks2.Add(item.Result);
    }
    await TaskEx.WhenAll(tasks2);
}

public async Task b18LoadDataStorageFileParallelThread18()
{
    StorageFolder data = await ApplicationData.Current.LocalFolder.GetFolderAsync("benchmarks");
    data = await data.GetFolderAsync("samplefiles");
    await await Task.Factory.StartNew(async () =>
    {
        List<Task<Task>> tasks = new List<Task<Task>>();
        for (int i = 0; i < filepaths.Count; i++)
        {
            int index = i;
            var task = Task.Factory.StartNew<Task>(async () =>
            {
                StorageFile f = await data.GetFileAsync(filepaths[index]);
                using (var stream = await f.OpenStreamForReadAsync())
                using (StreamReader r = new StreamReader(stream))
                {
                    String content = r.ReadToEnd();
                    if (content.Length == 0)
                    {
                        ShowNotificationText(content); // should never be called
                    }
                }
            });
            tasks.Add(task);
        }
        await TaskEx.WhenAll(tasks);
        List<Task> tasks2 = new List<Task>();
        foreach (var item in tasks)
        {
            tasks2.Add(item.Result);
        }
        await TaskEx.WhenAll(tasks2);
    });
}

public async Task b19LoadDataIsolatedStorageAsyncMyThread()
{
    using (var store = IsolatedStorageFile.GetUserStoreForApplication())
    {
        //b19
        await await Task.Factory.StartNew(async () =>
        {
            for (int i = 0; i < filepaths.Count; i++)
            {
                using (var stream = new IsolatedStorageFileStream("/benchmarks/samplefiles/" + filepaths[i], FileMode.Open, store))
                using (StreamReader r = new StreamReader(stream))
                {
                    filecontent = await Task.Factory.StartNew<String>(() => { return r.ReadToEnd(); });
                }
            }
        });
    }
}

public async Task b20LoadDataIsolatedStorageAsyncMyConfigure()
{
    using (var store = IsolatedStorageFile.GetUserStoreForApplication())
    {
        for (int i = 0; i < filepaths.Count; i++)
        {
            using (var stream = new IsolatedStorageFileStream("/benchmarks/samplefiles/" + filepaths[i], FileMode.Open, store))
            using (StreamReader r = new StreamReader(stream))
            {
                filecontent = await Task.Factory.StartNew<String>(() => { return r.ReadToEnd(); }).ConfigureAwait(false);
            }
        }
    }
}

public async Task b21LoadDataIsolatedStorageAsyncMyThreadConfigure()
{
    using (var store = IsolatedStorageFile.GetUserStoreForApplication())
    {
        await await Task.Factory.StartNew(async () =>
        {
            for (int i = 0; i < filepaths.Count; i++)
            {
                using (var stream = new IsolatedStorageFileStream("/benchmarks/samplefiles/" + filepaths[i], FileMode.Open, store))
                using (StreamReader r = new StreamReader(stream))
                {
                    filecontent = await Task.Factory.StartNew<String>(() => { return r.ReadToEnd(); }).ConfigureAwait(false);
                }
            }
        });
    }
}

public async Task b22LoadDataOwnReadFileMethod()
{
    await await Task.Factory.StartNew(async () =>
    {
        for (int i = 0; i < filepaths.Count; i++)
        {
            filecontent = await ReadFile("/benchmarks/samplefiles/" + filepaths[i]);
        }
    });
}

public async Task b23LoadDataOwnReadFileMethodParallel()
{
    List<Task> tasks = new List<Task>();
    for (int i = 0; i < filepaths.Count; i++)
    {
        var t = ReadFile("/benchmarks/samplefiles/" + filepaths[i]);
        tasks.Add(t);
    }
    await TaskEx.WhenAll(tasks);
}

public async Task b24LoadDataOwnReadFileMethodParallelThread()
{
    await await Task.Factory.StartNew(async () =>
    {
        List<Task> tasks = new List<Task>();
        for (int i = 0; i < filepaths.Count; i++)
        {
            var t = ReadFile("/benchmarks/samplefiles/" + filepaths[i]);
            tasks.Add(t);
        }
        await TaskEx.WhenAll(tasks);
    });
}

public async Task b25LoadDataOwnReadFileMethodStorageFile()
{
    StorageFolder data = await ApplicationData.Current.LocalFolder.GetFolderAsync("benchmarks");
    data = await data.GetFolderAsync("samplefiles");
    await await Task.Factory.StartNew(async () =>
    {
        for (int i = 0; i < filepaths.Count; i++)
        {
            filecontent = await ReadStorageFile(data, filepaths[i]);
        }
    });
}

public async Task b26LoadDataOwnReadFileMethodParallelStorageFile()
{
    StorageFolder data = await ApplicationData.Current.LocalFolder.GetFolderAsync("benchmarks");
    data = await data.GetFolderAsync("samplefiles");
    List<Task> tasks = new List<Task>();
    for (int i = 0; i < filepaths.Count; i++)
    {
        var t = ReadStorageFile(data, filepaths[i]);
        tasks.Add(t);
    }
    await TaskEx.WhenAll(tasks);
}

public async Task b27LoadDataOwnReadFileMethodParallelThreadStorageFile()
{
    StorageFolder data = await ApplicationData.Current.LocalFolder.GetFolderAsync("benchmarks");
    data = await data.GetFolderAsync("samplefiles");
    await await Task.Factory.StartNew(async () =>
    {
        List<Task> tasks = new List<Task>();
        for (int i = 0; i < filepaths.Count; i++)
        {
            var t = ReadStorageFile(data, filepaths[i]);
            tasks.Add(t);
        }
        await TaskEx.WhenAll(tasks);
    });
}

public async Task b28LoadDataOwnReadFileMethodStorageFile()
{
    //StorageFolder data = await ApplicationData.Current.LocalFolder.GetFolderAsync("benchmarks");
    //data = await data.GetFolderAsync("samplefiles");
    await await Task.Factory.StartNew(async () =>
    {
        for (int i = 0; i < filepaths.Count; i++)
        {
            filecontent = await ReadStorageFile(ApplicationData.Current.LocalFolder, @"benchmarks\samplefiles\" + filepaths[i]);
        }
    });
}

public async Task<String> ReadStorageFile(StorageFolder folder, String filename)
{
    return await await Task.Factory.StartNew<Task<String>>(async () =>
    {
        String filec = "";
        StorageFile f = await folder.GetFileAsync(filename);
        using (var stream = await f.OpenStreamForReadAsync())
        using (StreamReader r = new StreamReader(stream))
        {
            filec = await r.ReadToEndAsyncThread();
        }
        return filec;
    });
}

public async Task<String> ReadFile(String filepath)
{
    return await await Task.Factory.StartNew<Task<String>>(async () =>
    {
        String filec = "";
        using (var store = IsolatedStorageFile.GetUserStoreForApplication())
        using (var stream = new IsolatedStorageFileStream(filepath, FileMode.Open, store))
        using (StreamReader r = new StreamReader(stream))
        {
            filec = await r.ReadToEndAsyncThread();
        }
        return filec;
    });
}
```
How these benchmarks are run:

```csharp
public async Task RunBenchmark(String message, Func<Task> benchmarkmethod)
{
    SystemTray.ProgressIndicator.IsVisible = true;
    SystemTray.ProgressIndicator.Text = message;
    SystemTray.ProgressIndicator.Value = 0;
    long milliseconds = 0;
    Stopwatch w = new Stopwatch();
    List<long> results = new List<long>(benchmarkruns);
    for (int i = 0; i < benchmarkruns; i++)
    {
        w.Reset();
        w.Start();
        await benchmarkmethod();
        w.Stop();
        milliseconds += w.ElapsedMilliseconds;
        results.Add(w.ElapsedMilliseconds);
        SystemTray.ProgressIndicator.Value += (double)1 / (double)benchmarkruns;
    }
    Log.Write("Fastest: " + results.Min(),
        "Slowest: " + results.Max(),
        "Average: " + results.Average(),
        // note: this indexes the middle run in execution order; for a
        // true median the results list would have to be sorted first
        "Median: " + results[results.Count / 2],
        "Maxdifference: " + (results.Max() - results.Min()),
        "All results: " + results);
    ShowNotificationText((message + ":").PadRight(24) + (milliseconds / ((double)benchmarkruns)).ToString());
    SystemTray.ProgressIndicator.IsVisible = false;
}
```

Benchmark results

Here is a link to the raw benchmark data: http://www.dehodev.com/windowsphonebenchmarks.xlsx

Now the graphs (every graph shows the data for loading 50 files via each method; results are all in milliseconds).

The next benchmarks, with 1 MB files, are not really representative for apps. I include them here to give a better overview of how these methods scale.

So to sum it all up: the standard method used to read files (1.) is always the worst (except in the case where you want to read 50 10 MB files, but even then there are better methods).

I'm also linking this: await AsyncMethod() versus await await Task.Factory.StartNew&lt;TResult&gt;(AsyncMethod), where it is argued that normally it is not useful to add a new task.
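One detail worth flagging in the RunBenchmark harness above: `results[results.Count / 2]` only yields the median if the list has been sorted first; as written it logs the middle run in execution order. A minimal sketch of the intended summary statistics, in Python for brevity (the timings below are made up for illustration):

```python
# Summary statistics like the ones RunBenchmark logs, but with a correct
# median: sort the run list first, and for an even count average the two
# middle elements.
def summarize(runs):
    s = sorted(runs)
    n = len(s)
    median = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    return {
        "fastest": s[0],
        "slowest": s[-1],
        "average": sum(s) / n,
        "median": median,
        "maxdifference": s[-1] - s[0],
    }

# Hypothetical timings (ms) for 5 runs of one benchmark method:
stats = summarize([1553, 1602, 1489, 2100, 1570])
print(stats["median"])   # 1570: the middle of the sorted runs
print(stats["average"])  # 1662.8: pulled up by the 2100 ms outlier
```

The median/average contrast matters here because the author later switches to the median precisely to dampen outlier runs.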
However, the results I'm seeing here show that you just can't assume that, and you should always check whether adding a task improves performance.

And last: I wanted to post this in the official Windows Phone developer forum, but every time I try, I get an "Unexpected Error" message...

Review

After reviewing the data you can clearly see that, no matter the file size, every algorithm scales linearly with the number of files. So to simplify everything we can ignore the number of files (we will just use the data for 50 files in future comparisons).

Now on to file size: file size matters. We can see that when we increase the file size the algorithms begin to converge. At a 10 MB file size the previously slowest algorithm moves up to place 4 of 8. However, because this question primarily deals with phones, it's incredibly rare that apps will read multiple files with this much data; even 1 MB files will be rare for most apps. My guess is that even reading 50 20 kB files is uncommon. Most apps are probably reading data in the range of 10 to 30 files, each 0.5 kB to 3 kB in size. (This is only a guess, but I think it might be accurate.)
Recommended answer

This will be a long answer that includes answers to all my questions, and recommendations on what methods to use. This answer is also not yet finished, but after having 5 pages in Word already, I thought I'd post the first part now.

After running over 2160 benchmarks, comparing and analyzing the gathered data, I'm pretty sure I can answer my own questions and provide additional insights on how to get the best possible performance for StorageFile (and IsolatedStorageFile). (For raw results and all benchmark methods, see the question.)

1. Why is await StreamReader.ReadToEndAsync() consistently slower in every benchmark than the non-async method StreamReader.ReadToEnd()?

Neil Turner wrote in the comments: "awaiting in a loop will cause a slight perf hit due to the constant context switching back and forth". I expected a slight performance hit, but we both didn't think it would cause such a big drop in every benchmark with awaits. Let's analyze the performance hit of awaits in a loop.

For this we first compare the results of benchmarks b1 and b5 (and b2 as an unrelated best-case comparison). Here are the important parts of the two methods:

```csharp
//b1
for (int i = 0; i < filepaths.Count; i++)
{
    StorageFile f = await data.GetFileAsync(filepaths[i]);
    using (var stream = await f.OpenStreamForReadAsync())
    using (StreamReader r = new StreamReader(stream))
    {
        filecontent = await r.ReadToEndAsync();
    }
}

//b5
for (int i = 0; i < filepaths.Count; i++)
{
    StorageFile f = await data.GetFileAsync(filepaths[i]);
    using (var stream = await f.OpenStreamForReadAsync())
    using (StreamReader r = new StreamReader(stream))
    {
        filecontent = r.ReadToEnd();
    }
}
```

Benchmark results:

50 files, 100 kB:
B1: 2651 ms
B5: 1553 ms
B2: 147 ms

200 files, 1 kB:
B1: 9984 ms
B5: 6572 ms
B2: 87 ms

In both scenarios B5 takes roughly 2/3 of the time B1 takes, with only 2 awaits per loop iteration vs 3 awaits in B1. It seems that the actual loading in both b1 and b5 might be about the same as in b2, and only the awaits cause the huge drop in performance (probably because of context switching) (assumption 1).
It seems that the actual loading of both b1 and b5 might be about the same as in b2 and only the awaits cause the huge drop in performance (probably because of context switching) (assumption 1).Let’s try to calculate how long one context switch takes (with b1) and then check if assumption 1 was correct.Let’s try to calculate how long one context switch takes (with b1) and then check if assumption 1 was correct.With 50 files and 3 awaits, we have 150 context switches: (2651ms-147ms)/150 = 16.7ms for one context switch. Can we confirm this? :With 50 files and 3 awaits, we have 150 context switches: (2651ms-147ms)/150 = 16.7ms for one context switch. Can we confirm this? :B5, 50 files: 16.7ms * 50 * 2 = 1670ms + 147ms = 1817ms vs benchmarks results: 1553msB5, 50 files: 16.7ms * 50 * 2 = 1670ms + 147ms = 1817ms vs benchmarks results: 1553msB1, 200 files: 16.7ms * 200 * 3 = 10020ms + 87ms = 10107ms vs 9984msB1, 200 files: 16.7ms * 200 * 3 = 10020ms + 87ms = 10107ms vs 9984msB5, 200 files: 16.7ms * 200 * 2 = 6680ms + 87ms = 6767ms vs 6572msB5, 200 files: 16.7ms * 200 * 2 = 6680ms + 87ms = 6767ms vs 6572msSeems pretty promising with only relative small differences that could be attributed to a margin of error in the benchmark results.Seems pretty promising with only relative small differences that could be attributed to a margin of error in the benchmark results.Benchmark (awaits, files): Calculation vs Benchmark resultsBenchmark (awaits, files): Calculation vs Benchmark resultsB7 (1 await, 50 files): 16.7ms*50 + 147= 982ms vs 899msB7 (1 await, 50 files): 16.7ms*50 + 147= 982ms vs 899msB7 (1 await, 200 files): 16.7*200+87 = 3427ms vs 3354msB7 (1 await, 200 files): 16.7*200+87 = 3427ms vs 3354msB12 (1 await, 50 files): 982ms vs 897msB12 (1 await, 50 files): 982ms vs 897msB12 (1 await, 200 files): 3427ms vs 3348msB12 (1 await, 200 files): 3427ms vs 3348msB9 (3 awaits, 50 files): 2652ms vs 2526msB9 (3 awaits, 50 files): 2652ms vs 2526msB9 (3 awaits, 200 files): 10107ms vs 
10014msB9 (3 awaits, 200 files): 10107ms vs 10014msI think with this results it is safe to say, one context switch takes about 16.7ms (at least in a loop).With this cleared up, some of the benchmark results make much more sense. In benchmarks with 3 awaits, we mostly see only a 0.1% difference in results of different file sizes (1, 20, 100). Which is about the absolute difference we can observe in our reference benchmark b2.With this cleared up, some of the benchmark results make much more sense. In benchmarks with 3 awaits, we mostly see only a 0.1% difference in results of different file sizes (1, 20, 100). Which is about the absolute difference we can observe in our reference benchmark b2.Conclusion: awaits in loops are really really bad (if the loop is executed in the ui thread, but I will come to that later) There seems to be a big overhead when opening a file with StorageFile, but only when it is opened in the UI thread. (Why?) There seems to be a big overhead when opening a file with StorageFile, but only when it is opened in the UI thread. 
(Why?)Let’s look at benchmark 10 and 19:Let’s look at benchmark 10 and 19://b10for (int i = 0; i < filepaths.Count; i++){ using (var stream = new IsolatedStorageFileStream("/benchmarks/samplefiles/" + filepaths[i], FileMode.Open, store)) { using (StreamReader r = new StreamReader(stream)) { filecontent = await Task.Factory.StartNew<String>(() => { return r.ReadToEnd(); }); } }}//b19await await Task.Factory.StartNew(async () =>{ for (int i = 0; i < filepaths.Count; i++) { using (var stream = new IsolatedStorageFileStream("/benchmarks/samplefiles/" + filepaths[i], FileMode.Open, store)) { using (StreamReader r = new StreamReader(stream)) { filecontent = await Task.Factory.StartNew<String>(() => { return r.ReadToEnd(); }); } } }});Benchmarks (1kb, 20kb, 100kb, 1mb) in ms:Benchmarks (1kb, 20kb, 100kb, 1mb) in ms:10: (846, 865, 916, 1564)10: (846, 865, 916, 1564)19: (35, 57, 166, 1438)19: (35, 57, 166, 1438)In benchmark 10, we again see a huge performance hit with the context switching. However, when we execute the for loop in a different thread (b19), we get almost the same performance as with our reference benchmark 2 (Ui blocking IsolatedStorageFile). Theoretically there should still be context switches (at least to my knowledge). I suspect that the compiler optimizes the code in this situation that there are no context switches.In benchmark 10, we again see a huge performance hit with the context switching. However, when we execute the for loop in a different thread (b19), we get almost the same performance as with our reference benchmark 2 (Ui blocking IsolatedStorageFile). Theoretically there should still be context switches (at least to my knowledge). 
I suspect that the compiler optimizes the code in this situation that there are no context switches.As a matter of fact, we get nearly the same performance, as in benchmark 20, which is basically the same as benchmark 10 but with a ConfigureAwait(false):As a matter of fact, we get nearly the same performance, as in benchmark 20, which is basically the same as benchmark 10 but with a ConfigureAwait(false):filecontent = await Task.Factory.StartNew<String>(() => { return r.ReadToEnd(); }).ConfigureAwait(false);20: (36, 55, 168, 1435)20: (36, 55, 168, 1435)This seems to be the case not only for new Tasks, but for every async method (well at least for all that I tested)This seems to be the case not only for new Tasks, but for every async method (well at least for all that I tested)So the answer to this question is combination of answer one and what we just found out:So the answer to this question is combination of answer one and what we just found out:The big overhead is because of the context switches, but in a different thread either no context switches occur or there is no overhead caused by them. (Of course this is not only true for opening a file as was asked in the question but for every async method)The big overhead is because of the context switches, but in a different thread either no context switches occur or there is no overhead caused by them. 
(Of course this is not only true for opening a file as was asked in the question but for every async method)Question 3 can’t really be fully answered there can always be ways that might be a little bit faster in specific conditions but we can at least tell that some methods should never be used and find the best solution for the most common cases from the data I gathered:Question 3 can’t really be fully answered there can always be ways that might be a little bit faster in specific conditions but we can at least tell that some methods should never be used and find the best solution for the most common cases from the data I gathered:Let’s first take a look at StreamReader.ReadToEndAsync and alternatives. For that, we can compare benchmark 7 and benchmark 10Let’s first take a look at StreamReader.ReadToEndAsync and alternatives. For that, we can compare benchmark 7 and benchmark 10They only differ in one line:They only differ in one line:b7:filecontent = await r.ReadToEndAsync();b10:filecontent = await Task.Factory.StartNew<String>(() => { return r.ReadToEnd(); });You might think that they would perform similarly good or bad and you would be wrong (at least in some cases).You might think that they would perform similarly good or bad and you would be wrong (at least in some cases).When I first thought of doing this test, I thought that ReadToEndAsync() would be implemented that way.When I first thought of doing this test, I thought that ReadToEndAsync() would be implemented that way.Benchmarks:Benchmarks:b7: (848, 853, 899, 3386)b7: (848, 853, 899, 3386)b10: (846, 865, 916, 1564)b10: (846, 865, 916, 1564)We can clearly see that in the case where most of the time is spent reading the file, the second method is way faster.We can clearly see that in the case where most of the time is spent reading the file, the second method is way faster.My recommendation:My recommendation:Don’t use ReadToEndAsync() but write yourself an extension method like this:Don’t use 
public static async Task<String> ReadToEndAsyncThread(this StreamReader reader)
{
    return await Task.Factory.StartNew<String>(() => { return reader.ReadToEnd(); });
}

Always use this instead of ReadToEndAsync(). You can see this even more clearly when comparing benchmarks 8 and 19 (which are benchmarks 7 and 10 with the for loop executed in a different thread):

b8: (55, 103, 360, 3252)
b19: (35, 57, 166, 1438)
b6: (35, 55, 163, 1374)

In both cases there is no overhead from context switching, and you can clearly see that the performance of ReadToEndAsync() is absolutely terrible. (Benchmark 6 is also nearly identical to 8 and 19, but with filecontent = r.ReadToEnd(); it also scales to 10 files of 10 MB.)

If we compare this to our reference UI-blocking method:

b2: (21, 44, 147, 1365)

we can see that both benchmark 6 and 19 come very close to the same performance without blocking the UI thread. Can we improve the performance even more? Yes, but only marginally, with parallel loading:
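Outside the phone APIs, the extension works on any stream. Here is a minimal, runnable sketch of the extension with a usage example; the MemoryStream is a stand-in for the IsolatedStorageFileStream used in the benchmarks.

```csharp
using System;
using System.IO;
using System.Text;
using System.Threading.Tasks;

static class StreamReaderExtensions
{
    // Extension method from the answer: run the blocking ReadToEnd on the
    // thread pool instead of using the (much slower) built-in ReadToEndAsync.
    public static async Task<String> ReadToEndAsyncThread(this StreamReader reader)
    {
        return await Task.Factory.StartNew<String>(() => { return reader.ReadToEnd(); });
    }
}
```

Usage is identical to ReadToEndAsync(): `filecontent = await r.ReadToEndAsyncThread();`.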
b14: (36, 45, 133, 1074)
b16: (31, 52, 141, 1086)

However, if you look at these methods, they are not very pretty, and repeating that code everywhere you have to load something would be bad design. For that I wrote the method ReadFile(string filepath), which can be used for single files, in normal loops with one await, and in loops with parallel loading. It should give really good performance and result in easily reusable and maintainable code:

public async Task<String> ReadFile(String filepath)
{
    return await await Task.Factory.StartNew<Task<String>>(async () =>
    {
        String filec = "";
        using (var store = IsolatedStorageFile.GetUserStoreForApplication())
        {
            using (var stream = new IsolatedStorageFileStream(filepath, FileMode.Open, store))
            {
                using (StreamReader r = new StreamReader(stream))
                {
                    filec = await r.ReadToEndAsyncThread();
                }
            }
        }
        return filec;
    });
}

Here are some benchmarks, compared with benchmark 16 (for these I did a separate benchmark run and took the MEDIAN, not the average, of 100 runs of each method):

b16: (16, 32, 122, 1197)
b22: (59, 81, 219, 1516)
b23: (50, 48, 160, 1015)
b24: (34, 50, 87, 1002)

(The median in all of these methods is very close to the average, with the average sometimes being a little bit slower, sometimes faster. The data should be comparable.)
(Please note that even though the values are the median of 100 runs, the data in the range of 0-100 ms is not really comparable. E.g. in the first 100 runs, benchmark 24 had a median of 1002 ms; in the second 100 runs, 899 ms.)

Benchmark 22 is comparable with benchmark 19. Benchmarks 23 and 24 are comparable with benchmarks 14 and 16.

Ok, this should be about one of the best ways to read files when IsolatedStorageFile is available. I'll add a similar analysis for StorageFile for situations where you only have StorageFile available (sharing code with Windows 8 apps). And because I'm interested in how StorageFile performs on Windows 8, I'll probably test all StorageFile methods on my Windows 8 machine too (though for that I'm probably not going to write an analysis).
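The "loops with parallel loading" use of ReadFile mentioned above can be sketched generically. This is an illustration under an assumption: `readFile` here is any `String -> Task<String>` loader (for example the ReadFile helper from the answer on the phone, or any other file reader elsewhere), so the sketch stays runnable off-device.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

static class ParallelLoadDemo
{
    // Start every read first, then await them all together -- the pattern the
    // parallel benchmarks (b14/b16/b24) use instead of awaiting inside the loop.
    public static async Task<string[]> LoadAllAsync(
        IEnumerable<string> paths, Func<string, Task<string>> readFile)
    {
        var tasks = new List<Task<string>>();
        foreach (var p in paths)
            tasks.Add(readFile(p));       // kick off all reads without awaiting
        return await Task.WhenAll(tasks); // await them together, order preserved
    }
}
```

Awaiting inside the loop would serialize the reads; collecting the tasks first lets the I/O overlap, which is where the marginal improvement over the one-await-per-file loop comes from.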