This article looks at why a UNC path that points to a local directory is much slower than accessing the same directory locally, and at what can be done about it.

Problem Description



Some code I'm working with occasionally needs to refer to long UNC paths (e.g. \\?\UNC\MachineName\Path), but we've discovered that no matter where the directory is located, even on the same machine, it's much slower when accessing through the UNC path than the local path.

For example, we've written some benchmarking code that writes a string of gibberish to a file, then later reads it back, multiple times. I'm testing it with 6 different ways to access the same shared directory on my dev machine, with the code running on the same machine:

  • C:\Temp
  • \\MachineName\Temp
  • \\?\C:\Temp
  • \\?\UNC\MachineName\Temp
  • \\127.0.0.1\Temp
  • \\?\UNC\127.0.0.1\Temp

And here are the results:

Testing: C:\Temp
Wrote 1000 files to C:\Temp in 861.0647 ms
Read 1000 files from C:\Temp in 60.0744 ms
Testing: \\MachineName\Temp
Wrote 1000 files to \\MachineName\Temp in 2270.2051 ms
Read 1000 files from \\MachineName\Temp in 1655.0815 ms
Testing: \\?\C:\Temp
Wrote 1000 files to \\?\C:\Temp in 916.0596 ms
Read 1000 files from \\?\C:\Temp in 60.0517 ms
Testing: \\?\UNC\MachineName\Temp
Wrote 1000 files to \\?\UNC\MachineName\Temp in 2499.3235 ms
Read 1000 files from \\?\UNC\MachineName\Temp in 1684.2291 ms
Testing: \\127.0.0.1\Temp
Wrote 1000 files to \\127.0.0.1\Temp in 2516.2847 ms
Read 1000 files from \\127.0.0.1\Temp in 1721.1925 ms
Testing: \\?\UNC\127.0.0.1\Temp
Wrote 1000 files to \\?\UNC\127.0.0.1\Temp in 2499.3211 ms
Read 1000 files from \\?\UNC\127.0.0.1\Temp in 1678.18 ms

I tried the IP address to rule out a DNS issue. Could it be checking credentials or permissions on each file access? If so, is there a way to cache it? Does it just assume since it's a UNC path that it should do everything over TCP/IP instead of directly accessing the disk? Is it something wrong with the code we're using for the reads/writes? I've ripped out the pertinent parts for benchmarking, seen below:

using System;
using System.Collections.Generic;
using System.IO;
using System.Runtime.InteropServices;
using System.Text;
using Microsoft.Win32.SafeHandles;
using Util.FileSystem;

namespace UNCWriteTest {
    internal class Program {
        [DllImport("kernel32.dll", CharSet = CharSet.Auto, SetLastError = true)]
        public static extern bool DeleteFile(string path); // File.Delete doesn't handle \\?\UNC\ paths

        private const int N = 1000;

        private const string TextToSerialize =
            "asd;lgviajsmfopajwf0923p84jtmpq93worjgfq0394jktp9orgjawefuogahejngfmliqwegfnailsjdhfmasodfhnasjldgifvsdkuhjsmdofasldhjfasolfgiasngouahfmp9284jfqp92384fhjwp90c8jkp04jk34pofj4eo9aWIUEgjaoswdfg8jmp409c8jmwoeifulhnjq34lotgfhnq34g";

        private static readonly byte[] _Buffer = Encoding.UTF8.GetBytes(TextToSerialize);

        public static string WriteFile(string basedir) {
            string fileName = Path.Combine(basedir, string.Format("{0}.tmp", Guid.NewGuid()));

            try {
                IntPtr writeHandle = NativeFileHandler.CreateFile(
                    fileName,
                    NativeFileHandler.EFileAccess.GenericWrite,
                    NativeFileHandler.EFileShare.None,
                    IntPtr.Zero,
                    NativeFileHandler.ECreationDisposition.New,
                    NativeFileHandler.EFileAttributes.Normal,
                    IntPtr.Zero);

                // if file was locked
                int fileError = Marshal.GetLastWin32Error();
                if ((fileError == 32 /* ERROR_SHARING_VIOLATION */) || (fileError == 80 /* ERROR_FILE_EXISTS */)) {
                    throw new Exception("oopsy");
                }

                using (var h = new SafeFileHandle(writeHandle, true)) {
                    using (var fs = new FileStream(h, FileAccess.Write, NativeFileHandler.DiskPageSize)) {
                        fs.Write(_Buffer, 0, _Buffer.Length);
                    }
                }
            }
            catch (IOException) {
                throw;
            }
            catch (Exception ex) {
                throw new InvalidOperationException(" code " + Marshal.GetLastWin32Error(), ex);
            }

            return fileName;
        }

        public static void ReadFile(string fileName) {
            var fileHandle =
                new SafeFileHandle(
                    NativeFileHandler.CreateFile(fileName, NativeFileHandler.EFileAccess.GenericRead, NativeFileHandler.EFileShare.Read, IntPtr.Zero,
                                                 NativeFileHandler.ECreationDisposition.OpenExisting, NativeFileHandler.EFileAttributes.Normal, IntPtr.Zero), true);

            using (fileHandle) {
                //check the handle here to get a bit cleaner exception semantics
                if (fileHandle.IsInvalid) {
                    //ms-help://MS.MSSDK.1033/MS.WinSDK.1033/debug/base/system_error_codes__0-499_.htm
                    int errorCode = Marshal.GetLastWin32Error();
                    //now that we've taken more than our allotted share of time, throw the exception
                    throw new IOException(string.Format("file read failed on {0} with error code {1}", fileName, errorCode));
                }

                //we have a valid handle and can actually read a stream, exceptions from serialization bubble out
                using (var fs = new FileStream(fileHandle, FileAccess.Read, 1*NativeFileHandler.DiskPageSize)) {
                    //if serialization fails, we'll just let the normal serialization exception flow out
                    var foo = new byte[256];
                    fs.Read(foo, 0, 256);
                }
            }
        }

        public static string[] TestWrites(string baseDir) {
            try {
                var fileNames = new List<string>();
                DateTime start = DateTime.UtcNow;
                for (int i = 0; i < N; i++) {
                    fileNames.Add(WriteFile(baseDir));
                }
                DateTime end = DateTime.UtcNow;

                Console.Out.WriteLine("Wrote {0} files to {1} in {2} ms", N, baseDir, end.Subtract(start).TotalMilliseconds);
                return fileNames.ToArray();
            }
            catch (Exception e) {
                Console.Out.WriteLine("Failed to write for " + baseDir + " Exception: " + e.Message);
                return new string[] {};
            }
        }

        public static void TestReads(string baseDir, string[] fileNames) {
            try {
                DateTime start = DateTime.UtcNow;

                for (int i = 0; i < N; i++) {
                    ReadFile(fileNames[i%fileNames.Length]);
                }
                DateTime end = DateTime.UtcNow;

                Console.Out.WriteLine("Read {0} files from {1} in {2} ms", N, baseDir, end.Subtract(start).TotalMilliseconds);
            }
            catch (Exception e) {
                Console.Out.WriteLine("Failed to read for " + baseDir + " Exception: " + e.Message);
            }
        }

        private static void Main(string[] args) {
            foreach (string baseDir in args) {
                Console.Out.WriteLine("Testing: {0}", baseDir);

                string[] fileNames = TestWrites(baseDir);

                TestReads(baseDir, fileNames);

                foreach (string fileName in fileNames) {
                    DeleteFile(fileName);
                }
            }
        }
    }
}

Solution

This doesn't surprise me. You're writing/reading a fairly small amount of data, so the file system cache is probably minimizing the impact of the physical disk I/O; basically, the bottleneck is going to be the CPU. I'm not certain whether the traffic will be going via the TCP/IP stack or not but at a minimum the SMB protocol is involved. For one thing that means the requests are being passed back and forth between the SMB client process and the SMB server process, so you've got context switching between three distinct processes, including your own. Using the local file system path you're switching into kernel mode and back but no other process is involved. Context switching is much slower than the transition to and from kernel mode.
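
It's worth sanity-checking that the gap isn't an artifact of the P/Invoke wrapper. Below is a minimal sketch using only the managed File APIs; the class and method names are invented for illustration, the payload size only approximates TextToSerialize, and it assumes the plain path forms (C:\Temp, \\MachineName\Temp), since the managed APIs in older .NET Framework versions reject \\?\ prefixes.

using System;
using System.Diagnostics;
using System.IO;
using System.Text;

internal static class ManagedApiCheck {
    // Roughly the same payload size as TextToSerialize in the benchmark above.
    private static readonly byte[] Payload = Encoding.UTF8.GetBytes(new string('x', 230));

    // Writes and reads n small files under baseDir using only managed File APIs,
    // so any local-vs-UNC gap seen here can't be blamed on the CreateFile wrapper.
    public static void Run(string baseDir, int n = 1000) {
        var files = new string[n];

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < n; i++) {
            files[i] = Path.Combine(baseDir, Guid.NewGuid() + ".tmp");
            File.WriteAllBytes(files[i], Payload);
        }
        sw.Stop();
        Console.WriteLine("Wrote {0} files to {1} in {2} ms", n, baseDir, sw.ElapsedMilliseconds);

        sw.Restart();
        for (int i = 0; i < n; i++) {
            File.ReadAllBytes(files[i]);
        }
        sw.Stop();
        Console.WriteLine("Read {0} files from {1} in {2} ms", n, baseDir, sw.ElapsedMilliseconds);

        foreach (string f in files) {
            File.Delete(f);
        }
    }
}

Running something like ManagedApiCheck.Run(@"C:\Temp") and ManagedApiCheck.Run(@"\\MachineName\Temp") back to back should reproduce a similar ratio to the numbers above if the overhead really lives in the SMB round trips rather than in the benchmark code.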

There are likely to be two distinct additional overheads, one per file and one per kilobyte of data. In this particular test the per-file SMB overhead is likely to be dominant. Because the amount of data involved also affects the impact of physical disk I/O, you may find that this is only really a problem when dealing with lots of small files.
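
If the per-file overhead does dominate, amortizing it should help: pay the SMB open/create/close cost once and stream many records into a single file on the share. Here is a rough sketch under that assumption; the class name and record layout are invented for illustration.

using System;
using System.Diagnostics;
using System.IO;
using System.Text;

internal static class BatchedWriteSketch {
    // Writes n copies of the payload into one file instead of n separate files,
    // so the per-file SMB open/create/close round trips are paid only once.
    public static void Run(string baseDir, int n = 1000) {
        byte[] payload = Encoding.UTF8.GetBytes(new string('x', 230));
        string batchFile = Path.Combine(baseDir, Guid.NewGuid() + ".batch");

        var sw = Stopwatch.StartNew();
        using (var fs = new FileStream(batchFile, FileMode.CreateNew, FileAccess.Write)) {
            for (int i = 0; i < n; i++) {
                fs.Write(payload, 0, payload.Length);
            }
        }
        sw.Stop();
        Console.WriteLine("Wrote {0} records into one file at {1} in {2} ms", n, batchFile, sw.ElapsedMilliseconds);
    }
}

Comparing that against the 1000-file runs over \\MachineName\Temp separates the per-file cost from the per-kilobyte cost; if the batched version lands close to the local numbers, the practical fix is fewer round trips (larger files or buffered writes) rather than changing how each individual read or write is coded.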

That wraps up this look at why UNC paths pointing to a local directory are so much slower than local access; hopefully the answer above is helpful.
