pcmpestri character units and counting down - x86-64 asm

I'm trying to write a minimal loop around pcmpestri in x86-64 asm (actually inline asm embedded in D, using the GDC compiler). There are a couple of things that I don't understand:

- If you are using pcmpestri with two pointers to strings, are the lengths of the strings in rax and rdx?
- If so, what are the units? A count in bytes always, or a count in chars, where 1 count = 2 bytes for uwords?
- Does pcmpestri check for short strings? i.e. len str1 or str2 < 16 bytes (or 8 uwords if uwords)?
- Does pcmpestri count rax and rdx down by n per chunk, or do I have to do it? Subtracting either 16 always, or (16 or 8, depending on bytes/uwords)?
- Do I have to worry about 128-bit alignment on the fetch below? I could precheck that the string is 128-bit aligned if that's faster, but it could get really messy. If I use instructions that don't require 128-bit alignment, how much slower will that be? See below.
- Is it slower to use lea %[offset], [ %[offset] - 16 ] before the ja? (Chosen because it doesn't set flags.)
- Worth loop-unrolling? Or a terrible idea?
- What info do I need to pass back to the high-level-language code? rcx I know is one thing; the flags too, or can I forget about them? (In an earlier routine I passed back a boolean via cond 'na', i.e. whether the final ja was not taken.)

One final question: what about passing back the updated offset?

Leaving out the required preamble, I have:

```
; having reserved say xmm1 as a working variable
loop:
    add       %[offset], 16    ; 16 bytes = nbytes of chunk of string
    ; do I need to count the lengths of the strings down? by 16 per chunk, or by (8 or 16) per chunk?
    movdqa    xmm1, [ %[pstr1] + %[offset] - 16 ]       ; -16 to compensate for the pre-add
    pcmpestri xmm1, [ %[pstr2] + %[offset] - 16 ], 0    ; mode = 0, or 1 for uwords
    ja        loop
; what do I do about passing back info to the main code?
; I already pass back rcx = offset-in-chunk; do I need to pass the flags back too?
; (I have reserved rcx by declaring it as an output.)
; what about passing back the value of %[offset]?
; or passing the counted-down lengths?
```
I haven't managed to find examples that feature words rather than bytes.

And for a 1-string usage pattern, where I have reserved say xmm1 as an input-argument xmm reg:

```
loop:
    add       %[offset], 16    ; 16 bytes = nbytes of chunk of string
    pcmpestri xmm1, [ %[pstr1] + %[offset] - 16 ], 0    ; mode = 0, or 1 for uwords
    ja        loop
```

Solution

In your main loop (while the remaining lengths of both input strings are >= 16), use pcmpistri (the implicit-length string version) if you know there are no 0 bytes in your data. pcmpistri is significantly faster and fewer uops on most CPUs, perhaps because it only has 3 inputs (including the immediate) instead of 5. (https://uops.info/)

> Do I have to worry about 128-bit alignment on the fetch?

Yes for movdqa, of course, but surprisingly the SSE4.2 string instructions don't fault on misaligned memory operands! For the legacy-SSE (non-VEX) encoding of all previous instructions (except unaligned movs like movups / movdqu), 16-byte memory operands must be aligned. But Intel's manual notes: "additionally, this instruction does not cause #GP if the memory operand is not aligned to 16 Byte boundary".

Of course you still have to avoid crossing into an unmapped page: e.g. for a 5-byte string that starts 7 bytes before an unmapped page, a 16-byte memory operand will still page-fault. (Is it safe to read past the end of a buffer within the same page on x86 and x64?) I don't see any mention of fault-suppression for the "ignored" part of a memory source operand in Intel's manual, unlike with AVX-512 masked loads.

For explicit-length strings this is easy: you know when you're definitely far from the end of the shorter string, so you can just special-case the last iteration. (And you want to do that anyway so you can use pcmpistri in the main loop.) E.g. do an unaligned load that ends at the last byte of the string if it's at least 16 bytes long, or check (p & 4095) <= (4096 - 16) to avoid a page-crossing load when you're fetching near the end of a string.

So in practice, if both strings have the same relative alignment, you can just handle the unaligned starts of the strings, then get into a loop that uses aligned loads from both (so you can keep using movdqa). Such a loop can't page-split and thus can't fault when loading any aligned vector that contains any string bytes. Relative misalignment is harder.

For performance, note that SSE4.2 is only supported on Nehalem and newer, where movdqu is relatively efficient (as cheap as movdqa if the pointer happens to be aligned at runtime). I think AMD support is similar: not until Bulldozer, which has AVX and cheap unaligned loads. Cache-line splits still hurt some, so if you expect large strings to be common, it may be worth hurting the short-string case and/or the already-aligned case by doing some extra checking.

Maybe have a look at what glibc's SSE2 / AVX memcmp implementation does; it has the same problem of reading SIMD vectors from 2 arrays that might be misaligned wrt. each other. (Simple bytewise equality is faster with pcmpeqb, so it wouldn't use SSE4.2 string instructions, but the problem of which SIMD vectors to load is the same.)

> are the lengths of the strings in rax and rdx?

Yes, that's the whole point of taking 2 input lengths (in RAX for XMM1, and RDX for XMM2). See Intel's asm manual entry for pcmpestri.

> Does pcmpestri count rax and rdx down, or do I have to do it?

You have to do it, if that's what you want; pcmpestri looks at the first RAX bytes/words of XMM1 (up to 16 / 8) and the first RDX bytes (words) of XMM2/mem (up to 16 / 8), and outputs to ECX and EFLAGS. That is all. Again, Intel's manual is pretty clear about this. (Although the actual aggregation and compare options are pretty complicated to understand!)

If you want to use it in a loop, you can just leave those registers set to 16 and compute them properly for a peeled final iteration after the loop. Or you can decrement each of them by 16 every iteration; pcmpestri appears to be designed for that, setting ZF and/or SF if EDX and/or EAX are < 16 (or 8), respectively.
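To make the count-down option concrete, here is a minimal, untested sketch. The register assignments are hypothetical (pointers in rdi/rsi, explicit lengths in rax/rdx, offset in r8), and imm8 = 0x18 selects unsigned bytes, equal-each aggregation, negative polarity, i.e. "find the first mismatch":

```
; sketch only: count the explicit lengths down by 16 per chunk
; rdi = pstr1, rsi = pstr2, rax = remaining len1, rdx = remaining len2
; (lengths in bytes here; with the word modes (imm8 bit 0 = 1) rax/rdx
;  count words, so they'd be decremented by 8 per 16-byte chunk)
    xor     r8d, r8d                  ; r8 = byte offset into both strings
loop:
    movdqu  xmm1, [rdi + r8]
    pcmpestri xmm1, [rsi + r8], 0x18  ; implicitly reads rax and rdx
    lea     r8,  [r8 + 16]            ; advance the offset and count the
    lea     rax, [rax - 16]           ;   lengths down with lea, so that
    lea     rdx, [rdx - 16]           ;   pcmpestri's FLAGS survive to the ja
    ja      loop                      ; loop while CF=0 (no difference) and
                                      ;   ZF=0 (rdx was still >= 16)
; CF=1 here: first difference at byte (r8 - 16 + rcx)
; ZF=1 and CF=0: no difference found within the shorter length
; NOTE: the 16-byte loads can still read past the end of the final
; partial chunk, so a real routine needs the page-crossing check above.
```

This also shows the lea-instead-of-add/sub trick from the question: lea is used for all the bookkeeping precisely because it leaves pcmpestri's FLAGS intact for the ja.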
See also https://www.strchr.com/strcmp_and_strlen_using_sse_4.2 for a useful high-level picture of the processing steps the SSE4.2 string instructions do, so you can figure out how to design useful ways to use them. It also has some examples, like implementing strcmp and strlen. Intel's detailed documentation in the SDM gets bogged down in details, making it hard to take in the big picture. (A good unrolled SSE2 implementation can beat SSE4.2 for those simple functions, but a simple problem makes a good example.)

Ideally you'd have proper intrinsics, not just wrappers for inline asm.

> What info do I need to pass back to the high-level-language code?

It probably depends on what the high-level code wants to do with it, although for pcmpestri specifically, all the information is present in ECX (the integer result): CF = (ECX == 0), and OF = ECX[0] (the low bit). If GDC has GCC6 flag-output syntax, using it wouldn't hurt, I guess, unless it tricks the compiler into making worse code to receive those outputs.

If you are using inline asm to basically create intrinsics for the SSE4.2 string instructions, it might be worth looking at Intel's design for the C intrinsics: https://software.intel.com/sites/landingpage/IntrinsicsGuide/. E.g. one for the ECX result:

int _mm_cmpestri (__m128i a, int la, __m128i b, int lb, const int mode);

and one for each separate FLAG output bit, like _mm_cmpestro.

However, there are flaws in Intel's design. For example, with the implicit-length string version at least, I remember that the only way to get an integer result and get the compiler to branch on FLAGS directly from the instruction was to use two different intrinsics with the same inputs, and depend on the compiler optimizing them together.

With inline asm, it's easy to describe multiple outputs and have unused ones optimized away. But unfortunately C doesn't have syntax for multiple return values, and I guess Intel didn't want an intrinsic with a by-reference output arg as well as a return value.

> Is it slower to use lea before the ja?

I'd do the movdqa load first, then the add, then pcmpistri. That keeps the movdqa addressing mode simpler and smaller, and lets the first iteration's load start executing 1 cycle earlier, without waiting for the latency of an add (if the index was on the critical path; it might not be if you start at 0).

Using an indexed addressing mode is probably not harmful here (a multi-uop instruction like pcmpe/istri probably can't micro-fuse a load anyway, and movdqa / movdqu don't care). But in other cases it can be worth unrolling and using pointer increments instead: see Micro fusion and addressing modes.

> Worth loop-unrolling?

It might be worth unrolling by 2. I'd suggest counting uops to see if the loop is just above a multiple of 4, and/or trying it on a couple of CPUs like Skylake and Zen.
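As a concrete (untested) sketch of the load-first ordering suggested above, for an implicit-length, strcmp-style inner loop — same hypothetical registers as before (rdi/rsi = the two strings, rdx = index starting at 0, imm8 = 0x18 to find the first mismatch):

```
; movdqa first, then add, then pcmpistri (sketch only)
; rdi = pstr1 (16-byte aligned), rsi = pstr2, rdx = 0 on entry
loop:
    movdqa  xmm1, [rdi + rdx]              ; simple addressing mode, no disp
    add     rdx, 16                        ; add *before* pcmpistri, so the
    pcmpistri xmm1, [rsi + rdx - 16], 0x18 ;   ja still sees pcmpistri's FLAGS
    ja      loop                           ; CF=0 (no diff) and ZF=0 (no 0 byte
                                           ;   in the second string's chunk)
; the first difference or terminator is at byte (rdx - 16 + rcx)
```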
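And, under the same assumptions, one way the unroll-by-2 could look; the two exit paths need different offset fix-ups, and lea again keeps CF/ZF testable after the loop:

```
; unroll-by-2 variant (sketch only); registers as above
loop:
    movdqa  xmm1, [rdi + rdx]
    pcmpistri xmm1, [rsi + rdx], 0x18
    jbe     found                          ; CF or ZF set: hit in the first chunk
    movdqa  xmm1, [rdi + rdx + 16]
    add     rdx, 32
    pcmpistri xmm1, [rsi + rdx - 16], 0x18
    ja      loop
    lea     rdx, [rdx - 16]                ; hit in the second chunk: point rdx
                                           ;   at it without clobbering FLAGS
found:
; hit position is at byte (rdx + rcx); CF still distinguishes a real
; difference from merely reaching the terminator (ZF)
```

Counting fused-domain uops for both versions, per the advice above, would tell you whether the unroll actually pays for the extra exit-path bookkeeping on a given CPU.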