SHUFPD—Packed Interleave Shuffle of Pairs of Double-Precision Floating-Point Values

Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description

66 0F C6 /r ib
SHUFPD xmm1, xmm2/m128, imm8
A | V/V | SSE2
Shuffle two pairs of double-precision floating-point values from xmm1 and xmm2/m128 using imm8 to select from each pair; the interleaved result is stored in xmm1.

VEX.128.66.0F.WIG C6 /r ib
VSHUFPD xmm1, xmm2, xmm3/m128, imm8
B | V/V | AVX
Shuffle two pairs of double-precision floating-point values from xmm2 and xmm3/m128 using imm8 to select from each pair; the interleaved result is stored in xmm1.

VEX.256.66.0F.WIG C6 /r ib
VSHUFPD ymm1, ymm2, ymm3/m256, imm8
B | V/V | AVX
Shuffle four pairs of double-precision floating-point values from ymm2 and ymm3/m256 using imm8 to select from each pair; the interleaved result is stored in ymm1.

EVEX.128.66.0F.W1 C6 /r ib
VSHUFPD xmm1{k1}{z}, xmm2, xmm3/m128/m64bcst, imm8
C | V/V | AVX512VL AVX512F
Shuffle two pairs of double-precision floating-point values from xmm2 and xmm3/m128/m64bcst using imm8 to select from each pair. Store the interleaved results in xmm1 subject to writemask k1.

EVEX.256.66.0F.W1 C6 /r ib
VSHUFPD ymm1{k1}{z}, ymm2, ymm3/m256/m64bcst, imm8
C | V/V | AVX512VL AVX512F
Shuffle four pairs of double-precision floating-point values from ymm2 and ymm3/m256/m64bcst using imm8 to select from each pair. Store the interleaved results in ymm1 subject to writemask k1.

EVEX.512.66.0F.W1 C6 /r ib
VSHUFPD zmm1{k1}{z}, zmm2, zmm3/m512/m64bcst, imm8
C | V/V | AVX512F
Shuffle eight pairs of double-precision floating-point values from zmm2 and zmm3/m512/m64bcst using imm8 to select from each pair. Store the interleaved results in zmm1 subject to writemask k1.

Instruction Operand Encoding

Op/En | Tuple Type | Operand 1 | Operand 2 | Operand 3 | Operand 4
A | NA | ModRM:reg (r, w) | ModRM:r/m (r) | Imm8 | NA
B | NA | ModRM:reg (w) | VEX.vvvv (r) | ModRM:r/m (r) | Imm8
C | Full | ModRM:reg (w) | EVEX.vvvv (r) | ModRM:r/m (r) | Imm8

Description

Selects a double-precision floating-point value from each input pair using a bit control and moves it to a designated element of the destination operand. In low-to-high order, the double-precision elements of the destination operand are interleaved between the first source operand and the second source operand at the granularity of a 128-bit input pair. Each bit in the imm8 byte, starting from bit 0, is the select control for the corresponding element of the destination to receive the shuffled result of an input pair.

EVEX encoded versions: The first source operand is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location, or a 512/256/128-bit vector broadcast from a 64-bit memory location. The destination operand is a ZMM/YMM/XMM register updated according to the writemask. The select controls are the lower 8/4/2 bits of the imm8 byte.

VEX.256 encoded version: The first source operand is a YMM register. The second source operand can be a YMM register or a 256-bit memory location. The destination operand is a YMM register. The select controls are bits 3:0 of the imm8 byte; imm8[7:4] are ignored.

VEX.128 encoded version: The first source operand is an XMM register. The second source operand can be an XMM register or a 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding ZMM register destination are zeroed. The select controls are bits 1:0 of the imm8 byte; imm8[7:2] are ignored.

128-bit Legacy SSE version: The second source operand can be an XMM register or a 128-bit memory location. The destination operand and the first source operand are the same and are an XMM register. The upper bits (MAXVL-1:128) of the corresponding ZMM register destination are unmodified. The select controls are bits 1:0 of the imm8 byte; imm8[7:2] are ignored.
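A scalar model can make the select logic concrete. The sketch below uses a hypothetical helper name, `shufpd_128` (not a real intrinsic), and mirrors only the 128-bit select behavior: imm8 bit 0 picks the destination's low element from the first source's pair, and bit 1 picks the high element from the second source's pair.

```c
#include <assert.h>

/* Hypothetical scalar model of the 128-bit SHUFPD select logic.
   src1/src2 each hold a pair of doubles (index 0 = low element).
   imm8 bit 0 selects dest[0] from src1; bit 1 selects dest[1] from src2. */
static void shufpd_128(const double src1[2], const double src2[2],
                       unsigned imm8, double dest[2])
{
    dest[0] = src1[(imm8 >> 0) & 1];  /* IMM8[0]: SRC1[63:0] or SRC1[127:64] */
    dest[1] = src2[(imm8 >> 1) & 1];  /* IMM8[1]: SRC2[63:0] or SRC2[127:64] */
}
```

With src1 = {X0, X1} and src2 = {Y0, Y1}, imm8 = 01b yields {X1, Y0}, matching the legacy SSE and VEX.128 pseudocode below (upper-bit handling aside).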
Figure 4-25. 256-bit VSHUFPD Operation of Four Pairs of DP FP Values
[figure omitted: each DEST element is selected as X0/X1, Y0/Y1, X2/X3, or Y2/Y3 from SRC1 (X3..X0) and SRC2 (Y3..Y0)]

Operation

VSHUFPD (EVEX encoded versions when SRC2 is a vector register)
(KL, VL) = (2, 128), (4, 256), (8, 512)
IF IMM0[0] = 0 THEN TMP_DEST[63:0] := SRC1[63:0]
    ELSE TMP_DEST[63:0] := SRC1[127:64] FI;
IF IMM0[1] = 0 THEN TMP_DEST[127:64] := SRC2[63:0]
    ELSE TMP_DEST[127:64] := SRC2[127:64] FI;
IF VL >= 256
    IF IMM0[2] = 0 THEN TMP_DEST[191:128] := SRC1[191:128]
        ELSE TMP_DEST[191:128] := SRC1[255:192] FI;
    IF IMM0[3] = 0 THEN TMP_DEST[255:192] := SRC2[191:128]
        ELSE TMP_DEST[255:192] := SRC2[255:192] FI;
FI;
IF VL >= 512
    IF IMM0[4] = 0 THEN TMP_DEST[319:256] := SRC1[319:256]
        ELSE TMP_DEST[319:256] := SRC1[383:320] FI;
    IF IMM0[5] = 0 THEN TMP_DEST[383:320] := SRC2[319:256]
        ELSE TMP_DEST[383:320] := SRC2[383:320] FI;
    IF IMM0[6] = 0 THEN TMP_DEST[447:384] := SRC1[447:384]
        ELSE TMP_DEST[447:384] := SRC1[511:448] FI;
    IF IMM0[7] = 0 THEN TMP_DEST[511:448] := SRC2[447:384]
        ELSE TMP_DEST[511:448] := SRC2[511:448] FI;
FI;
FOR j := 0 TO KL-1
    i := j * 64
    IF k1[j] OR *no writemask*
        THEN DEST[i+63:i] := TMP_DEST[i+63:i]
        ELSE
            IF *merging-masking* ; merging-masking
                THEN *DEST[i+63:i] remains unchanged*
                ELSE ; zeroing-masking
                    DEST[i+63:i] := 0
            FI
    FI;
ENDFOR
DEST[MAXVL-1:VL] := 0

VSHUFPD (EVEX encoded versions when SRC2 is memory)
(KL, VL) = (2, 128), (4, 256), (8, 512)
FOR j := 0 TO KL-1
    i := j * 64
    IF (EVEX.b = 1)
        THEN TMP_SRC2[i+63:i] := SRC2[63:0]
        ELSE TMP_SRC2[i+63:i] := SRC2[i+63:i]
    FI;
ENDFOR;
IF IMM0[0] = 0 THEN TMP_DEST[63:0] := SRC1[63:0]
    ELSE TMP_DEST[63:0] := SRC1[127:64] FI;
IF IMM0[1] = 0 THEN TMP_DEST[127:64] := TMP_SRC2[63:0]
    ELSE TMP_DEST[127:64] := TMP_SRC2[127:64] FI;
IF VL >= 256
    IF IMM0[2] = 0 THEN TMP_DEST[191:128] := SRC1[191:128]
        ELSE TMP_DEST[191:128] := SRC1[255:192] FI;
    IF IMM0[3] = 0 THEN TMP_DEST[255:192] := TMP_SRC2[191:128]
        ELSE TMP_DEST[255:192] := TMP_SRC2[255:192] FI;
FI;
IF VL >= 512
    IF IMM0[4] = 0 THEN TMP_DEST[319:256] := SRC1[319:256]
        ELSE TMP_DEST[319:256] := SRC1[383:320] FI;
    IF IMM0[5] = 0 THEN TMP_DEST[383:320] := TMP_SRC2[319:256]
        ELSE TMP_DEST[383:320] := TMP_SRC2[383:320] FI;
    IF IMM0[6] = 0 THEN TMP_DEST[447:384] := SRC1[447:384]
        ELSE TMP_DEST[447:384] := SRC1[511:448] FI;
    IF IMM0[7] = 0 THEN TMP_DEST[511:448] := TMP_SRC2[447:384]
        ELSE TMP_DEST[511:448] := TMP_SRC2[511:448] FI;
FI;
FOR j := 0 TO KL-1
    i := j * 64
    IF k1[j] OR *no writemask*
        THEN DEST[i+63:i] := TMP_DEST[i+63:i]
        ELSE
            IF *merging-masking* ; merging-masking
                THEN *DEST[i+63:i] remains unchanged*
                ELSE ; zeroing-masking
                    DEST[i+63:i] := 0
            FI
    FI;
ENDFOR
DEST[MAXVL-1:VL] := 0

VSHUFPD (VEX.256 encoded version)
IF IMM0[0] = 0 THEN DEST[63:0] := SRC1[63:0]
    ELSE DEST[63:0] := SRC1[127:64] FI;
IF IMM0[1] = 0 THEN DEST[127:64] := SRC2[63:0]
    ELSE DEST[127:64] := SRC2[127:64] FI;
IF IMM0[2] = 0 THEN DEST[191:128] := SRC1[191:128]
    ELSE DEST[191:128] := SRC1[255:192] FI;
IF IMM0[3] = 0 THEN DEST[255:192] :=
        SRC2[191:128]
    ELSE DEST[255:192] := SRC2[255:192] FI;
DEST[MAXVL-1:256] (Unmodified)

VSHUFPD (VEX.128 encoded version)
IF IMM0[0] = 0 THEN DEST[63:0] := SRC1[63:0]
    ELSE DEST[63:0] := SRC1[127:64] FI;
IF IMM0[1] = 0 THEN DEST[127:64] := SRC2[63:0]
    ELSE DEST[127:64] := SRC2[127:64] FI;
DEST[MAXVL-1:128] := 0

SHUFPD (128-bit Legacy SSE version)
IF IMM0[0] = 0 THEN DEST[63:0] := SRC1[63:0]
    ELSE DEST[63:0] := SRC1[127:64] FI;
IF IMM0[1] = 0 THEN DEST[127:64] := SRC2[63:0]
    ELSE DEST[127:64] := SRC2[127:64] FI;
DEST[MAXVL-1:128] (Unmodified)

Intel C/C++ Compiler Intrinsic Equivalent

VSHUFPD __m512d _mm512_shuffle_pd(__m512d a, __m512d b, int imm);
VSHUFPD __m512d _mm512_mask_shuffle_pd(__m512d s, __mmask8 k, __m512d a, __m512d b, int imm);
VSHUFPD __m512d _mm512_maskz_shuffle_pd(__mmask8 k, __m512d a, __m512d b, int imm);
VSHUFPD __m256d _mm256_shuffle_pd(__m256d a, __m256d b, const int select);
VSHUFPD __m256d _mm256_mask_shuffle_pd(__m256d s, __mmask8 k, __m256d a, __m256d b, int imm);
VSHUFPD __m256d _mm256_maskz_shuffle_pd(__mmask8 k, __m256d a, __m256d b, int imm);
SHUFPD  __m128d _mm_shuffle_pd(__m128d a, __m128d b, const int select);
VSHUFPD __m128d _mm_mask_shuffle_pd(__m128d s, __mmask8 k, __m128d a, __m128d b, int imm);
VSHUFPD __m128d _mm_maskz_shuffle_pd(__mmask8 k, __m128d a, __m128d b, int imm);

SIMD Floating-Point Exceptions

None.

Other Exceptions

Non-EVEX-encoded instruction, see Table 2-21, "Type 4 Class Exception Conditions".
EVEX-encoded instruction, see Table 2-50, "Type E4NF Class Exception Conditions".

This UNOFFICIAL reference was generated from the official Intel® 64 and IA-32 Architectures Software Developer's Manual by a dumb script. There is no guarantee that some parts aren't mangled or broken, and it is distributed WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
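To illustrate the per-128-bit-lane behavior of the wider forms, here is an unofficial scalar model of the VEX.256 pseudocode (`vshufpd_256` is an illustrative name, not a real intrinsic). Each 128-bit lane is an independent pair: even imm8 bits select from the first source, odd bits from the second.

```c
#include <assert.h>

/* Hypothetical scalar model of VEX.256 VSHUFPD (no masking, no upper-bit
   zeroing). src1/src2 hold four doubles each (index 0 = lowest element).
   imm8 bits 0-3 each select one destination element, two per lane. */
static void vshufpd_256(const double src1[4], const double src2[4],
                        unsigned imm8, double dest[4])
{
    for (int lane = 0; lane < 2; lane++) {
        int lo = lane * 2;  /* index of this lane's low element */
        dest[lo]     = src1[lo + ((imm8 >> lo) & 1)];       /* from SRC1 */
        dest[lo + 1] = src2[lo + ((imm8 >> (lo + 1)) & 1)]; /* from SRC2 */
    }
}
```

For example, with src1 = {X0, X1, X2, X3}, src2 = {Y0, Y1, Y2, Y3}, and imm8 = 0101b, the result is {X1, Y0, X3, Y2}, matching Figure 4-25.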