MOVSHDUP—Replicate Single FP Values

Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
F3 0F 16 /r MOVSHDUP xmm1, xmm2/m128 | A | V/V | SSE3 | Move odd index single-precision floating-point values from xmm2/mem and duplicate each element into xmm1.
VEX.128.F3.0F.WIG 16 /r VMOVSHDUP xmm1, xmm2/m128 | A | V/V | AVX | Move odd index single-precision floating-point values from xmm2/mem and duplicate each element into xmm1.
VEX.256.F3.0F.WIG 16 /r VMOVSHDUP ymm1, ymm2/m256 | A | V/V | AVX | Move odd index single-precision floating-point values from ymm2/mem and duplicate each element into ymm1.
EVEX.128.F3.0F.W0 16 /r VMOVSHDUP xmm1 {k1}{z}, xmm2/m128 | B | V/V | AVX512VL AVX512F | Move odd index single-precision floating-point values from xmm2/m128 and duplicate each element into xmm1 under writemask.
EVEX.256.F3.0F.W0 16 /r VMOVSHDUP ymm1 {k1}{z}, ymm2/m256 | B | V/V | AVX512VL AVX512F | Move odd index single-precision floating-point values from ymm2/m256 and duplicate each element into ymm1 under writemask.
EVEX.512.F3.0F.W0 16 /r VMOVSHDUP zmm1 {k1}{z}, zmm2/m512 | B | V/V | AVX512F | Move odd index single-precision floating-point values from zmm2/m512 and duplicate each element into zmm1 under writemask.

Instruction Operand Encoding

Op/En | Tuple Type | Operand 1 | Operand 2 | Operand 3 | Operand 4
A | NA | ModRM:reg (w) | ModRM:r/m (r) | NA | NA
B | Full Mem | ModRM:reg (w) | ModRM:r/m (r) | NA | NA

Description

Duplicates odd-indexed single-precision floating-point values from the source operand (the second operand) into adjacent element pairs in the destination operand (the first operand). See Figure 4-3. The source operand is an XMM, YMM or ZMM register or a 128-, 256- or 512-bit memory location; the destination operand is an XMM, YMM or ZMM register.

128-bit Legacy SSE version: Bits (MAXVL-1:128) of the corresponding destination register remain unchanged.
VEX.128 encoded version: Bits (MAXVL-1:128) of the destination register are zeroed.
VEX.256 encoded version: Bits (MAXVL-1:256) of the destination register are zeroed.
EVEX encoded version: The destination operand is updated at 32-bit granularity according to the writemask.

Note: VEX.vvvv and EVEX.vvvv are reserved and must be 1111b, otherwise instructions will #UD.

Figure 4-3. MOVSHDUP Operation
[Figure: DEST receives {X1, X1, X3, X3, X5, X5, X7, X7} from SRC = {X0, X1, ..., X7}]
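The odd-index duplication shown in Figure 4-3 is directly exposed through the SSE3 intrinsic _mm_movehdup_ps (listed under the intrinsic equivalents below). A minimal sketch, assuming a compiler targeting SSE3 (e.g. gcc -msse3); not part of the manual:

#include <pmmintrin.h>   /* SSE3 intrinsics */
#include <stdio.h>

int main(void)
{
    /* SRC = {X0, X1, X2, X3} = {0.0, 1.0, 2.0, 3.0} */
    __m128 src = _mm_setr_ps(0.0f, 1.0f, 2.0f, 3.0f);
    __m128 dst = _mm_movehdup_ps(src);   /* compiles to MOVSHDUP xmm, xmm */

    float out[4];
    _mm_storeu_ps(out, dst);
    /* Expected: {X1, X1, X3, X3} = {1.0, 1.0, 3.0, 3.0} */
    printf("%.1f %.1f %.1f %.1f\n", out[0], out[1], out[2], out[3]);
    return 0;
}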

Operation

VMOVSHDUP (EVEX encoded versions)
(KL, VL) = (4, 128), (8, 256), (16, 512)
TMP_SRC[31:0] := SRC[63:32]
TMP_SRC[63:32] := SRC[63:32]
TMP_SRC[95:64] := SRC[127:96]
TMP_SRC[127:96] := SRC[127:96]
IF VL >= 256
    TMP_SRC[159:128] := SRC[191:160]
    TMP_SRC[191:160] := SRC[191:160]
    TMP_SRC[223:192] := SRC[255:224]
    TMP_SRC[255:224] := SRC[255:224]
FI;
IF VL >= 512
    TMP_SRC[287:256] := SRC[319:288]
    TMP_SRC[319:288] := SRC[319:288]
    TMP_SRC[351:320] := SRC[383:352]
    TMP_SRC[383:352] := SRC[383:352]
    TMP_SRC[415:384] := SRC[447:416]
    TMP_SRC[447:416] := SRC[447:416]
    TMP_SRC[479:448] := SRC[511:480]
    TMP_SRC[511:480] := SRC[511:480]
FI;
FOR j := 0 TO KL-1
    i := j * 32
    IF k1[j] OR *no writemask*
        THEN DEST[i+31:i] := TMP_SRC[i+31:i]
        ELSE IF *merging-masking* ; merging-masking
            THEN *DEST[i+31:i] remains unchanged*
            ELSE ; zeroing-masking
                DEST[i+31:i] := 0
        FI
    FI;
ENDFOR
DEST[MAXVL-1:VL] := 0

VMOVSHDUP (VEX.256 encoded version)
DEST[31:0] := SRC[63:32]
DEST[63:32] := SRC[63:32]
DEST[95:64] := SRC[127:96]
DEST[127:96] := SRC[127:96]
DEST[159:128] := SRC[191:160]
DEST[191:160] := SRC[191:160]
DEST[223:192] := SRC[255:224]
DEST[255:224] := SRC[255:224]
DEST[MAXVL-1:256] := 0

VMOVSHDUP (VEX.128 encoded version)
DEST[31:0] := SRC[63:32]
DEST[63:32] := SRC[63:32]
DEST[95:64] := SRC[127:96]
DEST[127:96] := SRC[127:96]
DEST[MAXVL-1:128] := 0
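For readers who prefer C to the pseudocode above, the following scalar model restates the EVEX loop: each destination dword j takes the odd element of its pair (source index j|1), with merging- or zeroing-masking applied to lanes whose k1 bit is clear. This is an illustrative sketch only, not Intel's code; the function name and parameters are invented here, and the *no writemask* case corresponds to passing an all-ones k1:

#include <stdbool.h>
#include <stdint.h>

/* kl is the element count KL (4, 8, or 16 for VL = 128, 256, 512);
   k1 holds the writemask, one bit per 32-bit element. */
void vmovshdup_model(float *dest, const float *src,
                     uint16_t k1, bool zeroing, int kl)
{
    for (int j = 0; j < kl; j++) {
        if (k1 & (1u << j))
            dest[j] = src[j | 1];   /* odd element of the pair: TMP_SRC[j] */
        else if (zeroing)
            dest[j] = 0.0f;         /* zeroing-masking */
        /* else merging-masking: dest[j] remains unchanged */
    }
}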

MOVSHDUP (128-bit Legacy SSE version)
DEST[31:0] := SRC[63:32]
DEST[63:32] := SRC[63:32]
DEST[95:64] := SRC[127:96]
DEST[127:96] := SRC[127:96]
DEST[MAXVL-1:128] (Unmodified)

Intel C/C++ Compiler Intrinsic Equivalent

VMOVSHDUP __m512 _mm512_movehdup_ps(__m512 a);
VMOVSHDUP __m512 _mm512_mask_movehdup_ps(__m512 s, __mmask16 k, __m512 a);
VMOVSHDUP __m512 _mm512_maskz_movehdup_ps(__mmask16 k, __m512 a);
VMOVSHDUP __m256 _mm256_mask_movehdup_ps(__m256 s, __mmask8 k, __m256 a);
VMOVSHDUP __m256 _mm256_maskz_movehdup_ps(__mmask8 k, __m256 a);
VMOVSHDUP __m128 _mm_mask_movehdup_ps(__m128 s, __mmask8 k, __m128 a);
VMOVSHDUP __m128 _mm_maskz_movehdup_ps(__mmask8 k, __m128 a);
VMOVSHDUP __m256 _mm256_movehdup_ps(__m256 a);
VMOVSHDUP __m128 _mm_movehdup_ps(__m128 a);

SIMD Floating-Point Exceptions

None

Other Exceptions

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions”.
EVEX-encoded instruction, see Exceptions Type E4NF.nb in Table 2-50, “Type E4NF Class Exception Conditions”.
Additionally:
#UD    If EVEX.vvvv != 1111B or VEX.vvvv != 1111B.
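As a usage note, the masked intrinsic forms above map directly onto the EVEX writemask behavior in the Operation section: the {z} (maskz) variants zero unselected lanes, while the mask variants merge from the s operand. A short sketch, assuming an AVX512F target (e.g. gcc -mavx512f); the function name is invented for this example:

#include <immintrin.h>

/* Duplicate odd-indexed floats in the low 8 lanes; zero the high 8 lanes.
   Mask 0x00FF selects elements 0-7; maskz zeroes the unselected lanes. */
__m512 movshdup_low_half(__m512 a)
{
    return _mm512_maskz_movehdup_ps((__mmask16)0x00FF, a);
}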

This UNOFFICIAL reference was generated from the official Intel® 64 and IA-32 Architectures Software Developer’s Manual by a dumb script. There is no guarantee that some parts aren't mangled or broken, and it is distributed WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.