MOVSD—Move or Merge Scalar Double-Precision Floating-Point Value

Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
F2 0F 10 /r MOVSD xmm1, xmm2 | A | V/V | SSE2 | Move scalar double-precision floating-point value from xmm2 to xmm1 register.
F2 0F 10 /r MOVSD xmm1, m64 | A | V/V | SSE2 | Load scalar double-precision floating-point value from m64 to xmm1 register.
F2 0F 11 /r MOVSD xmm1/m64, xmm2 | C | V/V | SSE2 | Move scalar double-precision floating-point value from xmm2 register to xmm1/m64.
VEX.LIG.F2.0F.WIG 10 /r VMOVSD xmm1, xmm2, xmm3 | B | V/V | AVX | Merge scalar double-precision floating-point value from xmm2 and xmm3 to xmm1 register.
VEX.LIG.F2.0F.WIG 10 /r VMOVSD xmm1, m64 | D | V/V | AVX | Load scalar double-precision floating-point value from m64 to xmm1 register.
VEX.LIG.F2.0F.WIG 11 /r VMOVSD xmm1, xmm2, xmm3 | E | V/V | AVX | Merge scalar double-precision floating-point value from xmm2 and xmm3 registers to xmm1.
VEX.LIG.F2.0F.WIG 11 /r VMOVSD m64, xmm1 | C | V/V | AVX | Store scalar double-precision floating-point value from xmm1 register to m64.
EVEX.LLIG.F2.0F.W1 10 /r VMOVSD xmm1 {k1}{z}, xmm2, xmm3 | B | V/V | AVX512F | Merge scalar double-precision floating-point value from xmm2 and xmm3 registers to xmm1 under writemask k1.
EVEX.LLIG.F2.0F.W1 10 /r VMOVSD xmm1 {k1}{z}, m64 | F | V/V | AVX512F | Load scalar double-precision floating-point value from m64 to xmm1 register under writemask k1.
EVEX.LLIG.F2.0F.W1 11 /r VMOVSD xmm1 {k1}{z}, xmm2, xmm3 | E | V/V | AVX512F | Merge scalar double-precision floating-point value from xmm2 and xmm3 registers to xmm1 under writemask k1.
EVEX.LLIG.F2.0F.W1 11 /r VMOVSD m64 {k1}, xmm1 | G | V/V | AVX512F | Store scalar double-precision floating-point value from xmm1 register to m64 under writemask k1.

Instruction Operand Encoding

Op/En | Tuple Type | Operand 1 | Operand 2 | Operand 3 | Operand 4
A | NA | ModRM:reg (r, w) | ModRM:r/m (r) | NA | NA
B | NA | ModRM:reg (w) | VEX.vvvv (r) | ModRM:r/m (r) | NA
C | NA | ModRM:r/m (w) | ModRM:reg (r) | NA | NA
D | NA | ModRM:reg (w) | ModRM:r/m (r) | NA | NA
E | NA | ModRM:r/m (w) | vvvv (r) | ModRM:reg (r) | NA
F | Tuple1 Scalar | ModRM:reg (r, w) | ModRM:r/m (r) | NA | NA
G | Tuple1 Scalar | ModRM:r/m (w) | ModRM:reg (r) | NA | NA
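The CPUID Feature Flag column above determines which encodings a processor accepts. As an unofficial illustration (not part of the manual text), the following C sketch uses the GCC/Clang built-in __builtin_cpu_supports to check those same feature flags at runtime before relying on a given form; the printed messages are placeholders for real dispatch paths.

/* Hedged sketch: runtime detection of the feature flags (SSE2, AVX, AVX512F)
 * listed in the opcode table above.  Compile with GCC or Clang.             */
#include <stdio.h>

int main(void)
{
    if (__builtin_cpu_supports("avx512f"))
        puts("EVEX-encoded VMOVSD (writemask forms) is available");
    else if (__builtin_cpu_supports("avx"))
        puts("VEX-encoded VMOVSD is available");
    else if (__builtin_cpu_supports("sse2"))
        puts("legacy SSE2 MOVSD is available");
    else
        puts("no scalar double-precision move support");
    return 0;
}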

Description

Moves a scalar double-precision floating-point value from the source operand (second operand) to the destination operand (first operand). The source and destination operands can be XMM registers or 64-bit memory locations. This instruction can be used to move a double-precision floating-point value to and from the low quadword of an XMM register and a 64-bit memory location, or to move a double-precision floating-point value between the low quadwords of two XMM registers. The instruction cannot be used to transfer data between memory locations.

Legacy version: When the source and destination operands are XMM registers, bits (MAXVL-1:64) of the destination operand remain unchanged. When the source operand is a memory location and the destination operand is an XMM register, the quadword at bits 127:64 of the destination operand is cleared to all 0s, and bits (MAXVL-1:128) of the destination operand remain unchanged.

VEX and EVEX encoded register-register syntax: Moves a scalar double-precision floating-point value from the second source operand (the third operand) to the low quadword element of the destination operand (the first operand). Bits 127:64 of the destination operand are copied from the first source operand (the second operand). Bits (MAXVL-1:128) of the corresponding destination register are zeroed.

VEX and EVEX encoded memory load syntax: When the source operand is a memory location and the destination operand is an XMM register, bits (MAXVL-1:64) of the destination operand are cleared to all 0s.

EVEX encoded versions: The low quadword of the destination is updated according to the writemask.

Note: For VMOVSD (memory store and load forms), VEX.vvvv and EVEX.vvvv are reserved and must be 1111b, otherwise the instruction will #UD.

Operation

VMOVSD (EVEX.LLIG.F2.0F 10 /r: VMOVSD xmm1, m64 with support for 32 registers)
IF k1[0] or *no writemask*
    THEN DEST[63:0] := SRC[63:0]
    ELSE
        IF *merging-masking*            ; merging-masking
            THEN *DEST[63:0] remains unchanged*
            ELSE                        ; zeroing-masking
                DEST[63:0] := 0
        FI;
FI;
DEST[MAXVL-1:64] := 0

VMOVSD (EVEX.LLIG.F2.0F 11 /r: VMOVSD m64, xmm1 with support for 32 registers)
IF k1[0] or *no writemask*
    THEN DEST[63:0] := SRC[63:0]
    ELSE *DEST[63:0] remains unchanged* ; merging-masking
FI;

VMOVSD (EVEX.LLIG.F2.0F 11 /r: VMOVSD xmm1, xmm2, xmm3)
IF k1[0] or *no writemask*
    THEN DEST[63:0] := SRC2[63:0]
    ELSE
        IF *merging-masking*            ; merging-masking
            THEN *DEST[63:0] remains unchanged*
            ELSE                        ; zeroing-masking
                DEST[63:0] := 0
        FI;
FI;
DEST[127:64] := SRC1[127:64]
DEST[MAXVL-1:128] := 0
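As an unofficial illustration of the EVEX writemask behavior in the pseudocode above, the following C sketch uses the AVX-512F intrinsics listed later in this reference (_mm_mask_move_sd and _mm_maskz_move_sd). The input values are arbitrary and not part of the manual.

/* Hedged sketch of merging- vs. zeroing-masking for the EVEX register form.
 * Compile with an AVX-512F target, e.g. gcc -mavx512f.                      */
#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    __m128d old = _mm_set_pd(111.0, 222.0);  /* previous destination contents */
    __m128d a   = _mm_set_pd(3.0, 4.0);      /* first source (supplies 127:64) */
    __m128d b   = _mm_set_pd(7.0, 8.0);      /* second source (supplies 63:0)  */

    /* k1[0] = 0: merging-masking keeps the old low quadword (222.0),
     * zeroing-masking clears it to 0.0.  The upper quadword always comes
     * from the first source operand, as in the pseudocode above.            */
    __m128d merged = _mm_mask_move_sd(old, 0x0, a, b);
    __m128d zeroed = _mm_maskz_move_sd(0x0, a, b);
    __m128d passed = _mm_mask_move_sd(old, 0x1, a, b);

    printf("merging, k1[0]=0: low = %f\n", _mm_cvtsd_f64(merged)); /* 222.0 */
    printf("zeroing, k1[0]=0: low = %f\n", _mm_cvtsd_f64(zeroed)); /*   0.0 */
    printf("         k1[0]=1: low = %f\n", _mm_cvtsd_f64(passed)); /*   8.0 */
    return 0;
}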

MOVSD (128-bit Legacy SSE version: MOVSD XMM1, XMM2)
DEST[63:0] := SRC[63:0]
DEST[MAXVL-1:64] (Unmodified)

VMOVSD (VEX.128.F2.0F 11 /r: VMOVSD xmm1, xmm2, xmm3)
DEST[63:0] := SRC2[63:0]
DEST[127:64] := SRC1[127:64]
DEST[MAXVL-1:128] := 0

VMOVSD (VEX.128.F2.0F 10 /r: VMOVSD xmm1, xmm2, xmm3)
DEST[63:0] := SRC2[63:0]
DEST[127:64] := SRC1[127:64]
DEST[MAXVL-1:128] := 0

VMOVSD (VEX.128.F2.0F 10 /r: VMOVSD xmm1, m64)
DEST[63:0] := SRC[63:0]
DEST[MAXVL-1:64] := 0

MOVSD/VMOVSD (128-bit versions: MOVSD m64, xmm1 or VMOVSD m64, xmm1)
DEST[63:0] := SRC[63:0]

MOVSD (128-bit Legacy SSE version: MOVSD XMM1, m64)
DEST[63:0] := SRC[63:0]
DEST[127:64] := 0
DEST[MAXVL-1:128] (Unmodified)

Intel C/C++ Compiler Intrinsic Equivalent

VMOVSD __m128d _mm_mask_load_sd(__m128d s, __mmask8 k, double * p);
VMOVSD __m128d _mm_maskz_load_sd(__mmask8 k, double * p);
VMOVSD __m128d _mm_mask_move_sd(__m128d sh, __mmask8 k, __m128d sl, __m128d a);
VMOVSD __m128d _mm_maskz_move_sd(__mmask8 k, __m128d s, __m128d a);
VMOVSD void _mm_mask_store_sd(double * p, __mmask8 k, __m128d s);
MOVSD __m128d _mm_load_sd(double *p)
MOVSD void _mm_store_sd(double *p, __m128d a)
MOVSD __m128d _mm_move_sd(__m128d a, __m128d b)

SIMD Floating-Point Exceptions

None

Other Exceptions

Non-EVEX-encoded instruction, see Table 2-22, “Type 5 Class Exception Conditions”; additionally:
#UD    If VEX.vvvv != 1111B.
EVEX-encoded instruction, see Table 2-58, “Type E10 Class Exception Conditions”.
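The SSE2-level intrinsics listed above map onto the legacy and VEX forms of this instruction; whether the compiler emits MOVSD or VMOVSD depends on the target options (e.g. -mavx). As an unofficial usage sketch, not part of the manual, the following C program round-trips a scalar double through _mm_load_sd, _mm_move_sd, and _mm_store_sd; the values are arbitrary.

/* Hedged usage sketch for the SSE2-level intrinsics above.                  */
#include <emmintrin.h>
#include <stdio.h>

int main(void)
{
    double in = 2.5, out = 0.0;

    /* _mm_load_sd: low quadword from memory, bits 127:64 cleared to 0.      */
    __m128d lo = _mm_load_sd(&in);

    /* _mm_move_sd(a, b): low quadword from b, upper quadword from a,
     * matching the register-register merge described in Operation.          */
    __m128d merged = _mm_move_sd(_mm_set_pd(9.0, 1.0), lo);

    /* _mm_store_sd: write only the low quadword back to memory.             */
    _mm_store_sd(&out, merged);

    printf("out = %f\n", out);   /* prints 2.500000 */
    return 0;
}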

This UNOFFICIAL reference was generated from the official Intel® 64 and IA-32 Architectures Software Developer’s Manual by a dumb script. There is no guarantee that some parts aren't mangled or broken, and it is distributed WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.