DIVSD—Divide Scalar Double-Precision Floating-Point Value

F2 0F 5E /r                 DIVSD xmm1, xmm2/m64
    Op/En: A    64/32 bit Mode Support: V/V    CPUID Feature Flag: SSE2
    Divide low double-precision floating-point value in xmm1 by low
    double-precision floating-point value in xmm2/m64.

VEX.LIG.F2.0F.WIG 5E /r     VDIVSD xmm1, xmm2, xmm3/m64
    Op/En: B    64/32 bit Mode Support: V/V    CPUID Feature Flag: AVX
    Divide low double-precision floating-point value in xmm2 by low
    double-precision floating-point value in xmm3/m64.

EVEX.LLIG.F2.0F.W1 5E /r    VDIVSD xmm1 {k1}{z}, xmm2, xmm3/m64{er}
    Op/En: C    64/32 bit Mode Support: V/V    CPUID Feature Flag: AVX512F
    Divide low double-precision floating-point value in xmm2 by low
    double-precision floating-point value in xmm3/m64.

Instruction Operand Encoding

Op/En  Tuple Type     Operand 1         Operand 2      Operand 3      Operand 4
A      NA             ModRM:reg (r, w)  ModRM:r/m (r)  NA             NA
B      NA             ModRM:reg (w)     VEX.vvvv (r)   ModRM:r/m (r)  NA
C      Tuple1 Scalar  ModRM:reg (w)     EVEX.vvvv (r)  ModRM:r/m (r)  NA

Description

Divides the low double-precision floating-point value in the first source operand by the low double-precision floating-point value in the second source operand, and stores the double-precision floating-point result in the destination operand. The second source operand can be an XMM register or a 64-bit memory location. The first source and destination operands are XMM registers.

128-bit Legacy SSE version: The first source operand and the destination operand are the same. Bits (MAXVL-1:64) of the corresponding ZMM destination register remain unchanged.

VEX.128 encoded version: The first source operand is an xmm register encoded by VEX.vvvv. The quadword at bits 127:64 of the destination operand is copied from the corresponding quadword of the first source operand. Bits (MAXVL-1:128) of the destination register are zeroed.

EVEX.128 encoded version: The first source operand is an xmm register encoded by EVEX.vvvv. The quadword element of the destination operand at bits 127:64 is copied from the first source operand. Bits (MAXVL-1:128) of the destination register are zeroed.

EVEX version: The low quadword element of the destination is updated according to the writemask.

Software should ensure VDIVSD is encoded with VEX.L=0. Encoding VDIVSD with VEX.L=1 may encounter unpredictable behavior across different processor generations.
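As an illustrative sketch (not part of the manual), the Legacy SSE form corresponds to the _mm_div_sd intrinsic listed under the intrinsic equivalents below; a minimal C example, assuming an SSE2-capable compiler:

#include <emmintrin.h>  /* SSE2: _mm_set_pd, _mm_div_sd, _mm_storeu_pd */
#include <stdio.h>

int main(void)
{
    /* _mm_set_pd takes (high, low): a = {hi 7.0, lo 10.0}, b = {hi 99.0, lo 4.0}. */
    __m128d a = _mm_set_pd(7.0, 10.0);
    __m128d b = _mm_set_pd(99.0, 4.0);

    /* DIVSD semantics: result[63:0] = a[63:0] / b[63:0];
       result[127:64] is carried over from the first operand. */
    __m128d r = _mm_div_sd(a, b);

    double out[2];
    _mm_storeu_pd(out, r);
    printf("lo = %g, hi = %g\n", out[0], out[1]);  /* lo = 2.5, hi = 7 */
    return 0;
}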

Operation

VDIVSD (EVEX encoded version)
IF (EVEX.b = 1) AND SRC2 *is a register*
    THEN SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
    ELSE SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
FI;
IF k1[0] or *no writemask*
    THEN DEST[63:0] := SRC1[63:0] / SRC2[63:0]
    ELSE
        IF *merging-masking* ; merging-masking
            THEN *DEST[63:0] remains unchanged*
            ELSE ; zeroing-masking
                THEN DEST[63:0] := 0
        FI;
FI;
DEST[127:64] := SRC1[127:64]
DEST[MAXVL-1:128] := 0

VDIVSD (VEX.128 encoded version)
DEST[63:0] := SRC1[63:0] / SRC2[63:0]
DEST[127:64] := SRC1[127:64]
DEST[MAXVL-1:128] := 0

DIVSD (128-bit Legacy SSE version)
DEST[63:0] := DEST[63:0] / SRC[63:0]
DEST[MAXVL-1:64] (Unmodified)

Intel C/C++ Compiler Intrinsic Equivalent

VDIVSD __m128d _mm_mask_div_sd(__m128d s, __mmask8 k, __m128d a, __m128d b);
VDIVSD __m128d _mm_maskz_div_sd(__mmask8 k, __m128d a, __m128d b);
VDIVSD __m128d _mm_div_round_sd(__m128d a, __m128d b, int);
VDIVSD __m128d _mm_mask_div_round_sd(__m128d s, __mmask8 k, __m128d a, __m128d b, int);
VDIVSD __m128d _mm_maskz_div_round_sd(__mmask8 k, __m128d a, __m128d b, int);
DIVSD  __m128d _mm_div_sd(__m128d a, __m128d b);

SIMD Floating-Point Exceptions

Overflow, Underflow, Invalid, Divide-by-Zero, Precision, Denormal

Other Exceptions

VEX-encoded instructions, see Table 2-20, “Type 3 Class Exception Conditions”.
EVEX-encoded instructions, see Table 2-47, “Type E3 Class Exception Conditions”.
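A hedged sketch of the EVEX-only behaviors described in the Operation section (writemasking and embedded rounding), using the mask and round intrinsics listed above; this assumes a compiler and CPU with AVX512F support (e.g. compiled with -mavx512f), and the values and mask are illustrative only:

#include <immintrin.h>  /* AVX-512F scalar divide intrinsics */
#include <stdio.h>

int main(void)
{
    __m128d s = _mm_set_pd(0.0, -1.0);   /* merge source for the masked form */
    __m128d a = _mm_set_pd(7.0, 10.0);
    __m128d b = _mm_set_pd(99.0, 4.0);

    /* k1[0] = 0 with merging-masking: the low element is taken from s
       (-1.0) instead of being computed; bits 127:64 still come from a. */
    __m128d merged = _mm_mask_div_sd(s, 0, a, b);

    /* k1[0] = 0 with zeroing-masking: the low element becomes +0.0. */
    __m128d zeroed = _mm_maskz_div_sd(0, a, b);

    /* Embedded rounding (EVEX.b = 1 with a register source): overrides
       MXCSR.RC for this instruction only; SAE (_MM_FROUND_NO_EXC) is
       required when forcing a rounding mode. */
    __m128d rounded = _mm_div_round_sd(a, b,
                          _MM_FROUND_TO_NEG_INF | _MM_FROUND_NO_EXC);

    printf("%g %g %g\n", _mm_cvtsd_f64(merged), _mm_cvtsd_f64(zeroed),
           _mm_cvtsd_f64(rounded));  /* -1 0 2.5 */
    return 0;
}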

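The SIMD floating-point exceptions listed above are reported as sticky status flags in MXCSR and trap only if unmasked there. A hedged sketch of polling the divide-by-zero flag with the standard _MM_* MXCSR helpers, assuming default (masked) exception handling; build without aggressive constant folding (e.g. -O0) so the division actually executes at run time:

#include <emmintrin.h>   /* SSE2: _mm_set_sd, _mm_div_sd */
#include <xmmintrin.h>   /* _MM_GET/SET_EXCEPTION_STATE, _MM_EXCEPT_DIV_ZERO */
#include <stdio.h>

int main(void)
{
    _MM_SET_EXCEPTION_STATE(0);      /* clear sticky MXCSR status flags */

    __m128d a = _mm_set_sd(1.0);
    __m128d b = _mm_set_sd(0.0);
    __m128d r = _mm_div_sd(a, b);    /* 1.0 / +0.0 -> +inf, #Z flagged */

    if (_MM_GET_EXCEPTION_STATE() & _MM_EXCEPT_DIV_ZERO)
        printf("divide-by-zero raised, result = %g\n", _mm_cvtsd_f64(r));
    return 0;
}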
This UNOFFICIAL reference was generated from the official Intel® 64 and IA-32 Architectures Software Developer’s Manual by a dumb script. There is no guarantee that some parts aren’t mangled or broken, and it is distributed WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.