VFNMSUB132SD/VFNMSUB213SD/VFNMSUB231SD—Fused Negative Multiply-Subtract of Scalar Double-Precision Floating-Point Values

Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
VEX.LIG.66.0F38.W1 9F /r VFNMSUB132SD xmm1, xmm2, xmm3/m64 | A | V/V | FMA | Multiply scalar double-precision floating-point value from xmm1 and xmm3/mem, negate the multiplication result and subtract xmm2 and put result in xmm1.
VEX.LIG.66.0F38.W1 AF /r VFNMSUB213SD xmm1, xmm2, xmm3/m64 | A | V/V | FMA | Multiply scalar double-precision floating-point value from xmm1 and xmm2, negate the multiplication result and subtract xmm3/mem and put result in xmm1.
VEX.LIG.66.0F38.W1 BF /r VFNMSUB231SD xmm1, xmm2, xmm3/m64 | A | V/V | FMA | Multiply scalar double-precision floating-point value from xmm2 and xmm3/mem, negate the multiplication result and subtract xmm1 and put result in xmm1.
EVEX.LLIG.66.0F38.W1 9F /r VFNMSUB132SD xmm1 {k1}{z}, xmm2, xmm3/m64{er} | B | V/V | AVX512F | Multiply scalar double-precision floating-point value from xmm1 and xmm3/m64, negate the multiplication result and subtract xmm2 and put result in xmm1.
EVEX.LLIG.66.0F38.W1 AF /r VFNMSUB213SD xmm1 {k1}{z}, xmm2, xmm3/m64{er} | B | V/V | AVX512F | Multiply scalar double-precision floating-point value from xmm1 and xmm2, negate the multiplication result and subtract xmm3/m64 and put result in xmm1.
EVEX.LLIG.66.0F38.W1 BF /r VFNMSUB231SD xmm1 {k1}{z}, xmm2, xmm3/m64{er} | B | V/V | AVX512F | Multiply scalar double-precision floating-point value from xmm2 and xmm3/m64, negate the multiplication result and subtract xmm1 and put result in xmm1.

Instruction Operand Encoding

Op/En | Tuple Type | Operand 1 | Operand 2 | Operand 3 | Operand 4
A | NA | ModRM:reg (r, w) | VEX.vvvv (r) | ModRM:r/m (r) | NA
B | Tuple1 Scalar | ModRM:reg (r, w) | EVEX.vvvv (r) | ModRM:r/m (r) | NA

Description

VFNMSUB132SD: Multiplies the low packed double-precision floating-point value from the first source operand by the low packed double-precision floating-point value in the third source operand. From the negated infinite-precision intermediate result, subtracts the low double-precision floating-point value in the second source operand, performs rounding, and stores the resulting packed double-precision floating-point value in the destination operand (first source operand).

VFNMSUB213SD: Multiplies the low packed double-precision floating-point value from the second source operand by the low packed double-precision floating-point value in the first source operand. From the negated infinite-precision intermediate result, subtracts the low double-precision floating-point value in the third source operand, performs rounding, and stores the resulting packed double-precision floating-point value in the destination operand (first source operand).

VFNMSUB231SD: Multiplies the low packed double-precision floating-point value from the second source operand by the low packed double-precision floating-point value in the third source operand. From the negated infinite-precision intermediate result, subtracts the low double-precision floating-point value in the first source operand, performs rounding, and stores the resulting packed double-precision floating-point value in the destination operand (first source operand).

VEX.128 and EVEX encoded version: The destination operand (also the first source operand) is encoded in reg_field. The second source operand is encoded in VEX.vvvv/EVEX.vvvv. The third source operand is encoded in rm_field. Bits 127:64 of the destination are unchanged. Bits MAXVL-1:128 of the destination register are zeroed.

EVEX encoded version: The low quadword element of the destination is updated according to the writemask.

Compiler tools may optionally support a complementary mnemonic for each instruction mnemonic listed in the opcode/instruction column of the summary table. The behavior of the complementary mnemonic in situations involving NaNs is governed by the definition of the instruction mnemonic defined in the opcode/instruction column.
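The three forms differ only in which operands supply the two multiplicands and the subtrahend; the low-element arithmetic is always DEST := -(mul1 * mul2) - sub with a single rounding at the end. The C sketch below is not part of the manual: it models that scalar arithmetic with the C99 fma() function, under the assumption that the math library provides a correctly rounded (fused) fma; the function and parameter names are illustrative only.

/* Hedged sketch, not Intel pseudocode: models the scalar arithmetic
 * DEST := -(mul1 * mul2) - sub with a single rounding, via C99 fma().
 * Operand roles per form (names here are illustrative):
 *   VFNMSUB132SD: mul1 = xmm1, mul2 = xmm3/m64, sub = xmm2
 *   VFNMSUB213SD: mul1 = xmm2, mul2 = xmm1,     sub = xmm3/m64
 *   VFNMSUB231SD: mul1 = xmm2, mul2 = xmm3/m64, sub = xmm1
 * Only the low element is modeled; the instruction leaves bits 127:64
 * of the destination unchanged and zeroes bits MAXVL-1:128.
 */
#include <math.h>

static double vfnmsub_sd_model(double mul1, double mul2, double sub)
{
    /* fma() rounds once, so -(mul1*mul2) - sub is computed as
       (-mul1)*mul2 + (-sub) with fused semantics. Assumes fma() maps to
       a hardware FMA or a correctly rounded software fallback. */
    return fma(-mul1, mul2, -sub);
}

For example, vfnmsub_sd_model(2.0, 3.0, 4.0) evaluates -(2.0*3.0) - 4.0 = -10.0.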
Operation

In the operations below, "*" and "-" symbols represent multiplication and subtraction with infinite precision inputs and outputs (no rounding).

VFNMSUB132SD DEST, SRC2, SRC3 (EVEX encoded version)
IF (EVEX.b = 1) and SRC3 *is a register*
    THEN SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
    ELSE SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
FI;
IF k1[0] or *no writemask*
    THEN DEST[63:0] := RoundFPControl(-(DEST[63:0]*SRC3[63:0]) - SRC2[63:0])
    ELSE
        IF *merging-masking* ; merging-masking
            THEN *DEST[63:0] remains unchanged*
            ELSE ; zeroing-masking
                THEN DEST[63:0] := 0
        FI;
FI;
DEST[127:64] := DEST[127:64]
DEST[MAXVL-1:128] := 0

VFNMSUB213SD DEST, SRC2, SRC3 (EVEX encoded version)
IF (EVEX.b = 1) and SRC3 *is a register*
    THEN SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
    ELSE SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
FI;
IF k1[0] or *no writemask*
    THEN DEST[63:0] := RoundFPControl(-(SRC2[63:0]*DEST[63:0]) - SRC3[63:0])
    ELSE
        IF *merging-masking* ; merging-masking
            THEN *DEST[63:0] remains unchanged*
            ELSE ; zeroing-masking
                THEN DEST[63:0] := 0
        FI;
FI;
DEST[127:64] := DEST[127:64]
DEST[MAXVL-1:128] := 0

VFNMSUB231SD DEST, SRC2, SRC3 (EVEX encoded version)
IF (EVEX.b = 1) and SRC3 *is a register*
    THEN SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
    ELSE SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
FI;
IF k1[0] or *no writemask*
    THEN DEST[63:0] := RoundFPControl(-(SRC2[63:0]*SRC3[63:0]) - DEST[63:0])
    ELSE
        IF *merging-masking* ; merging-masking
            THEN *DEST[63:0] remains unchanged*
            ELSE ; zeroing-masking
                THEN DEST[63:0] := 0
        FI;
FI;
DEST[127:64] := DEST[127:64]
DEST[MAXVL-1:128] := 0

VFNMSUB132SD DEST, SRC2, SRC3 (VEX encoded version)
DEST[63:0] := RoundFPControl_MXCSR(-(DEST[63:0]*SRC3[63:0]) - SRC2[63:0])
DEST[127:64] := DEST[127:64]
DEST[MAXVL-1:128] := 0

VFNMSUB213SD DEST, SRC2, SRC3 (VEX encoded version)
DEST[63:0] := RoundFPControl_MXCSR(-(SRC2[63:0]*DEST[63:0]) - SRC3[63:0])
DEST[127:64] := DEST[127:64]
DEST[MAXVL-1:128] := 0

VFNMSUB231SD DEST, SRC2, SRC3 (VEX encoded version)
DEST[63:0] := RoundFPControl_MXCSR(-(SRC2[63:0]*SRC3[63:0]) - DEST[63:0])
DEST[127:64] := DEST[127:64]
DEST[MAXVL-1:128] := 0

Intel C/C++ Compiler Intrinsic Equivalent

VFNMSUBxxxSD __m128d _mm_fnmsub_round_sd(__m128d a, __m128d b, __m128d c, int r);
VFNMSUBxxxSD __m128d _mm_mask_fnmsub_sd(__m128d a, __mmask8 k, __m128d b, __m128d c);
VFNMSUBxxxSD __m128d _mm_maskz_fnmsub_sd(__mmask8 k, __m128d a, __m128d b, __m128d c);
VFNMSUBxxxSD __m128d _mm_mask3_fnmsub_sd(__m128d a, __m128d b, __m128d c, __mmask8 k);
VFNMSUBxxxSD __m128d _mm_mask_fnmsub_round_sd(__m128d a, __mmask8 k, __m128d b, __m128d c, int r);
VFNMSUBxxxSD __m128d _mm_maskz_fnmsub_round_sd(__mmask8 k, __m128d a, __m128d b, __m128d c, int r);
VFNMSUBxxxSD __m128d _mm_mask3_fnmsub_round_sd(__m128d a, __m128d b, __m128d c, __mmask8 k, int r);
VFNMSUBxxxSD __m128d _mm_fnmsub_sd(__m128d a, __m128d b, __m128d c);
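The following usage illustration of the _mm_fnmsub_sd intrinsic listed above is not from the manual; it assumes an FMA-capable CPU and a compiler with FMA code generation enabled (e.g. gcc or clang with -mfma), and the variable names are illustrative.

/* Hedged usage sketch for _mm_fnmsub_sd. */
#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    __m128d a = _mm_set_sd(2.0);   /* low element: 2.0 */
    __m128d b = _mm_set_sd(3.0);   /* low element: 3.0 */
    __m128d c = _mm_set_sd(4.0);   /* low element: 4.0 */

    /* Low element: -(2.0 * 3.0) - 4.0 = -10.0, with a single rounding.
       The upper element of the result is taken from a, mirroring the
       instruction's "bits 127:64 unchanged" behavior for the
       destination/first source. */
    __m128d r = _mm_fnmsub_sd(a, b, c);

    printf("%f\n", _mm_cvtsd_f64(r));   /* prints -10.000000 */
    return 0;
}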
SIMD Floating-Point Exceptions

Overflow, Underflow, Invalid, Precision, Denormal.

Other Exceptions

VEX-encoded instructions, see Table 2-20, “Type 3 Class Exception Conditions”.
EVEX-encoded instructions, see Table 2-47, “Type E3 Class Exception Conditions”.

This UNOFFICIAL reference was generated from the official Intel® 64 and IA-32 Architectures Software Developer’s Manual by a dumb script. There is no guarantee that some parts aren't mangled or broken; it is distributed WITHOUT ANY WARRANTY, without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.