MULPD—Multiply Packed Double-Precision Floating-Point Values

Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
66 0F 59 /r MULPD xmm1, xmm2/m128 | A | V/V | SSE2 | Multiply packed double-precision floating-point values in xmm2/m128 with xmm1 and store result in xmm1.
VEX.128.66.0F.WIG 59 /r VMULPD xmm1, xmm2, xmm3/m128 | B | V/V | AVX | Multiply packed double-precision floating-point values in xmm3/m128 with xmm2 and store result in xmm1.
VEX.256.66.0F.WIG 59 /r VMULPD ymm1, ymm2, ymm3/m256 | B | V/V | AVX | Multiply packed double-precision floating-point values in ymm3/m256 with ymm2 and store result in ymm1.
EVEX.128.66.0F.W1 59 /r VMULPD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcst | C | V/V | AVX512VL AVX512F | Multiply packed double-precision floating-point values from xmm3/m128/m64bcst to xmm2 and store result in xmm1.
EVEX.256.66.0F.W1 59 /r VMULPD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcst | C | V/V | AVX512VL AVX512F | Multiply packed double-precision floating-point values from ymm3/m256/m64bcst to ymm2 and store result in ymm1.
EVEX.512.66.0F.W1 59 /r VMULPD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst{er} | C | V/V | AVX512F | Multiply packed double-precision floating-point values in zmm3/m512/m64bcst with zmm2 and store result in zmm1.

Instruction Operand Encoding

Op/En | Tuple Type | Operand 1 | Operand 2 | Operand 3 | Operand 4
A | NA | ModRM:reg (r, w) | ModRM:r/m (r) | NA | NA
B | NA | ModRM:reg (w) | VEX.vvvv (r) | ModRM:r/m (r) | NA
C | Full | ModRM:reg (w) | EVEX.vvvv (r) | ModRM:r/m (r) | NA

Description

Multiplies the packed double-precision floating-point values from the first source operand with the corresponding values in the second source operand, and stores the packed double-precision floating-point results in the destination operand.

EVEX encoded versions: The first source operand (the second operand) is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location, or a 512/256/128-bit vector broadcast from a 64-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with writemask k1.

VEX.256 encoded version: The first source operand is a YMM register. The second source operand can be a YMM register or a 256-bit memory location. The destination operand is a YMM register. Bits (MAXVL-1:256) of the corresponding destination ZMM register are zeroed.

VEX.128 encoded version: The first source operand is an XMM register. The second source operand can be an XMM register or a 128-bit memory location. The destination operand is an XMM register. Bits (MAXVL-1:128) of the corresponding destination ZMM register are zeroed.

128-bit Legacy SSE version: The second source can be an XMM register or a 128-bit memory location. The destination is not distinct from the first source XMM register, and the upper bits (MAXVL-1:128) of the corresponding destination ZMM register are unmodified.
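For orientation, here is a minimal C sketch of the legacy 128-bit form via the _mm_mul_pd intrinsic (listed under the intrinsic equivalents below). The build setup is an assumption — GCC/Clang on x86-64, where SSE2 is baseline — and an optimizing compiler will typically lower the intrinsic to MULPD, or to VMULPD when AVX code generation is enabled.

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    __m128d a = _mm_set_pd(3.0, 2.0);   /* lanes: a[0] = 2.0, a[1] = 3.0 */
    __m128d b = _mm_set_pd(0.5, 4.0);   /* lanes: b[0] = 4.0, b[1] = 0.5 */
    __m128d p = _mm_mul_pd(a, b);       /* lane-wise products: [8.0, 1.5] */

    double out[2];
    _mm_storeu_pd(out, p);
    printf("%f %f\n", out[0], out[1]);  /* prints 8.000000 1.500000 */
    return 0;
}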

Operation

VMULPD (EVEX encoded versions)
(KL, VL) = (2, 128), (4, 256), (8, 512)
IF (VL = 512) AND (EVEX.b = 1) AND SRC2 *is a register*
    THEN SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
    ELSE SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
FI;
FOR j := 0 TO KL-1
    i := j * 64
    IF k1[j] OR *no writemask*
        THEN
            IF (EVEX.b = 1) AND (SRC2 *is memory*)
                THEN DEST[i+63:i] := SRC1[i+63:i] * SRC2[63:0]
                ELSE DEST[i+63:i] := SRC1[i+63:i] * SRC2[i+63:i]
            FI;
        ELSE
            IF *merging-masking* ; merging-masking
                THEN *DEST[i+63:i] remains unchanged*
                ELSE ; zeroing-masking
                    DEST[i+63:i] := 0
            FI
    FI;
ENDFOR
DEST[MAXVL-1:VL] := 0

VMULPD (VEX.256 encoded version)
DEST[63:0] := SRC1[63:0] * SRC2[63:0]
DEST[127:64] := SRC1[127:64] * SRC2[127:64]
DEST[191:128] := SRC1[191:128] * SRC2[191:128]
DEST[255:192] := SRC1[255:192] * SRC2[255:192]
DEST[MAXVL-1:256] := 0

VMULPD (VEX.128 encoded version)
DEST[63:0] := SRC1[63:0] * SRC2[63:0]
DEST[127:64] := SRC1[127:64] * SRC2[127:64]
DEST[MAXVL-1:128] := 0

MULPD (128-bit Legacy SSE version)
DEST[63:0] := DEST[63:0] * SRC[63:0]
DEST[127:64] := DEST[127:64] * SRC[127:64]
DEST[MAXVL-1:128] (Unmodified)
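To make the EVEX masking and broadcast semantics above concrete, the following scalar C model walks the same per-lane loop. It is an illustration only: the function name vmulpd_model is invented, and the {er} static-rounding selection and MXCSR-driven rounding are not modeled.

#include <stdint.h>
#include <stddef.h>

/* Scalar model of the EVEX VMULPD loop (illustrative, not the hardware).
   kl      = number of 64-bit lanes: 2, 4, or 8 for VL = 128/256/512
   k1      = writemask; bit j gates lane j (all-ones models "no writemask")
   zeroing = nonzero models {z} (zeroing-masking); zero models merging-masking
   bcast   = nonzero models the m64bcst form: src2[0] is used for every lane */
static void vmulpd_model(double *dest, const double *src1,
                         const double *src2, size_t kl,
                         uint8_t k1, int zeroing, int bcast)
{
    for (size_t j = 0; j < kl; j++) {
        if (k1 & (1u << j)) {
            dest[j] = src1[j] * (bcast ? src2[0] : src2[j]);
        } else if (zeroing) {
            dest[j] = 0.0;   /* zeroing-masking: masked-off lanes are cleared */
        }
        /* else merging-masking: masked-off lanes keep their previous value */
    }
    /* The real instruction also zeroes DEST[MAXVL-1:VL]; a kl-lane scalar
       model has no upper lanes to clear. */
}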

Intel C/C++ Compiler Intrinsic Equivalent

VMULPD __m512d _mm512_mul_pd( __m512d a, __m512d b);
VMULPD __m512d _mm512_mask_mul_pd(__m512d s, __mmask8 k, __m512d a, __m512d b);
VMULPD __m512d _mm512_maskz_mul_pd( __mmask8 k, __m512d a, __m512d b);
VMULPD __m512d _mm512_mul_round_pd( __m512d a, __m512d b, int);
VMULPD __m512d _mm512_mask_mul_round_pd(__m512d s, __mmask8 k, __m512d a, __m512d b, int);
VMULPD __m512d _mm512_maskz_mul_round_pd( __mmask8 k, __m512d a, __m512d b, int);
VMULPD __m256d _mm256_mul_pd (__m256d a, __m256d b);
MULPD __m128d _mm_mul_pd (__m128d a, __m128d b);

SIMD Floating-Point Exceptions

Overflow, Underflow, Invalid, Precision, Denormal

Other Exceptions

Non-EVEX-encoded instruction, see Table 2-19, “Type 2 Class Exception Conditions”.
EVEX-encoded instruction, see Table 2-46, “Type E2 Class Exception Conditions”.
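As a usage sketch of the masked AVX-512 forms listed above (the helper name masked_product and the mask value are illustrative, not from the manual; requires AVX-512F, e.g. compiled with -mavx512f on GCC/Clang):

#include <immintrin.h>

__m512d masked_product(__m512d src, __m512d a, __m512d b)
{
    __mmask8 k = 0xAA;                        /* select odd lanes only */
    /* Merging form: lanes with a 0 mask bit keep the value from src.
       _mm512_maskz_mul_pd would zero those lanes instead. */
    return _mm512_mask_mul_pd(src, k, a, b);  /* VMULPD zmm1 {k1}, zmm2, zmm3 */
}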

This UNOFFICIAL reference was generated from the official Intel® 64 and IA-32 Architectures Software Developer’s Manual by a dumb script. There is no guarantee that some parts aren't mangled or broken. It is distributed WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.