VPBLENDMD/VPBLENDMQ—Blend Int32/Int64 Vectors Using an OpMask Control

EVEX.128.66.0F38.W0 64 /r
VPBLENDMD xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcst
Op/En: A. 64/32-bit Mode: V/V. CPUID: AVX512VL, AVX512F.
Blend doubleword integer vector xmm2 and doubleword vector xmm3/m128/m32bcst and store the result in xmm1, under control mask.

EVEX.256.66.0F38.W0 64 /r
VPBLENDMD ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcst
Op/En: A. 64/32-bit Mode: V/V. CPUID: AVX512VL, AVX512F.
Blend doubleword integer vector ymm2 and doubleword vector ymm3/m256/m32bcst and store the result in ymm1, under control mask.

EVEX.512.66.0F38.W0 64 /r
VPBLENDMD zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst
Op/En: A. 64/32-bit Mode: V/V. CPUID: AVX512F.
Blend doubleword integer vector zmm2 and doubleword vector zmm3/m512/m32bcst and store the result in zmm1, under control mask.

EVEX.128.66.0F38.W1 64 /r
VPBLENDMQ xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcst
Op/En: A. 64/32-bit Mode: V/V. CPUID: AVX512VL, AVX512F.
Blend quadword integer vector xmm2 and quadword vector xmm3/m128/m64bcst and store the result in xmm1, under control mask.

EVEX.256.66.0F38.W1 64 /r
VPBLENDMQ ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcst
Op/En: A. 64/32-bit Mode: V/V. CPUID: AVX512VL, AVX512F.
Blend quadword integer vector ymm2 and quadword vector ymm3/m256/m64bcst and store the result in ymm1, under control mask.

EVEX.512.66.0F38.W1 64 /r
VPBLENDMQ zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst
Op/En: A. 64/32-bit Mode: V/V. CPUID: AVX512F.
Blend quadword integer vector zmm2 and quadword vector zmm3/m512/m64bcst and store the result in zmm1, under control mask.

Instruction Operand Encoding

Op/En: A. Tuple Type: Full.
Operand 1: ModRM:reg (w). Operand 2: EVEX.vvvv (r). Operand 3: ModRM:r/m (r). Operand 4: NA.

Description

Performs an element-by-element blending of dword/qword elements between the first source operand (the second operand) and the elements of the second source operand (the third operand), using an opmask register as the select control. The blended result is written into the destination. The destination and first source operands are ZMM, YMM, or XMM registers. The second source operand can be a register of the same width, a 512/256/128-bit memory location, or a vector broadcast from a 32-bit (VPBLENDMD) or 64-bit (VPBLENDMQ) memory location.

The opmask register is not used as a writemask for this instruction. Instead, the mask is used as an element selector: every element of the destination is conditionally selected from the first source or the second source using the value of the corresponding mask bit (0 for the first source operand, 1 for the second source operand).

If EVEX.z is set, the elements with corresponding mask bit value of 0 in the destination operand are zeroed.

Operation

VPBLENDMD (EVEX encoded versions)
(KL, VL) = (4, 128), (8, 256), (16, 512)
FOR j := 0 TO KL-1
    i := j * 32
    IF k1[j] OR *no controlmask*
        THEN
            IF (EVEX.b = 1) AND (SRC2 *is memory*)
                THEN DEST[i+31:i] := SRC2[31:0]
                ELSE DEST[i+31:i] := SRC2[i+31:i]
            FI;
        ELSE IF *merging-masking* ; merging-masking
            THEN DEST[i+31:i] := SRC1[i+31:i]
            ELSE ; zeroing-masking
                DEST[i+31:i] := 0
        FI;
    FI;
ENDFOR
DEST[MAXVL-1:VL] := 0

VPBLENDMQ (EVEX encoded versions)
(KL, VL) = (2, 128), (4, 256), (8, 512)
FOR j := 0 TO KL-1
    i := j * 64
    IF k1[j] OR *no controlmask*
        THEN
            IF (EVEX.b = 1) AND (SRC2 *is memory*)
                THEN DEST[i+63:i] := SRC2[63:0]
                ELSE DEST[i+63:i] := SRC2[i+63:i]
            FI;
        ELSE IF *merging-masking* ; merging-masking
            THEN DEST[i+63:i] := SRC1[i+63:i]
            ELSE ; zeroing-masking
                DEST[i+63:i] := 0
        FI;
    FI;
ENDFOR
DEST[MAXVL-1:VL] := 0
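As an executable cross-check of the pseudocode above, the following Python sketch models the VPBLENDMD element loop, including the embedded-broadcast (EVEX.b with a memory operand) and zeroing-masking cases. The function name and the list-based register model are illustrative only, not part of the ISA.

```python
def vpblendmd(k1, src1, src2, kl, broadcast=False, zeroing=False, masked=True):
    """Model the VPBLENDMD element loop from the Operation section.

    k1    -- integer opmask; bit j selects the source for element j
    src1  -- first source: list of KL dword values
    src2  -- second source: list of KL dwords (or one dword if broadcast)
    masked=False models the *no controlmask* case: all elements from src2.
    """
    dest = []
    for j in range(kl):
        if (not masked) or (k1 >> j) & 1:
            # EVEX.b with a memory operand broadcasts element 0 of SRC2
            dest.append(src2[0] if broadcast else src2[j])
        elif not zeroing:
            dest.append(src1[j])   # merging-masking keeps the first source
        else:
            dest.append(0)         # zeroing-masking clears the element
    return dest

# Mask bit 0 -> first source, mask bit 1 -> second source (KL = 4, 128-bit form)
a = [10, 11, 12, 13]
b = [20, 21, 22, 23]
print(vpblendmd(0b0101, a, b, kl=4))                 # [20, 11, 22, 13]
print(vpblendmd(0b0101, a, b, kl=4, zeroing=True))   # [20, 0, 22, 0]
```

Note that with EVEX.z clear, a mask bit of 0 still selects the first source rather than leaving the destination untouched, which is what distinguishes this selector behavior from ordinary writemasking.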

Intel C/C++ Compiler Intrinsic Equivalent

VPBLENDMD __m512i _mm512_mask_blend_epi32(__mmask16 k, __m512i a, __m512i b);
VPBLENDMD __m256i _mm256_mask_blend_epi32(__mmask8 m, __m256i a, __m256i b);
VPBLENDMD __m128i _mm_mask_blend_epi32(__mmask8 m, __m128i a, __m128i b);
VPBLENDMQ __m512i _mm512_mask_blend_epi64(__mmask8 k, __m512i a, __m512i b);
VPBLENDMQ __m256i _mm256_mask_blend_epi64(__mmask8 m, __m256i a, __m256i b);
VPBLENDMQ __m128i _mm_mask_blend_epi64(__mmask8 m, __m128i a, __m128i b);

SIMD Floating-Point Exceptions

None

Other Exceptions

See Table 2-49, “Type E4 Class Exception Conditions”.
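The intrinsics above require AVX-512 hardware, so as a portable illustration of how they are typically used, this Python sketch mimics the selector convention of _mm512_mask_blend_epi32 (mask bit 1 picks b, bit 0 picks a) to build a per-element maximum from a comparison mask. The helper names are invented for the example; they are not real intrinsics.

```python
def mask_blend_epi32(k, a, b):
    # Bit j of k chooses b[j] when set, a[j] otherwise -- the
    # selector convention of _mm512_mask_blend_epi32.
    return [b[j] if (k >> j) & 1 else a[j] for j in range(len(a))]

a = [5, 1, 8, 3]
b = [2, 9, 7, 6]
# Build the mask from a per-element compare (the role VPCMPD would play),
# then blend: the result is an element-wise max of a and b.
k = sum(1 << j for j in range(len(a)) if a[j] < b[j])
print(mask_blend_epi32(k, a, b))  # [5, 9, 8, 6]
```

This compare-then-blend pattern is the common idiom for VPBLENDMD/VPBLENDMQ: a predicate instruction produces the opmask, and the blend materializes the selection without a branch.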

This UNOFFICIAL reference was generated from the official Intel® 64 and IA-32 Architectures Software Developer’s Manual by a dumb script. There is no guarantee that some parts aren’t mangled or broken. It is distributed WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.