VPBROADCASTB/W/D/Q—Load with Broadcast Integer Data from General Purpose Register

Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
EVEX.128.66.0F38.W0 7A /r VPBROADCASTB xmm1 {k1}{z}, reg | A | V/V | AVX512VL AVX512BW | Broadcast an 8-bit value from a GPR to all bytes in the 128-bit destination subject to writemask k1.
EVEX.256.66.0F38.W0 7A /r VPBROADCASTB ymm1 {k1}{z}, reg | A | V/V | AVX512VL AVX512BW | Broadcast an 8-bit value from a GPR to all bytes in the 256-bit destination subject to writemask k1.
EVEX.512.66.0F38.W0 7A /r VPBROADCASTB zmm1 {k1}{z}, reg | A | V/V | AVX512BW | Broadcast an 8-bit value from a GPR to all bytes in the 512-bit destination subject to writemask k1.
EVEX.128.66.0F38.W0 7B /r VPBROADCASTW xmm1 {k1}{z}, reg | A | V/V | AVX512VL AVX512BW | Broadcast a 16-bit value from a GPR to all words in the 128-bit destination subject to writemask k1.
EVEX.256.66.0F38.W0 7B /r VPBROADCASTW ymm1 {k1}{z}, reg | A | V/V | AVX512VL AVX512BW | Broadcast a 16-bit value from a GPR to all words in the 256-bit destination subject to writemask k1.
EVEX.512.66.0F38.W0 7B /r VPBROADCASTW zmm1 {k1}{z}, reg | A | V/V | AVX512BW | Broadcast a 16-bit value from a GPR to all words in the 512-bit destination subject to writemask k1.
EVEX.128.66.0F38.W0 7C /r VPBROADCASTD xmm1 {k1}{z}, r32 | A | V/V | AVX512VL AVX512F | Broadcast a 32-bit value from a GPR to all double-words in the 128-bit destination subject to writemask k1.
EVEX.256.66.0F38.W0 7C /r VPBROADCASTD ymm1 {k1}{z}, r32 | A | V/V | AVX512VL AVX512F | Broadcast a 32-bit value from a GPR to all double-words in the 256-bit destination subject to writemask k1.
EVEX.512.66.0F38.W0 7C /r VPBROADCASTD zmm1 {k1}{z}, r32 | A | V/V | AVX512F | Broadcast a 32-bit value from a GPR to all double-words in the 512-bit destination subject to writemask k1.
EVEX.128.66.0F38.W1 7C /r VPBROADCASTQ xmm1 {k1}{z}, r64 | A | V/N.E.1 | AVX512VL AVX512F | Broadcast a 64-bit value from a GPR to all quad-words in the 128-bit destination subject to writemask k1.
EVEX.256.66.0F38.W1 7C /r VPBROADCASTQ ymm1 {k1}{z}, r64 | A | V/N.E.1 | AVX512VL AVX512F | Broadcast a 64-bit value from a GPR to all quad-words in the 256-bit destination subject to writemask k1.
EVEX.512.66.0F38.W1 7C /r VPBROADCASTQ zmm1 {k1}{z}, r64 | A | V/N.E.1 | AVX512F | Broadcast a 64-bit value from a GPR to all quad-words in the 512-bit destination subject to writemask k1.

NOTES:
1. EVEX.W in non-64 bit mode is ignored; the instruction behaves as if the W0 version is used.

Instruction Operand Encoding

Op/En | Tuple Type | Operand 1 | Operand 2 | Operand 3 | Operand 4
A | Tuple1 Scalar | ModRM:reg (w) | ModRM:r/m (r) | NA | NA

Description

Broadcasts an 8-bit, 16-bit, 32-bit or 64-bit value from a general-purpose register (the second operand) to all the locations in the destination vector register (the first operand) using the writemask k1.

EVEX.vvvv is reserved and must be 1111b, otherwise the instruction will #UD.
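
As an illustration of the behavior described above, the following sketch (not part of the manual) broadcasts a value held in a general-purpose register to every doubleword of a ZMM register under a writemask, using the masked set1 intrinsics listed in the Intrinsic Equivalent section below. It assumes a compiler and CPU with AVX512F; whether the code generator actually emits the GPR form of VPBROADCASTD (opcode 7C) rather than moving the value through an XMM register first is its own choice.

    #include <immintrin.h>
    #include <stdio.h>

    int main(void)
    {
        int v = 0x11223344;                   /* value sitting in a GPR */
        __m512i old = _mm512_set1_epi32(-1);  /* prior destination contents */
        __mmask16 k = 0x00FF;                 /* write only the low 8 doublewords */

        __m512i merged = _mm512_mask_set1_epi32(old, k, v);  /* merging-masking */
        __m512i zeroed = _mm512_maskz_set1_epi32(k, v);      /* zeroing-masking */

        int out[16];
        _mm512_storeu_si512(out, merged);
        printf("merged: %08x ... %08x\n", (unsigned)out[0], (unsigned)out[15]);  /* 11223344 ... ffffffff */
        _mm512_storeu_si512(out, zeroed);
        printf("zeroed: %08x ... %08x\n", (unsigned)out[0], (unsigned)out[15]);  /* 11223344 ... 00000000 */
        return 0;
    }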

Operation

VPBROADCASTB (EVEX encoded versions)

(KL, VL) = (16, 128), (32, 256), (64, 512)
FOR j := 0 TO KL-1
    i := j * 8
    IF k1[j] OR *no writemask*
        THEN DEST[i+7:i] := SRC[7:0]
        ELSE
            IF *merging-masking* ; merging-masking
                THEN *DEST[i+7:i] remains unchanged*
                ELSE ; zeroing-masking
                    DEST[i+7:i] := 0
            FI
    FI;
ENDFOR
DEST[MAXVL-1:VL] := 0

VPBROADCASTW (EVEX encoded versions)

(KL, VL) = (8, 128), (16, 256), (32, 512)
FOR j := 0 TO KL-1
    i := j * 16
    IF k1[j] OR *no writemask*
        THEN DEST[i+15:i] := SRC[15:0]
        ELSE
            IF *merging-masking* ; merging-masking
                THEN *DEST[i+15:i] remains unchanged*
                ELSE ; zeroing-masking
                    DEST[i+15:i] := 0
            FI
    FI;
ENDFOR
DEST[MAXVL-1:VL] := 0

VPBROADCASTD (EVEX encoded versions)

(KL, VL) = (4, 128), (8, 256), (16, 512)
FOR j := 0 TO KL-1
    i := j * 32
    IF k1[j] OR *no writemask*
        THEN DEST[i+31:i] := SRC[31:0]
        ELSE
            IF *merging-masking* ; merging-masking
                THEN *DEST[i+31:i] remains unchanged*
                ELSE ; zeroing-masking
                    DEST[i+31:i] := 0
            FI
    FI;
ENDFOR
DEST[MAXVL-1:VL] := 0

VPBROADCASTQ (EVEX encoded versions)

(KL, VL) = (2, 128), (4, 256), (8, 512)
FOR j := 0 TO KL-1
    i := j * 64
    IF k1[j] OR *no writemask*
        THEN DEST[i+63:i] := SRC[63:0]
        ELSE
            IF *merging-masking* ; merging-masking
                THEN *DEST[i+63:i] remains unchanged*
                ELSE ; zeroing-masking
                    DEST[i+63:i] := 0
            FI
    FI;
ENDFOR
DEST[MAXVL-1:VL] := 0

Intel C/C++ Compiler Intrinsic Equivalent

VPBROADCASTB __m512i _mm512_mask_set1_epi8(__m512i s, __mmask64 k, int a);
VPBROADCASTB __m512i _mm512_maskz_set1_epi8(__mmask64 k, int a);
VPBROADCASTB __m256i _mm256_mask_set1_epi8(__m256i s, __mmask32 k, int a);
VPBROADCASTB __m256i _mm256_maskz_set1_epi8(__mmask32 k, int a);
VPBROADCASTB __m128i _mm_mask_set1_epi8(__m128i s, __mmask16 k, int a);
VPBROADCASTB __m128i _mm_maskz_set1_epi8(__mmask16 k, int a);
VPBROADCASTD __m512i _mm512_mask_set1_epi32(__m512i s, __mmask16 k, int a);
VPBROADCASTD __m512i _mm512_maskz_set1_epi32(__mmask16 k, int a);
VPBROADCASTD __m256i _mm256_mask_set1_epi32(__m256i s, __mmask8 k, int a);
VPBROADCASTD __m256i _mm256_maskz_set1_epi32(__mmask8 k, int a);
VPBROADCASTD __m128i _mm_mask_set1_epi32(__m128i s, __mmask8 k, int a);
VPBROADCASTD __m128i _mm_maskz_set1_epi32(__mmask8 k, int a);
VPBROADCASTQ __m512i _mm512_mask_set1_epi64(__m512i s, __mmask8 k, __int64 a);
VPBROADCASTQ __m512i _mm512_maskz_set1_epi64(__mmask8 k, __int64 a);
VPBROADCASTQ __m256i _mm256_mask_set1_epi64(__m256i s, __mmask8 k, __int64 a);
VPBROADCASTQ __m256i _mm256_maskz_set1_epi64(__mmask8 k, __int64 a);
VPBROADCASTQ __m128i _mm_mask_set1_epi64(__m128i s, __mmask8 k, __int64 a);
VPBROADCASTQ __m128i _mm_maskz_set1_epi64(__mmask8 k, __int64 a);
VPBROADCASTW __m512i _mm512_mask_set1_epi16(__m512i s, __mmask32 k, int a);
VPBROADCASTW __m512i _mm512_maskz_set1_epi16(__mmask32 k, int a);
VPBROADCASTW __m256i _mm256_mask_set1_epi16(__m256i s, __mmask16 k, int a);
VPBROADCASTW __m256i _mm256_maskz_set1_epi16(__mmask16 k, int a);
VPBROADCASTW __m128i _mm_mask_set1_epi16(__m128i s, __mmask8 k, int a);
VPBROADCASTW __m128i _mm_maskz_set1_epi16(__mmask8 k, int a);

Exceptions

EVEX-encoded instructions, see Table 2-55, “Type E7NM Class Exception Conditions”; additionally:

#UD    If EVEX.vvvv != 1111B.
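
The Operation pseudocode above can also be read as a scalar reference model. The following sketch (not part of the manual; the function name and layout are illustrative only) models the VPBROADCASTD form at VL = 512 (KL = 16), with the writemask selecting between broadcasting, merging-masking, and zeroing-masking; the no-writemask case corresponds to an all-ones mask.

    #include <stdint.h>

    /* Illustrative scalar model of VPBROADCASTD (EVEX, VL = 512, KL = 16).
     * dest receives the result, prev holds the prior destination contents,
     * src is the 32-bit GPR value, k1 is the writemask, and zeroing selects
     * zeroing-masking ({z}) versus merging-masking. Encoding details and the
     * #UD check on EVEX.vvvv are not modeled. */
    static void vpbroadcastd_model(uint32_t dest[16], const uint32_t prev[16],
                                   uint32_t src, uint16_t k1, int zeroing)
    {
        for (int j = 0; j < 16; j++) {
            if ((k1 >> j) & 1)
                dest[j] = src;        /* DEST[i+31:i] := SRC[31:0] */
            else if (zeroing)
                dest[j] = 0;          /* zeroing-masking */
            else
                dest[j] = prev[j];    /* merging-masking: element unchanged */
        }
        /* DEST[MAXVL-1:VL] := 0 is a no-op here because VL equals MAXVL (512). */
    }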

This UNOFFICIAL reference was generated from the official Intel® 64 and IA-32 Architectures Software Developer’s Manual by a dumb script. There is no guarantee that some parts aren't mangled or broken, and it is distributed WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.