VBROADCAST—Load with Broadcast Floating-Point Data

Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
VEX.128.66.0F38.W0 18 /r VBROADCASTSS xmm1, m32 | A | V/V | AVX | Broadcast single-precision floating-point element in mem to four locations in xmm1.
VEX.256.66.0F38.W0 18 /r VBROADCASTSS ymm1, m32 | A | V/V | AVX | Broadcast single-precision floating-point element in mem to eight locations in ymm1.
VEX.256.66.0F38.W0 19 /r VBROADCASTSD ymm1, m64 | A | V/V | AVX | Broadcast double-precision floating-point element in mem to four locations in ymm1.
VEX.256.66.0F38.W0 1A /r VBROADCASTF128 ymm1, m128 | A | V/V | AVX | Broadcast 128 bits of floating-point data in mem to low and high 128-bits in ymm1.
VEX.128.66.0F38.W0 18 /r VBROADCASTSS xmm1, xmm2 | A | V/V | AVX2 | Broadcast the low single-precision floating-point element in the source operand to four locations in xmm1.
VEX.256.66.0F38.W0 18 /r VBROADCASTSS ymm1, xmm2 | A | V/V | AVX2 | Broadcast low single-precision floating-point element in the source operand to eight locations in ymm1.
VEX.256.66.0F38.W0 19 /r VBROADCASTSD ymm1, xmm2 | A | V/V | AVX2 | Broadcast low double-precision floating-point element in the source operand to four locations in ymm1.
EVEX.256.66.0F38.W1 19 /r VBROADCASTSD ymm1 {k1}{z}, xmm2/m64 | B | V/V | AVX512VL AVX512F | Broadcast low double-precision floating-point element in xmm2/m64 to four locations in ymm1 using writemask k1.
EVEX.512.66.0F38.W1 19 /r VBROADCASTSD zmm1 {k1}{z}, xmm2/m64 | B | V/V | AVX512F | Broadcast low double-precision floating-point element in xmm2/m64 to eight locations in zmm1 using writemask k1.
EVEX.256.66.0F38.W0 19 /r VBROADCASTF32X2 ymm1 {k1}{z}, xmm2/m64 | C | V/V | AVX512VL AVX512DQ | Broadcast two single-precision floating-point elements in xmm2/m64 to locations in ymm1 using writemask k1.
EVEX.512.66.0F38.W0 19 /r VBROADCASTF32X2 zmm1 {k1}{z}, xmm2/m64 | C | V/V | AVX512DQ | Broadcast two single-precision floating-point elements in xmm2/m64 to locations in zmm1 using writemask k1.
EVEX.128.66.0F38.W0 18 /r VBROADCASTSS xmm1 {k1}{z}, xmm2/m32 | B | V/V | AVX512VL AVX512F | Broadcast low single-precision floating-point element in xmm2/m32 to all locations in xmm1 using writemask k1.
EVEX.256.66.0F38.W0 18 /r VBROADCASTSS ymm1 {k1}{z}, xmm2/m32 | B | V/V | AVX512VL AVX512F | Broadcast low single-precision floating-point element in xmm2/m32 to all locations in ymm1 using writemask k1.
EVEX.512.66.0F38.W0 18 /r VBROADCASTSS zmm1 {k1}{z}, xmm2/m32 | B | V/V | AVX512F | Broadcast low single-precision floating-point element in xmm2/m32 to all locations in zmm1 using writemask k1.
EVEX.256.66.0F38.W0 1A /r VBROADCASTF32X4 ymm1 {k1}{z}, m128 | D | V/V | AVX512VL AVX512F | Broadcast 128 bits of 4 single-precision floating-point data in mem to locations in ymm1 using writemask k1.
EVEX.512.66.0F38.W0 1A /r VBROADCASTF32X4 zmm1 {k1}{z}, m128 | D | V/V | AVX512F | Broadcast 128 bits of 4 single-precision floating-point data in mem to locations in zmm1 using writemask k1.
EVEX.256.66.0F38.W1 1A /r VBROADCASTF64X2 ymm1 {k1}{z}, m128 | C | V/V | AVX512VL AVX512DQ | Broadcast 128 bits of 2 double-precision floating-point data in mem to locations in ymm1 using writemask k1.
EVEX.512.66.0F38.W1 1A /r VBROADCASTF64X2 zmm1 {k1}{z}, m128 | C | V/V | AVX512DQ | Broadcast 128 bits of 2 double-precision floating-point data in mem to locations in zmm1 using writemask k1.
EVEX.512.66.0F38.W0 1B /r VBROADCASTF32X8 zmm1 {k1}{z}, m256 | E | V/V | AVX512DQ | Broadcast 256 bits of 8 single-precision floating-point data in mem to locations in zmm1 using writemask k1.
EVEX.512.66.0F38.W1 1B /r VBROADCASTF64X4 zmm1 {k1}{z}, m256 | D | V/V | AVX512F | Broadcast 256 bits of 4 double-precision floating-point data in mem to locations in zmm1 using writemask k1.

Instruction Operand Encoding

Op/En | Tuple Type | Operand 1 | Operand 2 | Operand 3 | Operand 4
A | NA | ModRM:reg (w) | ModRM:r/m (r) | NA | NA
B | Tuple1 Scalar | ModRM:reg (w) | ModRM:r/m (r) | NA | NA
C | Tuple2 | ModRM:reg (w) | ModRM:r/m (r) | NA | NA
D | Tuple4 | ModRM:reg (w) | ModRM:r/m (r) | NA | NA
E | Tuple8 | ModRM:reg (w) | ModRM:r/m (r) | NA | NA

Description

VBROADCASTSD/VBROADCASTSS/VBROADCASTF128 load floating-point values as one tuple from the source operand (second operand) in memory and broadcast them to all elements of the destination operand (first operand).

VEX256-encoded versions: The destination operand is a YMM register. The source operand is either a 32-bit, 64-bit, or 128-bit memory location. Register source encodings are reserved and will #UD. Bits (MAXVL-1:256) of the destination register are zeroed.

EVEX-encoded versions: The destination operand is a ZMM/YMM/XMM register and is updated according to the writemask k1. The source operand is either a 32-bit or 64-bit memory location or the low doubleword/quadword element of an XMM register.

VBROADCASTF32X2/VBROADCASTF32X4/VBROADCASTF64X2/VBROADCASTF32X8/VBROADCASTF64X4 load floating-point values as tuples from the source operand (the second operand) in memory or a register and broadcast them to all elements of the destination operand (the first operand). The destination operand is a YMM/ZMM register updated according to the writemask k1. The source operand is either a register or a 64-bit/128-bit/256-bit memory location.

VBROADCASTSD, VBROADCASTF128, F32x4 and F64x2 are only supported as 256-bit and 512-bit wide versions and up. VBROADCASTSS is supported in 128-bit, 256-bit and 512-bit wide versions. F32x8 and F64x4 are only supported as 512-bit wide versions.

VBROADCASTF32X2/VBROADCASTF32X4/VBROADCASTF32X8 have 32-bit granularity. VBROADCASTF64X2 and VBROADCASTF64X4 have 64-bit granularity.

Note: VEX.vvvv and EVEX.vvvv are reserved and must be 1111b, otherwise the instructions will #UD.

An attempt to execute VBROADCASTSD or VBROADCASTF128 encoded with VEX.L = 0 will cause an #UD exception.
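As an informal illustration (not part of the original manual text), the C sketch below shows the memory-source form in use: the _mm256_broadcast_ss intrinsic typically compiles to VBROADCASTSS ymm1, m32. The function name scale8 and the choice of a multiply are illustrative assumptions, and an AVX-capable compiler (e.g., gcc/clang with -mavx) is assumed.

#include <immintrin.h>

/* Illustrative only: multiply eight floats by one scalar loaded via a
   single-element broadcast (VBROADCASTSS ymm, m32 on AVX targets). */
void scale8(float *dst, const float *src, const float *scale)
{
    __m256 k = _mm256_broadcast_ss(scale);       /* one float replicated to all 8 lanes */
    __m256 v = _mm256_loadu_ps(src);             /* unaligned 256-bit load */
    _mm256_storeu_ps(dst, _mm256_mul_ps(v, k));  /* scale and store */
}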

Figure 5-1. VBROADCASTSS Operation (VEX.256 encoded version)
Figure 5-2. VBROADCASTSS Operation (VEX.128-bit version)
Figure 5-3. VBROADCASTSD Operation (VEX.256-bit version)
Figure 5-4. VBROADCASTF128 Operation (VEX.256-bit version)

Figure 5-5. VBROADCASTF64X4 Operation (512-bit version with writemask all 1s)

Operation

VBROADCASTSS (128 bit version VEX and legacy)
temp := SRC[31:0]
DEST[31:0] := temp
DEST[63:32] := temp
DEST[95:64] := temp
DEST[127:96] := temp
DEST[MAXVL-1:128] := 0

VBROADCASTSS (VEX.256 encoded version)
temp := SRC[31:0]
DEST[31:0] := temp
DEST[63:32] := temp
DEST[95:64] := temp
DEST[127:96] := temp
DEST[159:128] := temp
DEST[191:160] := temp
DEST[223:192] := temp
DEST[255:224] := temp
DEST[MAXVL-1:256] := 0

VBROADCASTSS (EVEX encoded versions)
(KL, VL) = (4, 128), (8, 256), (16, 512)
FOR j := 0 TO KL-1
    i := j * 32
    IF k1[j] OR *no writemask*
        THEN DEST[i+31:i] := SRC[31:0]
        ELSE IF *merging-masking*    ; merging-masking
            THEN *DEST[i+31:i] remains unchanged*
            ELSE                     ; zeroing-masking
                DEST[i+31:i] := 0
        FI
    FI;
ENDFOR
DEST[MAXVL-1:VL] := 0
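To make the masking behavior concrete, here is a plain-C reference sketch (not taken from the manual) of the EVEX loop above for the KL = 16, VL = 512 case; dest, src, k1, and the zeroing flag are ordinary C objects standing in for the architectural state.

/* Reference sketch of the EVEX VBROADCASTSS update loop (KL = 16).
   k1 == 0xFFFF models the *no writemask* case. */
void vbroadcastss_evex_ref(float dest[16], float src,
                           unsigned short k1, int zeroing)
{
    for (int j = 0; j < 16; j++) {
        if ((k1 >> j) & 1)
            dest[j] = src;        /* element selected by the writemask */
        else if (zeroing)
            dest[j] = 0.0f;       /* zeroing-masking */
        /* else: merging-masking, dest[j] is left unchanged */
    }
}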

VBROADCASTSD (VEX.256 encoded version)
temp := SRC[63:0]
DEST[63:0] := temp
DEST[127:64] := temp
DEST[191:128] := temp
DEST[255:192] := temp
DEST[MAXVL-1:256] := 0

VBROADCASTSD (EVEX encoded versions)
(KL, VL) = (4, 256), (8, 512)
FOR j := 0 TO KL-1
    i := j * 64
    IF k1[j] OR *no writemask*
        THEN DEST[i+63:i] := SRC[63:0]
        ELSE IF *merging-masking*    ; merging-masking
            THEN *DEST[i+63:i] remains unchanged*
            ELSE                     ; zeroing-masking
                DEST[i+63:i] := 0
        FI
    FI;
ENDFOR
DEST[MAXVL-1:VL] := 0

VBROADCASTF32x2 (EVEX encoded versions)
(KL, VL) = (8, 256), (16, 512)
FOR j := 0 TO KL-1
    i := j * 32
    n := (j mod 2) * 32
    IF k1[j] OR *no writemask*
        THEN DEST[i+31:i] := SRC[n+31:n]
        ELSE IF *merging-masking*    ; merging-masking
            THEN *DEST[i+31:i] remains unchanged*
            ELSE                     ; zeroing-masking
                DEST[i+31:i] := 0
        FI
    FI;
ENDFOR
DEST[MAXVL-1:VL] := 0

VBROADCASTF128 (VEX.256 encoded version)
temp := SRC[127:0]
DEST[127:0] := temp
DEST[255:128] := temp
DEST[MAXVL-1:256] := 0
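The (j mod 2) term is the only part of VBROADCASTF32x2 that differs from a scalar broadcast. The short C sketch below (again an informal reference model, not manual text) shows the 256-bit form (KL = 8) replicating the two source elements pair-wise.

/* Reference sketch of VBROADCASTF32x2, 256-bit form (KL = 8). */
void vbroadcastf32x2_ref(float dest[8], const float src[2],
                         unsigned char k1, int zeroing)
{
    for (int j = 0; j < 8; j++) {
        if ((k1 >> j) & 1)
            dest[j] = src[j % 2];   /* even lanes get src[0], odd lanes src[1] */
        else if (zeroing)
            dest[j] = 0.0f;         /* zeroing-masking; otherwise lane is unchanged */
    }
}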

VBROADCASTF32X4 (EVEX encoded versions)
(KL, VL) = (8, 256), (16, 512)
FOR j := 0 TO KL-1
    i := j * 32
    n := (j modulo 4) * 32
    IF k1[j] OR *no writemask*
        THEN DEST[i+31:i] := SRC[n+31:n]
        ELSE IF *merging-masking*    ; merging-masking
            THEN *DEST[i+31:i] remains unchanged*
            ELSE                     ; zeroing-masking
                DEST[i+31:i] := 0
        FI
    FI;
ENDFOR
DEST[MAXVL-1:VL] := 0

VBROADCASTF64X2 (EVEX encoded versions)
(KL, VL) = (4, 256), (8, 512)
FOR j := 0 TO KL-1
    i := j * 64
    n := (j modulo 2) * 64
    IF k1[j] OR *no writemask*
        THEN DEST[i+63:i] := SRC[n+63:n]
        ELSE IF *merging-masking*    ; merging-masking
            THEN *DEST[i+63:i] remains unchanged*
            ELSE                     ; zeroing-masking
                DEST[i+63:i] := 0
        FI
    FI;
ENDFOR;

VBROADCASTF32X8 (EVEX.U1.512 encoded version)
FOR j := 0 TO 15
    i := j * 32
    n := (j modulo 8) * 32
    IF k1[j] OR *no writemask*
        THEN DEST[i+31:i] := SRC[n+31:n]
        ELSE IF *merging-masking*    ; merging-masking
            THEN *DEST[i+31:i] remains unchanged*
            ELSE                     ; zeroing-masking
                DEST[i+31:i] := 0
        FI
    FI;
ENDFOR
DEST[MAXVL-1:VL] := 0
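As an informal usage sketch (not manual text), the 128-bit-tuple forms are convenient for replicating a small pattern across a wide register. The example below assumes AVX-512F and uses the _mm512_broadcast_f32x4 intrinsic listed in the next section; the function name tile_pattern is an illustrative choice, and compilers will typically fold the 128-bit load into the memory operand of VBROADCASTF32X4.

#include <immintrin.h>

/* Illustrative only: replicate a 4-float pattern into all four 128-bit
   lanes of a 512-bit vector (VBROADCASTF32X4 zmm, m128). */
__m512 tile_pattern(const float pat[4])
{
    __m128 p = _mm_loadu_ps(pat);       /* load the 4-element tuple */
    return _mm512_broadcast_f32x4(p);   /* broadcast it across the ZMM register */
}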

VBROADCASTF64X4 (EVEX.512 encoded version)
FOR j := 0 TO 7
    i := j * 64
    n := (j modulo 4) * 64
    IF k1[j] OR *no writemask*
        THEN DEST[i+63:i] := SRC[n+63:n]
        ELSE IF *merging-masking*    ; merging-masking
            THEN *DEST[i+63:i] remains unchanged*
            ELSE                     ; zeroing-masking
                DEST[i+63:i] := 0
        FI
    FI;
ENDFOR
DEST[MAXVL-1:VL] := 0

Intel C/C++ Compiler Intrinsic Equivalent

VBROADCASTF32x2 __m512 _mm512_broadcast_f32x2( __m128 a);
VBROADCASTF32x2 __m512 _mm512_mask_broadcast_f32x2(__m512 s, __mmask16 k, __m128 a);
VBROADCASTF32x2 __m512 _mm512_maskz_broadcast_f32x2( __mmask16 k, __m128 a);
VBROADCASTF32x2 __m256 _mm256_broadcast_f32x2( __m128 a);
VBROADCASTF32x2 __m256 _mm256_mask_broadcast_f32x2(__m256 s, __mmask8 k, __m128 a);
VBROADCASTF32x2 __m256 _mm256_maskz_broadcast_f32x2( __mmask8 k, __m128 a);
VBROADCASTF32x4 __m512 _mm512_broadcast_f32x4( __m128 a);
VBROADCASTF32x4 __m512 _mm512_mask_broadcast_f32x4(__m512 s, __mmask16 k, __m128 a);
VBROADCASTF32x4 __m512 _mm512_maskz_broadcast_f32x4( __mmask16 k, __m128 a);
VBROADCASTF32x4 __m256 _mm256_broadcast_f32x4( __m128 a);
VBROADCASTF32x4 __m256 _mm256_mask_broadcast_f32x4(__m256 s, __mmask8 k, __m128 a);
VBROADCASTF32x4 __m256 _mm256_maskz_broadcast_f32x4( __mmask8 k, __m128 a);
VBROADCASTF32x8 __m512 _mm512_broadcast_f32x8( __m256 a);
VBROADCASTF32x8 __m512 _mm512_mask_broadcast_f32x8(__m512 s, __mmask16 k, __m256 a);
VBROADCASTF32x8 __m512 _mm512_maskz_broadcast_f32x8( __mmask16 k, __m256 a);
VBROADCASTF64x2 __m512d _mm512_broadcast_f64x2( __m128d a);
VBROADCASTF64x2 __m512d _mm512_mask_broadcast_f64x2(__m512d s, __mmask8 k, __m128d a);
VBROADCASTF64x2 __m512d _mm512_maskz_broadcast_f64x2( __mmask8 k, __m128d a);
VBROADCASTF64x2 __m256d _mm256_broadcast_f64x2( __m128d a);
VBROADCASTF64x2 __m256d _mm256_mask_broadcast_f64x2(__m256d s, __mmask8 k, __m128d a);
VBROADCASTF64x2 __m256d _mm256_maskz_broadcast_f64x2( __mmask8 k, __m128d a);
VBROADCASTF64x4 __m512d _mm512_broadcast_f64x4( __m256d a);
VBROADCASTF64x4 __m512d _mm512_mask_broadcast_f64x4(__m512d s, __mmask8 k, __m256d a);
VBROADCASTF64x4 __m512d _mm512_maskz_broadcast_f64x4( __mmask8 k, __m256d a);
VBROADCASTSD __m512d _mm512_broadcastsd_pd( __m128d a);
VBROADCASTSD __m512d _mm512_mask_broadcastsd_pd(__m512d s, __mmask8 k, __m128d a);
VBROADCASTSD __m512d _mm512_maskz_broadcastsd_pd(__mmask8 k, __m128d a);
VBROADCASTSD __m256d _mm256_broadcastsd_pd(__m128d a);
VBROADCASTSD __m256d _mm256_mask_broadcastsd_pd(__m256d s, __mmask8 k, __m128d a);
VBROADCASTSD __m256d _mm256_maskz_broadcastsd_pd( __mmask8 k, __m128d a);
VBROADCASTSD __m256d _mm256_broadcast_sd(double *a);
VBROADCASTSS __m512 _mm512_broadcastss_ps( __m128 a);
VBROADCASTSS __m512 _mm512_mask_broadcastss_ps(__m512 s, __mmask16 k, __m128 a);
VBROADCASTSS __m512 _mm512_maskz_broadcastss_ps( __mmask16 k, __m128 a);
VBROADCASTSS __m256 _mm256_broadcastss_ps(__m128 a);
VBROADCASTSS __m256 _mm256_mask_broadcastss_ps(__m256 s, __mmask8 k, __m128 a);
VBROADCASTSS __m256 _mm256_maskz_broadcastss_ps( __mmask8 k, __m128 a);

VBROADCASTSS __m128 _mm_broadcastss_ps(__m128 a);
VBROADCASTSS __m128 _mm_mask_broadcastss_ps(__m128 s, __mmask8 k, __m128 a);
VBROADCASTSS __m128 _mm_maskz_broadcastss_ps( __mmask8 k, __m128 a);
VBROADCASTSS __m128 _mm_broadcast_ss(float *a);
VBROADCASTSS __m256 _mm256_broadcast_ss(float *a);
VBROADCASTF128 __m256 _mm256_broadcast_ps(__m128 * a);
VBROADCASTF128 __m256d _mm256_broadcast_pd(__m128d * a);

Exceptions

VEX-encoded instructions, see Table 2-23, “Type 6 Class Exception Conditions”.
EVEX-encoded instructions, see Table 2-53, “Type E6 Class Exception Conditions”.
Additionally:
#UD If VEX.L = 0 for VBROADCASTSD or VBROADCASTF128.
    If EVEX.L’L = 0 for VBROADCASTSD/VBROADCASTF32X2/VBROADCASTF32X4/VBROADCASTF64X2.
    If EVEX.L’L < 10b for VBROADCASTF32X8/VBROADCASTF64X4.
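As a brief usage note (not part of the manual), the masked intrinsic forms listed above let the writemask decide which destination lanes receive the broadcast value. A minimal sketch, assuming AVX-512F support; the function name even_lanes is illustrative only:

#include <immintrin.h>

/* Illustrative only: broadcast x into the even lanes of a 512-bit vector,
   zeroing the odd lanes (zeroing-masking form of VBROADCASTSS). */
__m512 even_lanes(float x)
{
    __m128 s = _mm_set_ss(x);                       /* scalar in the low element */
    return _mm512_maskz_broadcastss_ps(0x5555, s);  /* k1 = 0101...0101b */
}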

This UNOFFICIAL reference was generated from the official Intel® 64 and IA-32 Architectures Software Developer’s Manual by a dumb script. There is no guarantee that some parts aren't mangled or broken and is distributed WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.