VINSERTI128/VINSERTI32x4/VINSERTI64x2/VINSERTI32x8/VINSERTI64x4—Insert Packed Integer Values

Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
VEX.256.66.0F3A.W0 38 /r ib VINSERTI128 ymm1, ymm2, xmm3/m128, imm8 | A | V/V | AVX2 | Insert 128 bits of integer data from xmm3/m128 and the remaining values from ymm2 into ymm1.
EVEX.256.66.0F3A.W0 38 /r ib VINSERTI32X4 ymm1 {k1}{z}, ymm2, xmm3/m128, imm8 | C | V/V | AVX512VL AVX512F | Insert 128 bits of packed doubleword integer values from xmm3/m128 and the remaining values from ymm2 into ymm1 under writemask k1.
EVEX.512.66.0F3A.W0 38 /r ib VINSERTI32X4 zmm1 {k1}{z}, zmm2, xmm3/m128, imm8 | C | V/V | AVX512F | Insert 128 bits of packed doubleword integer values from xmm3/m128 and the remaining values from zmm2 into zmm1 under writemask k1.
EVEX.256.66.0F3A.W1 38 /r ib VINSERTI64X2 ymm1 {k1}{z}, ymm2, xmm3/m128, imm8 | B | V/V | AVX512VL AVX512DQ | Insert 128 bits of packed quadword integer values from xmm3/m128 and the remaining values from ymm2 into ymm1 under writemask k1.
EVEX.512.66.0F3A.W1 38 /r ib VINSERTI64X2 zmm1 {k1}{z}, zmm2, xmm3/m128, imm8 | B | V/V | AVX512DQ | Insert 128 bits of packed quadword integer values from xmm3/m128 and the remaining values from zmm2 into zmm1 under writemask k1.
EVEX.512.66.0F3A.W0 3A /r ib VINSERTI32X8 zmm1 {k1}{z}, zmm2, ymm3/m256, imm8 | D | V/V | AVX512DQ | Insert 256 bits of packed doubleword integer values from ymm3/m256 and the remaining values from zmm2 into zmm1 under writemask k1.
EVEX.512.66.0F3A.W1 3A /r ib VINSERTI64X4 zmm1 {k1}{z}, zmm2, ymm3/m256, imm8 | C | V/V | AVX512F | Insert 256 bits of packed quadword integer values from ymm3/m256 and the remaining values from zmm2 into zmm1 under writemask k1.

Instruction Operand Encoding

Op/En | Tuple Type | Operand 1 | Operand 2 | Operand 3 | Operand 4
A | NA | ModRM:reg (w) | VEX.vvvv (r) | ModRM:r/m (r) | Imm8
B | Tuple2 | ModRM:reg (w) | EVEX.vvvv (r) | ModRM:r/m (r) | Imm8
C | Tuple4 | ModRM:reg (w) | EVEX.vvvv (r) | ModRM:r/m (r) | Imm8
D | Tuple8 | ModRM:reg (w) | EVEX.vvvv (r) | ModRM:r/m (r) | Imm8

Description

VINSERTI32x4 and VINSERTI64x2 insert 128 bits of packed integer values from the second source operand (the third operand) into the destination operand (the first operand) at a 128-bit granular offset multiplied by imm8[0] (256-bit) or imm8[1:0] (512-bit). The remaining portions of the destination are copied from the corresponding fields of the first source operand (the second operand). The second source operand can be either an XMM register or a 128-bit memory location. The high 6/7 bits of the immediate are ignored. The destination operand is a ZMM/YMM register and is updated at 32-bit and 64-bit granularity according to the writemask.

VINSERTI32x8 and VINSERTI64x4 insert 256 bits of packed integer values from the second source operand (the third operand) into the destination operand (the first operand) at a 256-bit granular offset multiplied by imm8[0]. The remaining portions of the destination are copied from the corresponding fields of the first source operand (the second operand). The second source operand can be either a YMM register or a 256-bit memory location. The upper bits of the immediate are ignored. The destination operand is a ZMM register and is updated at 32-bit and 64-bit granularity according to the writemask.

VINSERTI128 inserts 128 bits of packed integer data from the second source operand (the third operand) into the destination operand (the first operand) at a 128-bit granular offset multiplied by imm8[0]. The remaining portions of the destination are copied from the corresponding fields of the first source operand (the second operand). The second source operand can be either an XMM register or a 128-bit memory location. The high 7 bits of the immediate are ignored. VEX.L must be 1; otherwise, an attempt to execute this instruction with VEX.L=0 will cause #UD.
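As an informal illustration (not part of the Intel manual), the following minimal C sketch shows the 128-bit lane insertion described above using the standard AVX2 intrinsic from <immintrin.h>; it assumes a compiler and CPU with AVX2 support (e.g. gcc -mavx2), and the data values are arbitrary.

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    /* 256-bit first source and a 128-bit second source to be inserted
       into the upper lane (imm8[0] = 1). */
    __m256i src1 = _mm256_set1_epi32(0x11111111);
    __m128i src2 = _mm_set1_epi32(0x22222222);

    /* Compiles to VINSERTI128 ymm, ymm, xmm, 1: the low 128 bits come
       from src1, the high 128 bits are replaced by src2. */
    __m256i r = _mm256_inserti128_si256(src1, src2, 1);

    int out[8];
    _mm256_storeu_si256((__m256i *)out, r);
    for (int i = 0; i < 8; i++)
        printf("dword %d = 0x%08x\n", i, out[i]);
    return 0;
}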

Operation

VINSERTI32x4 (EVEX encoded versions) (KL, VL) = (8, 256), (16, 512)
TMP_DEST[VL-1:0] := SRC1[VL-1:0]
IF VL = 256
    CASE (imm8[0]) OF
        0: TMP_DEST[127:0] := SRC2[127:0]
        1: TMP_DEST[255:128] := SRC2[127:0]
    ESAC.
FI;
IF VL = 512
    CASE (imm8[1:0]) OF
        00: TMP_DEST[127:0] := SRC2[127:0]
        01: TMP_DEST[255:128] := SRC2[127:0]
        10: TMP_DEST[383:256] := SRC2[127:0]
        11: TMP_DEST[511:384] := SRC2[127:0]
    ESAC.
FI;
FOR j := 0 TO KL-1
    i := j * 32
    IF k1[j] OR *no writemask*
        THEN DEST[i+31:i] := TMP_DEST[i+31:i]
        ELSE
            IF *merging-masking* ; merging-masking
                THEN *DEST[i+31:i] remains unchanged*
                ELSE ; zeroing-masking
                    DEST[i+31:i] := 0
            FI
    FI;
ENDFOR
DEST[MAXVL-1:VL] := 0

VINSERTI64x2 (EVEX encoded versions) (KL, VL) = (4, 256), (8, 512)
TMP_DEST[VL-1:0] := SRC1[VL-1:0]
IF VL = 256
    CASE (imm8[0]) OF
        0: TMP_DEST[127:0] := SRC2[127:0]
        1: TMP_DEST[255:128] := SRC2[127:0]
    ESAC.
FI;
IF VL = 512
    CASE (imm8[1:0]) OF
        00: TMP_DEST[127:0] := SRC2[127:0]
        01: TMP_DEST[255:128] := SRC2[127:0]
        10: TMP_DEST[383:256] := SRC2[127:0]
        11: TMP_DEST[511:384] := SRC2[127:0]
    ESAC.
FI;
FOR j := 0 TO KL-1
    i := j * 64
    IF k1[j] OR *no writemask*
        THEN DEST[i+63:i] := TMP_DEST[i+63:i]
        ELSE
            IF *merging-masking* ; merging-masking
                THEN *DEST[i+63:i] remains unchanged*
                ELSE ; zeroing-masking
                    DEST[i+63:i] := 0
            FI
    FI;
ENDFOR
DEST[MAXVL-1:VL] := 0

VINSERTI32x8 (EVEX.U1.512 encoded version)
TMP_DEST[VL-1:0] := SRC1[VL-1:0]
CASE (imm8[0]) OF
    0: TMP_DEST[255:0] := SRC2[255:0]
    1: TMP_DEST[511:256] := SRC2[255:0]
ESAC.
FOR j := 0 TO 15
    i := j * 32
    IF k1[j] OR *no writemask*
        THEN DEST[i+31:i] := TMP_DEST[i+31:i]
        ELSE
            IF *merging-masking* ; merging-masking
                THEN *DEST[i+31:i] remains unchanged*
                ELSE ; zeroing-masking
                    DEST[i+31:i] := 0
            FI
    FI;
ENDFOR
DEST[MAXVL-1:VL] := 0

VINSERTI64x4 (EVEX.512 encoded version) VL = 512
TMP_DEST[VL-1:0] := SRC1[VL-1:0]
CASE (imm8[0]) OF
    0: TMP_DEST[255:0] := SRC2[255:0]
    1: TMP_DEST[511:256] := SRC2[255:0]
ESAC.
FOR j := 0 TO 7
    i := j * 64
    IF k1[j] OR *no writemask*
        THEN DEST[i+63:i] := TMP_DEST[i+63:i]
        ELSE
            IF *merging-masking* ; merging-masking
                THEN *DEST[i+63:i] remains unchanged*
                ELSE ; zeroing-masking
                    DEST[i+63:i] := 0
            FI
    FI;
ENDFOR
DEST[MAXVL-1:VL] := 0
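The plain-C sketch below (not from the Intel manual; the function and variable names are made up for illustration) models the two steps shared by the EVEX forms above, for the VL = 512, 32-bit element case: the 128-bit lane insertion selected by imm8[1:0], followed by the per-element writemask merge or zeroing.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Illustrative scalar model of VINSERTI32x4 with VL = 512 (KL = 16):
 *  1) copy SRC1 into TMP_DEST and overwrite the 128-bit lane selected
 *     by imm8[1:0] with SRC2;
 *  2) write TMP_DEST into DEST element-by-element under writemask k1,
 *     preserving (merging) or zeroing the masked-off elements. */
void insert_i32x4_model(uint32_t dest[16], const uint32_t src1[16],
                        const uint32_t src2[4], unsigned imm8,
                        uint16_t k1, int zeroing)
{
    uint32_t tmp_dest[16];

    memcpy(tmp_dest, src1, sizeof tmp_dest);                   /* TMP_DEST := SRC1 */
    memcpy(&tmp_dest[4 * (imm8 & 3)], src2, 4 * sizeof *src2); /* lane insert      */

    for (int j = 0; j < 16; j++) {                             /* KL = 16 */
        if ((k1 >> j) & 1)
            dest[j] = tmp_dest[j];        /* element selected by the writemask */
        else if (zeroing)
            dest[j] = 0;                  /* zeroing-masking                   */
        /* else merging-masking: dest[j] remains unchanged                     */
    }
}

int main(void)
{
    uint32_t src1[16], src2[4] = {1, 2, 3, 4}, dest[16] = {0};
    for (int i = 0; i < 16; i++) src1[i] = 0xAAAAAAAAu;

    insert_i32x4_model(dest, src1, src2, 2 /* imm8 */, 0xFFFF, 0);
    printf("dword 8 = %u (taken from SRC2, lane 2)\n", dest[8]);
    return 0;
}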

VINSERTI128
TEMP[255:0] := SRC1[255:0]
CASE (imm8[0]) OF
    0: TEMP[127:0] := SRC2[127:0]
    1: TEMP[255:128] := SRC2[127:0]
ESAC
DEST := TEMP

Intel C/C++ Compiler Intrinsic Equivalent

VINSERTI32x4 __m512i _mm512_inserti32x4( __m512i a, __m128i b, int imm);
VINSERTI32x4 __m512i _mm512_mask_inserti32x4(__m512i s, __mmask16 k, __m512i a, __m128i b, int imm);
VINSERTI32x4 __m512i _mm512_maskz_inserti32x4( __mmask16 k, __m512i a, __m128i b, int imm);
VINSERTI32x4 __m256i _mm256_inserti32x4( __m256i a, __m128i b, int imm);
VINSERTI32x4 __m256i _mm256_mask_inserti32x4(__m256i s, __mmask8 k, __m256i a, __m128i b, int imm);
VINSERTI32x4 __m256i _mm256_maskz_inserti32x4( __mmask8 k, __m256i a, __m128i b, int imm);
VINSERTI32x8 __m512i _mm512_inserti32x8( __m512i a, __m256i b, int imm);
VINSERTI32x8 __m512i _mm512_mask_inserti32x8(__m512i s, __mmask16 k, __m512i a, __m256i b, int imm);
VINSERTI32x8 __m512i _mm512_maskz_inserti32x8( __mmask16 k, __m512i a, __m256i b, int imm);
VINSERTI64x2 __m512i _mm512_inserti64x2( __m512i a, __m128i b, int imm);
VINSERTI64x2 __m512i _mm512_mask_inserti64x2(__m512i s, __mmask8 k, __m512i a, __m128i b, int imm);
VINSERTI64x2 __m512i _mm512_maskz_inserti64x2( __mmask8 k, __m512i a, __m128i b, int imm);
VINSERTI64x2 __m256i _mm256_inserti64x2( __m256i a, __m128i b, int imm);
VINSERTI64x2 __m256i _mm256_mask_inserti64x2(__m256i s, __mmask8 k, __m256i a, __m128i b, int imm);
VINSERTI64x2 __m256i _mm256_maskz_inserti64x2( __mmask8 k, __m256i a, __m128i b, int imm);
VINSERTI64x4 __m512i _mm512_inserti64x4( __m512i a, __m256i b, int imm);
VINSERTI64x4 __m512i _mm512_mask_inserti64x4(__m512i s, __mmask8 k, __m512i a, __m256i b, int imm);
VINSERTI64x4 __m512i _mm512_maskz_inserti64x4( __mmask8 k, __m512i a, __m256i b, int imm);
VINSERTI128 __m256i _mm256_inserti128_si256 (__m256i a, __m128i b, int offset);

SIMD Floating-Point Exceptions

None

Other Exceptions

VEX-encoded instruction, see Table 2-23, “Type 6 Class Exception Conditions”; additionally:
#UD    If VEX.L = 0.
EVEX-encoded instruction, see Table 2-54, “Type E6NF Class Exception Conditions”.
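As a usage sketch (not part of the Intel manual), the example below contrasts the merging-masking and zeroing-masking forms of the 512-bit intrinsic listed above; it assumes an AVX-512F-capable compiler and CPU (e.g. gcc -mavx512f), and the mask and data values are arbitrary.

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    __m512i a = _mm512_set1_epi32(0x0A0A0A0A);   /* first source          */
    __m128i b = _mm_set1_epi32(0x0B0B0B0B);      /* 128 bits to insert    */
    __mmask16 k = 0x00F0;                        /* pass only dwords 4..7 */

    /* Insert b into 128-bit lane 1 (dwords 4..7), then apply writemask k:
       merging keeps a's values in masked-off elements, zeroing clears them. */
    __m512i merge = _mm512_mask_inserti32x4(a, k, a, b, 1);
    __m512i zero  = _mm512_maskz_inserti32x4(k, a, b, 1);

    int m[16], z[16];
    _mm512_storeu_si512(m, merge);
    _mm512_storeu_si512(z, zero);
    for (int i = 0; i < 16; i++)
        printf("dword %2d: merge=0x%08x zero=0x%08x\n", i, m[i], z[i]);
    return 0;
}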

This UNOFFICIAL reference was generated from the official Intel® 64 and IA-32 Architectures Software Developer’s Manual by a dumb script. There is no guarantee that some parts aren't mangled or broken. It is distributed WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.