PMOVZX—Packed Move with Zero Extend

Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
66 0F 38 30 /r PMOVZXBW xmm1, xmm2/m64 | A | V/V | SSE4_1 | Zero extend 8 packed 8-bit integers in the low 8 bytes of xmm2/m64 to 8 packed 16-bit integers in xmm1.
66 0F 38 31 /r PMOVZXBD xmm1, xmm2/m32 | A | V/V | SSE4_1 | Zero extend 4 packed 8-bit integers in the low 4 bytes of xmm2/m32 to 4 packed 32-bit integers in xmm1.
66 0F 38 32 /r PMOVZXBQ xmm1, xmm2/m16 | A | V/V | SSE4_1 | Zero extend 2 packed 8-bit integers in the low 2 bytes of xmm2/m16 to 2 packed 64-bit integers in xmm1.
66 0F 38 33 /r PMOVZXWD xmm1, xmm2/m64 | A | V/V | SSE4_1 | Zero extend 4 packed 16-bit integers in the low 8 bytes of xmm2/m64 to 4 packed 32-bit integers in xmm1.
66 0F 38 34 /r PMOVZXWQ xmm1, xmm2/m32 | A | V/V | SSE4_1 | Zero extend 2 packed 16-bit integers in the low 4 bytes of xmm2/m32 to 2 packed 64-bit integers in xmm1.
66 0F 38 35 /r PMOVZXDQ xmm1, xmm2/m64 | A | V/V | SSE4_1 | Zero extend 2 packed 32-bit integers in the low 8 bytes of xmm2/m64 to 2 packed 64-bit integers in xmm1.
VEX.128.66.0F38.WIG 30 /r VPMOVZXBW xmm1, xmm2/m64 | A | V/V | AVX | Zero extend 8 packed 8-bit integers in the low 8 bytes of xmm2/m64 to 8 packed 16-bit integers in xmm1.
VEX.128.66.0F38.WIG 31 /r VPMOVZXBD xmm1, xmm2/m32 | A | V/V | AVX | Zero extend 4 packed 8-bit integers in the low 4 bytes of xmm2/m32 to 4 packed 32-bit integers in xmm1.
VEX.128.66.0F38.WIG 32 /r VPMOVZXBQ xmm1, xmm2/m16 | A | V/V | AVX | Zero extend 2 packed 8-bit integers in the low 2 bytes of xmm2/m16 to 2 packed 64-bit integers in xmm1.
VEX.128.66.0F38.WIG 33 /r VPMOVZXWD xmm1, xmm2/m64 | A | V/V | AVX | Zero extend 4 packed 16-bit integers in the low 8 bytes of xmm2/m64 to 4 packed 32-bit integers in xmm1.
VEX.128.66.0F38.WIG 34 /r VPMOVZXWQ xmm1, xmm2/m32 | A | V/V | AVX | Zero extend 2 packed 16-bit integers in the low 4 bytes of xmm2/m32 to 2 packed 64-bit integers in xmm1.
VEX.128.66.0F38.WIG 35 /r VPMOVZXDQ xmm1, xmm2/m64 | A | V/V | AVX | Zero extend 2 packed 32-bit integers in the low 8 bytes of xmm2/m64 to 2 packed 64-bit integers in xmm1.
VEX.256.66.0F38.WIG 30 /r VPMOVZXBW ymm1, xmm2/m128 | A | V/V | AVX2 | Zero extend 16 packed 8-bit integers in xmm2/m128 to 16 packed 16-bit integers in ymm1.
VEX.256.66.0F38.WIG 31 /r VPMOVZXBD ymm1, xmm2/m64 | A | V/V | AVX2 | Zero extend 8 packed 8-bit integers in the low 8 bytes of xmm2/m64 to 8 packed 32-bit integers in ymm1.
VEX.256.66.0F38.WIG 32 /r VPMOVZXBQ ymm1, xmm2/m32 | A | V/V | AVX2 | Zero extend 4 packed 8-bit integers in the low 4 bytes of xmm2/m32 to 4 packed 64-bit integers in ymm1.
VEX.256.66.0F38.WIG 33 /r VPMOVZXWD ymm1, xmm2/m128 | A | V/V | AVX2 | Zero extend 8 packed 16-bit integers in xmm2/m128 to 8 packed 32-bit integers in ymm1.
VEX.256.66.0F38.WIG 34 /r VPMOVZXWQ ymm1, xmm2/m64 | A | V/V | AVX2 | Zero extend 4 packed 16-bit integers in the low 8 bytes of xmm2/m64 to 4 packed 64-bit integers in ymm1.
VEX.256.66.0F38.WIG 35 /r VPMOVZXDQ ymm1, xmm2/m128 | A | V/V | AVX2 | Zero extend 4 packed 32-bit integers in xmm2/m128 to 4 packed 64-bit integers in ymm1.

EVEX.128.66.0F38.WIG 30 /r VPMOVZXBW xmm1 {k1}{z}, xmm2/m64 | B | V/V | AVX512VL AVX512BW | Zero extend 8 packed 8-bit integers in the low 8 bytes of xmm2/m64 to 8 packed 16-bit integers in xmm1.
EVEX.256.66.0F38.WIG 30 /r VPMOVZXBW ymm1 {k1}{z}, xmm2/m128 | B | V/V | AVX512VL AVX512BW | Zero extend 16 packed 8-bit integers in xmm2/m128 to 16 packed 16-bit integers in ymm1.
EVEX.512.66.0F38.WIG 30 /r VPMOVZXBW zmm1 {k1}{z}, ymm2/m256 | B | V/V | AVX512BW | Zero extend 32 packed 8-bit integers in ymm2/m256 to 32 packed 16-bit integers in zmm1.
EVEX.128.66.0F38.WIG 31 /r VPMOVZXBD xmm1 {k1}{z}, xmm2/m32 | C | V/V | AVX512VL AVX512F | Zero extend 4 packed 8-bit integers in the low 4 bytes of xmm2/m32 to 4 packed 32-bit integers in xmm1 subject to writemask k1.
EVEX.256.66.0F38.WIG 31 /r VPMOVZXBD ymm1 {k1}{z}, xmm2/m64 | C | V/V | AVX512VL AVX512F | Zero extend 8 packed 8-bit integers in the low 8 bytes of xmm2/m64 to 8 packed 32-bit integers in ymm1 subject to writemask k1.
EVEX.512.66.0F38.WIG 31 /r VPMOVZXBD zmm1 {k1}{z}, xmm2/m128 | C | V/V | AVX512F | Zero extend 16 packed 8-bit integers in xmm2/m128 to 16 packed 32-bit integers in zmm1 subject to writemask k1.
EVEX.128.66.0F38.WIG 32 /r VPMOVZXBQ xmm1 {k1}{z}, xmm2/m16 | D | V/V | AVX512VL AVX512F | Zero extend 2 packed 8-bit integers in the low 2 bytes of xmm2/m16 to 2 packed 64-bit integers in xmm1 subject to writemask k1.
EVEX.256.66.0F38.WIG 32 /r VPMOVZXBQ ymm1 {k1}{z}, xmm2/m32 | D | V/V | AVX512VL AVX512F | Zero extend 4 packed 8-bit integers in the low 4 bytes of xmm2/m32 to 4 packed 64-bit integers in ymm1 subject to writemask k1.
EVEX.512.66.0F38.WIG 32 /r VPMOVZXBQ zmm1 {k1}{z}, xmm2/m64 | D | V/V | AVX512F | Zero extend 8 packed 8-bit integers in the low 8 bytes of xmm2/m64 to 8 packed 64-bit integers in zmm1 subject to writemask k1.
EVEX.128.66.0F38.WIG 33 /r VPMOVZXWD xmm1 {k1}{z}, xmm2/m64 | B | V/V | AVX512VL AVX512F | Zero extend 4 packed 16-bit integers in the low 8 bytes of xmm2/m64 to 4 packed 32-bit integers in xmm1 subject to writemask k1.
EVEX.256.66.0F38.WIG 33 /r VPMOVZXWD ymm1 {k1}{z}, xmm2/m128 | B | V/V | AVX512VL AVX512F | Zero extend 8 packed 16-bit integers in xmm2/m128 to 8 packed 32-bit integers in ymm1 subject to writemask k1.
EVEX.512.66.0F38.WIG 33 /r VPMOVZXWD zmm1 {k1}{z}, ymm2/m256 | B | V/V | AVX512F | Zero extend 16 packed 16-bit integers in ymm2/m256 to 16 packed 32-bit integers in zmm1 subject to writemask k1.
EVEX.128.66.0F38.WIG 34 /r VPMOVZXWQ xmm1 {k1}{z}, xmm2/m32 | C | V/V | AVX512VL AVX512F | Zero extend 2 packed 16-bit integers in the low 4 bytes of xmm2/m32 to 2 packed 64-bit integers in xmm1 subject to writemask k1.
EVEX.256.66.0F38.WIG 34 /r VPMOVZXWQ ymm1 {k1}{z}, xmm2/m64 | C | V/V | AVX512VL AVX512F | Zero extend 4 packed 16-bit integers in the low 8 bytes of xmm2/m64 to 4 packed 64-bit integers in ymm1 subject to writemask k1.
EVEX.512.66.0F38.WIG 34 /r VPMOVZXWQ zmm1 {k1}{z}, xmm2/m128 | C | V/V | AVX512F | Zero extend 8 packed 16-bit integers in xmm2/m128 to 8 packed 64-bit integers in zmm1 subject to writemask k1.

EVEX.128.66.0F38.W0 35 /r VPMOVZXDQ xmm1 {k1}{z}, xmm2/m64 | B | V/V | AVX512VL AVX512F | Zero extend 2 packed 32-bit integers in the low 8 bytes of xmm2/m64 to 2 packed 64-bit integers in xmm1 using writemask k1.
EVEX.256.66.0F38.W0 35 /r VPMOVZXDQ ymm1 {k1}{z}, xmm2/m128 | B | V/V | AVX512VL AVX512F | Zero extend 4 packed 32-bit integers in xmm2/m128 to 4 packed 64-bit integers in ymm1 using writemask k1.
EVEX.512.66.0F38.W0 35 /r VPMOVZXDQ zmm1 {k1}{z}, ymm2/m256 | B | V/V | AVX512F | Zero extend 8 packed 32-bit integers in ymm2/m256 to 8 packed 64-bit integers in zmm1 using writemask k1.

Instruction Operand Encoding

Op/En | Tuple Type | Operand 1 | Operand 2 | Operand 3 | Operand 4
A | NA | ModRM:reg (w) | ModRM:r/m (r) | NA | NA
B | Half Mem | ModRM:reg (w) | ModRM:r/m (r) | NA | NA
C | Quarter Mem | ModRM:reg (w) | ModRM:r/m (r) | NA | NA
D | Eighth Mem | ModRM:reg (w) | ModRM:r/m (r) | NA | NA

Description

Legacy, VEX and EVEX encoded versions: Packed byte, word, or dword integers starting from the low bytes of the source operand (second operand) are zero extended to word, dword, or quadword integers and stored in the destination operand.

128-bit Legacy SSE version: Bits (MAXVL-1:128) of the corresponding destination register remain unchanged.

VEX.128 encoded version: Bits (MAXVL-1:128) of the corresponding destination register are zeroed.

VEX.256 encoded version: Bits (MAXVL-1:256) of the corresponding destination register are zeroed.

EVEX encoded versions: Packed byte, word, or dword integers starting from the low bytes of the source operand (second operand) are zero extended to word, dword, or quadword integers and stored to the destination operand under the writemask. The destination register is an XMM, YMM, or ZMM register.

Note: VEX.vvvv and EVEX.vvvv are reserved and must be 1111b, otherwise instructions will #UD.
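As an informal illustration of the zero-extension behavior just described, the following minimal C sketch uses the SSE4.1 intrinsic _mm_cvtepu8_epi16 (listed under the intrinsic equivalents later on this page), which compilers typically emit as PMOVZXBW. The sample data and the -msse4.1 compile flag are assumptions chosen for the example, not part of the instruction definition.

/* Minimal sketch: zero extend the low 8 unsigned bytes of an XMM value to
 * 8 unsigned 16-bit integers (PMOVZXBW). Assumes SSE4.1; compile with e.g. -msse4.1. */
#include <smmintrin.h>   /* SSE4.1 intrinsics */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t src[16] = { 0x00, 0x7F, 0x80, 0xFF, 1, 2, 3, 4,
                        9, 9, 9, 9, 9, 9, 9, 9 };    /* only the low 8 bytes are converted */
    __m128i v  = _mm_loadu_si128((const __m128i *)src);
    __m128i wd = _mm_cvtepu8_epi16(v);               /* PMOVZXBW xmm, xmm */

    uint16_t out[8];
    _mm_storeu_si128((__m128i *)out, wd);
    for (int i = 0; i < 8; i++)
        printf("%u ", out[i]);                       /* prints: 0 127 128 255 1 2 3 4 */
    printf("\n");
    return 0;
}

Because the bytes are zero extended rather than sign extended, 0x80 and 0xFF come out as 128 and 255, not as negative values.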

Operation

Packed_Zero_Extend_BYTE_to_WORD(DEST, SRC)
DEST[15:0] := ZeroExtend(SRC[7:0]);
DEST[31:16] := ZeroExtend(SRC[15:8]);
DEST[47:32] := ZeroExtend(SRC[23:16]);
DEST[63:48] := ZeroExtend(SRC[31:24]);
DEST[79:64] := ZeroExtend(SRC[39:32]);
DEST[95:80] := ZeroExtend(SRC[47:40]);
DEST[111:96] := ZeroExtend(SRC[55:48]);
DEST[127:112] := ZeroExtend(SRC[63:56]);

Packed_Zero_Extend_BYTE_to_DWORD(DEST, SRC)
DEST[31:0] := ZeroExtend(SRC[7:0]);
DEST[63:32] := ZeroExtend(SRC[15:8]);
DEST[95:64] := ZeroExtend(SRC[23:16]);
DEST[127:96] := ZeroExtend(SRC[31:24]);

Packed_Zero_Extend_BYTE_to_QWORD(DEST, SRC)
DEST[63:0] := ZeroExtend(SRC[7:0]);
DEST[127:64] := ZeroExtend(SRC[15:8]);

Packed_Zero_Extend_WORD_to_DWORD(DEST, SRC)
DEST[31:0] := ZeroExtend(SRC[15:0]);
DEST[63:32] := ZeroExtend(SRC[31:16]);
DEST[95:64] := ZeroExtend(SRC[47:32]);
DEST[127:96] := ZeroExtend(SRC[63:48]);

Packed_Zero_Extend_WORD_to_QWORD(DEST, SRC)
DEST[63:0] := ZeroExtend(SRC[15:0]);
DEST[127:64] := ZeroExtend(SRC[31:16]);

Packed_Zero_Extend_DWORD_to_QWORD(DEST, SRC)
DEST[63:0] := ZeroExtend(SRC[31:0]);
DEST[127:64] := ZeroExtend(SRC[63:32]);

VPMOVZXBW (EVEX encoded versions)
(KL, VL) = (8, 128), (16, 256), (32, 512)
Packed_Zero_Extend_BYTE_to_WORD(TMP_DEST[127:0], SRC[63:0])
IF VL >= 256
    Packed_Zero_Extend_BYTE_to_WORD(TMP_DEST[255:128], SRC[127:64])
FI;
IF VL >= 512
    Packed_Zero_Extend_BYTE_to_WORD(TMP_DEST[383:256], SRC[191:128])
    Packed_Zero_Extend_BYTE_to_WORD(TMP_DEST[511:384], SRC[255:192])
FI;
FOR j := 0 TO KL-1
    i := j * 16
    IF k1[j] OR *no writemask*
        THEN DEST[i+15:i] := TEMP_DEST[i+15:i]
        ELSE IF *merging-masking* ; merging-masking
            THEN *DEST[i+15:i] remains unchanged*
            ELSE *zeroing-masking* ; zeroing-masking
                DEST[i+15:i] := 0
        FI
    FI;
ENDFOR
DEST[MAXVL-1:VL] := 0

VPMOVZXBD (EVEX encoded versions)
(KL, VL) = (4, 128), (8, 256), (16, 512)
Packed_Zero_Extend_BYTE_to_DWORD(TMP_DEST[127:0], SRC[31:0])
IF VL >= 256
    Packed_Zero_Extend_BYTE_to_DWORD(TMP_DEST[255:128], SRC[63:32])
FI;
IF VL >= 512
    Packed_Zero_Extend_BYTE_to_DWORD(TMP_DEST[383:256], SRC[95:64])
    Packed_Zero_Extend_BYTE_to_DWORD(TMP_DEST[511:384], SRC[127:96])
FI;
FOR j := 0 TO KL-1
    i := j * 32

    IF k1[j] OR *no writemask*
        THEN DEST[i+31:i] := TEMP_DEST[i+31:i]
        ELSE IF *merging-masking* ; merging-masking
            THEN *DEST[i+31:i] remains unchanged*
            ELSE *zeroing-masking* ; zeroing-masking
                DEST[i+31:i] := 0
        FI
    FI;
ENDFOR
DEST[MAXVL-1:VL] := 0

VPMOVZXBQ (EVEX encoded versions)
(KL, VL) = (2, 128), (4, 256), (8, 512)
Packed_Zero_Extend_BYTE_to_QWORD(TMP_DEST[127:0], SRC[15:0])
IF VL >= 256
    Packed_Zero_Extend_BYTE_to_QWORD(TMP_DEST[255:128], SRC[31:16])
FI;
IF VL >= 512
    Packed_Zero_Extend_BYTE_to_QWORD(TMP_DEST[383:256], SRC[47:32])
    Packed_Zero_Extend_BYTE_to_QWORD(TMP_DEST[511:384], SRC[63:48])
FI;
FOR j := 0 TO KL-1
    i := j * 64
    IF k1[j] OR *no writemask*
        THEN DEST[i+63:i] := TEMP_DEST[i+63:i]
        ELSE IF *merging-masking* ; merging-masking
            THEN *DEST[i+63:i] remains unchanged*
            ELSE *zeroing-masking* ; zeroing-masking
                DEST[i+63:i] := 0
        FI
    FI;
ENDFOR
DEST[MAXVL-1:VL] := 0

VPMOVZXWD (EVEX encoded versions)
(KL, VL) = (4, 128), (8, 256), (16, 512)
Packed_Zero_Extend_WORD_to_DWORD(TMP_DEST[127:0], SRC[63:0])
IF VL >= 256
    Packed_Zero_Extend_WORD_to_DWORD(TMP_DEST[255:128], SRC[127:64])
FI;
IF VL >= 512
    Packed_Zero_Extend_WORD_to_DWORD(TMP_DEST[383:256], SRC[191:128])
    Packed_Zero_Extend_WORD_to_DWORD(TMP_DEST[511:384], SRC[255:192])
FI;
FOR j := 0 TO KL-1
    i := j * 32
    IF k1[j] OR *no writemask*
        THEN DEST[i+31:i] := TEMP_DEST[i+31:i]
        ELSE IF *merging-masking* ; merging-masking
            THEN *DEST[i+31:i] remains unchanged*
            ELSE *zeroing-masking* ; zeroing-masking

                DEST[i+31:i] := 0
        FI
    FI;
ENDFOR
DEST[MAXVL-1:VL] := 0

VPMOVZXWQ (EVEX encoded versions)
(KL, VL) = (2, 128), (4, 256), (8, 512)
Packed_Zero_Extend_WORD_to_QWORD(TMP_DEST[127:0], SRC[31:0])
IF VL >= 256
    Packed_Zero_Extend_WORD_to_QWORD(TMP_DEST[255:128], SRC[63:32])
FI;
IF VL >= 512
    Packed_Zero_Extend_WORD_to_QWORD(TMP_DEST[383:256], SRC[95:64])
    Packed_Zero_Extend_WORD_to_QWORD(TMP_DEST[511:384], SRC[127:96])
FI;
FOR j := 0 TO KL-1
    i := j * 64
    IF k1[j] OR *no writemask*
        THEN DEST[i+63:i] := TEMP_DEST[i+63:i]
        ELSE IF *merging-masking* ; merging-masking
            THEN *DEST[i+63:i] remains unchanged*
            ELSE *zeroing-masking* ; zeroing-masking
                DEST[i+63:i] := 0
        FI
    FI;
ENDFOR
DEST[MAXVL-1:VL] := 0

VPMOVZXDQ (EVEX encoded versions)
(KL, VL) = (2, 128), (4, 256), (8, 512)
Packed_Zero_Extend_DWORD_to_QWORD(TEMP_DEST[127:0], SRC[63:0])
IF VL >= 256
    Packed_Zero_Extend_DWORD_to_QWORD(TEMP_DEST[255:128], SRC[127:64])
FI;
IF VL >= 512
    Packed_Zero_Extend_DWORD_to_QWORD(TEMP_DEST[383:256], SRC[191:128])
    Packed_Zero_Extend_DWORD_to_QWORD(TEMP_DEST[511:384], SRC[255:192])
FI;
FOR j := 0 TO KL-1
    i := j * 64
    IF k1[j] OR *no writemask*
        THEN DEST[i+63:i] := TEMP_DEST[i+63:i]
        ELSE IF *merging-masking* ; merging-masking
            THEN *DEST[i+63:i] remains unchanged*
            ELSE *zeroing-masking* ; zeroing-masking
                DEST[i+63:i] := 0
        FI
    FI;
ENDFOR
DEST[MAXVL-1:VL] := 0
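A short C sketch of the zeroing-masking path in the EVEX pseudocode above, using the AVX-512F intrinsic _mm512_maskz_cvtepu8_epi32 (VPMOVZXBD zmm1 {k1}{z}, xmm2/m128). The mask value, sample data, and the -mavx512f compile flag are assumptions chosen purely for illustration.

/* Sketch: zeroing-masked VPMOVZXBD. Elements whose mask bit is 0 are written
 * as 0 in the destination, matching the *zeroing-masking* branch above.
 * Assumes AVX-512F; compile with e.g. -mavx512f. */
#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t src[16];
    for (int i = 0; i < 16; i++)
        src[i] = (uint8_t)(0xF0 + i);                 /* 240, 241, ..., 255 */

    __m128i   v = _mm_loadu_si128((const __m128i *)src);
    __mmask16 k = 0x00FF;                             /* keep lanes 0..7, zero lanes 8..15 */
    __m512i   d = _mm512_maskz_cvtepu8_epi32(k, v);   /* VPMOVZXBD zmm {k}{z}, xmm */

    uint32_t out[16];
    _mm512_storeu_si512(out, d);
    for (int i = 0; i < 16; i++)
        printf("%u ", out[i]);                        /* 240..247 followed by eight 0s */
    printf("\n");
    return 0;
}

With merging-masking (the _mm512_mask_ form) the unselected lanes would instead keep the previous contents of the destination.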

VPMOVZXBW (VEX.256 encoded version)
Packed_Zero_Extend_BYTE_to_WORD(DEST[127:0], SRC[63:0])
Packed_Zero_Extend_BYTE_to_WORD(DEST[255:128], SRC[127:64])
DEST[MAXVL-1:256] := 0

VPMOVZXBD (VEX.256 encoded version)
Packed_Zero_Extend_BYTE_to_DWORD(DEST[127:0], SRC[31:0])
Packed_Zero_Extend_BYTE_to_DWORD(DEST[255:128], SRC[63:32])
DEST[MAXVL-1:256] := 0

VPMOVZXBQ (VEX.256 encoded version)
Packed_Zero_Extend_BYTE_to_QWORD(DEST[127:0], SRC[15:0])
Packed_Zero_Extend_BYTE_to_QWORD(DEST[255:128], SRC[31:16])
DEST[MAXVL-1:256] := 0

VPMOVZXWD (VEX.256 encoded version)
Packed_Zero_Extend_WORD_to_DWORD(DEST[127:0], SRC[63:0])
Packed_Zero_Extend_WORD_to_DWORD(DEST[255:128], SRC[127:64])
DEST[MAXVL-1:256] := 0

VPMOVZXWQ (VEX.256 encoded version)
Packed_Zero_Extend_WORD_to_QWORD(DEST[127:0], SRC[31:0])
Packed_Zero_Extend_WORD_to_QWORD(DEST[255:128], SRC[63:32])
DEST[MAXVL-1:256] := 0

VPMOVZXDQ (VEX.256 encoded version)
Packed_Zero_Extend_DWORD_to_QWORD(DEST[127:0], SRC[63:0])
Packed_Zero_Extend_DWORD_to_QWORD(DEST[255:128], SRC[127:64])
DEST[MAXVL-1:256] := 0

VPMOVZXBW (VEX.128 encoded version)
Packed_Zero_Extend_BYTE_to_WORD()
DEST[MAXVL-1:128] := 0

VPMOVZXBD (VEX.128 encoded version)
Packed_Zero_Extend_BYTE_to_DWORD()
DEST[MAXVL-1:128] := 0

VPMOVZXBQ (VEX.128 encoded version)
Packed_Zero_Extend_BYTE_to_QWORD()
DEST[MAXVL-1:128] := 0

VPMOVZXWD (VEX.128 encoded version)
Packed_Zero_Extend_WORD_to_DWORD()
DEST[MAXVL-1:128] := 0

VPMOVZXWQ (VEX.128 encoded version)
Packed_Zero_Extend_WORD_to_QWORD()
DEST[MAXVL-1:128] := 0

VPMOVZXDQ (VEX.128 encoded version)
Packed_Zero_Extend_DWORD_to_QWORD()
DEST[MAXVL-1:128] := 0
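A minimal C sketch of the VEX.256 form (VPMOVZXBW ymm1, xmm2/m128) via the AVX2 intrinsic _mm256_cvtepu8_epi16: all 16 source bytes are widened to 16 words, and, per the VEX behavior above, the destination bits above the result width are zeroed. The sample data and the -mavx2 compile flag are assumptions of the example.

/* Sketch: widen 16 unsigned bytes to 16 unsigned 16-bit integers with the
 * VEX.256-encoded VPMOVZXBW. Assumes AVX2; compile with e.g. -mavx2. */
#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t src[16];
    for (int i = 0; i < 16; i++)
        src[i] = (uint8_t)(255 - i);                  /* 255, 254, ..., 240 */

    __m128i v  = _mm_loadu_si128((const __m128i *)src);
    __m256i wd = _mm256_cvtepu8_epi16(v);             /* VPMOVZXBW ymm, xmm */

    uint16_t out[16];
    _mm256_storeu_si256((__m256i *)out, wd);
    for (int i = 0; i < 16; i++)
        printf("%u ", out[i]);                        /* 255 254 ... 240 */
    printf("\n");
    return 0;
}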

PMOVZXBW
Packed_Zero_Extend_BYTE_to_WORD()
DEST[MAXVL-1:128] (Unmodified)

PMOVZXBD
Packed_Zero_Extend_BYTE_to_DWORD()
DEST[MAXVL-1:128] (Unmodified)

PMOVZXBQ
Packed_Zero_Extend_BYTE_to_QWORD()
DEST[MAXVL-1:128] (Unmodified)

PMOVZXWD
Packed_Zero_Extend_WORD_to_DWORD()
DEST[MAXVL-1:128] (Unmodified)

PMOVZXWQ
Packed_Zero_Extend_WORD_to_QWORD()
DEST[MAXVL-1:128] (Unmodified)

PMOVZXDQ
Packed_Zero_Extend_DWORD_to_QWORD()
DEST[MAXVL-1:128] (Unmodified)

Intel C/C++ Compiler Intrinsic Equivalent

VPMOVZXBW __m512i _mm512_cvtepu8_epi16(__m256i a);
VPMOVZXBW __m512i _mm512_mask_cvtepu8_epi16(__m512i a, __mmask32 k, __m256i b);
VPMOVZXBW __m512i _mm512_maskz_cvtepu8_epi16(__mmask32 k, __m256i b);
VPMOVZXBD __m512i _mm512_cvtepu8_epi32(__m128i a);
VPMOVZXBD __m512i _mm512_mask_cvtepu8_epi32(__m512i a, __mmask16 k, __m128i b);
VPMOVZXBD __m512i _mm512_maskz_cvtepu8_epi32(__mmask16 k, __m128i b);
VPMOVZXBQ __m512i _mm512_cvtepu8_epi64(__m128i a);
VPMOVZXBQ __m512i _mm512_mask_cvtepu8_epi64(__m512i a, __mmask8 k, __m128i b);
VPMOVZXBQ __m512i _mm512_maskz_cvtepu8_epi64(__mmask8 k, __m128i a);
VPMOVZXDQ __m512i _mm512_cvtepu32_epi64(__m256i a);
VPMOVZXDQ __m512i _mm512_mask_cvtepu32_epi64(__m512i a, __mmask8 k, __m256i b);
VPMOVZXDQ __m512i _mm512_maskz_cvtepu32_epi64(__mmask8 k, __m256i a);
VPMOVZXWD __m512i _mm512_cvtepu16_epi32(__m256i a);
VPMOVZXWD __m512i _mm512_mask_cvtepu16_epi32(__m512i a, __mmask16 k, __m256i b);
VPMOVZXWD __m512i _mm512_maskz_cvtepu16_epi32(__mmask16 k, __m256i a);
VPMOVZXWQ __m512i _mm512_cvtepu16_epi64(__m128i a);
VPMOVZXWQ __m512i _mm512_mask_cvtepu16_epi64(__m512i a, __mmask8 k, __m128i b);
VPMOVZXWQ __m512i _mm512_maskz_cvtepu16_epi64(__mmask8 k, __m128i a);
VPMOVZXBW __m256i _mm256_cvtepu8_epi16(__m128i a);
VPMOVZXBW __m256i _mm256_mask_cvtepu8_epi16(__m256i a, __mmask16 k, __m128i b);
VPMOVZXBW __m256i _mm256_maskz_cvtepu8_epi16(__mmask16 k, __m128i b);
VPMOVZXBD __m256i _mm256_cvtepu8_epi32(__m128i a);
VPMOVZXBD __m256i _mm256_mask_cvtepu8_epi32(__m256i a, __mmask8 k, __m128i b);
VPMOVZXBD __m256i _mm256_maskz_cvtepu8_epi32(__mmask8 k, __m128i b);
VPMOVZXBQ __m256i _mm256_cvtepu8_epi64(__m128i a);
VPMOVZXBQ __m256i _mm256_mask_cvtepu8_epi64(__m256i a, __mmask8 k, __m128i b);
VPMOVZXBQ __m256i _mm256_maskz_cvtepu8_epi64(__mmask8 k, __m128i a);
VPMOVZXDQ __m256i _mm256_cvtepu32_epi64(__m128i a);
VPMOVZXDQ __m256i _mm256_mask_cvtepu32_epi64(__m256i a, __mmask8 k, __m128i b);

VPMOVZXDQ __m256i _mm256_maskz_cvtepu32_epi64(__mmask8 k, __m128i a);
VPMOVZXWD __m256i _mm256_cvtepu16_epi32(__m128i a);
VPMOVZXWD __m256i _mm256_mask_cvtepu16_epi32(__m256i a, __mmask8 k, __m128i b);
VPMOVZXWD __m256i _mm256_maskz_cvtepu16_epi32(__mmask8 k, __m128i a);
VPMOVZXWQ __m256i _mm256_cvtepu16_epi64(__m128i a);
VPMOVZXWQ __m256i _mm256_mask_cvtepu16_epi64(__m256i a, __mmask8 k, __m128i b);
VPMOVZXWQ __m256i _mm256_maskz_cvtepu16_epi64(__mmask8 k, __m128i a);
VPMOVZXBW __m128i _mm_mask_cvtepu8_epi16(__m128i a, __mmask8 k, __m128i b);
VPMOVZXBW __m128i _mm_maskz_cvtepu8_epi16(__mmask8 k, __m128i b);
VPMOVZXBD __m128i _mm_mask_cvtepu8_epi32(__m128i a, __mmask8 k, __m128i b);
VPMOVZXBD __m128i _mm_maskz_cvtepu8_epi32(__mmask8 k, __m128i b);
VPMOVZXBQ __m128i _mm_mask_cvtepu8_epi64(__m128i a, __mmask8 k, __m128i b);
VPMOVZXBQ __m128i _mm_maskz_cvtepu8_epi64(__mmask8 k, __m128i a);
VPMOVZXDQ __m128i _mm_mask_cvtepu32_epi64(__m128i a, __mmask8 k, __m128i b);
VPMOVZXDQ __m128i _mm_maskz_cvtepu32_epi64(__mmask8 k, __m128i a);
VPMOVZXWD __m128i _mm_mask_cvtepu16_epi32(__m128i a, __mmask8 k, __m128i b);
VPMOVZXWD __m128i _mm_maskz_cvtepu16_epi32(__mmask8 k, __m128i a);
VPMOVZXWQ __m128i _mm_mask_cvtepu16_epi64(__m128i a, __mmask8 k, __m128i b);
VPMOVZXWQ __m128i _mm_maskz_cvtepu16_epi64(__mmask8 k, __m128i a);
PMOVZXBW __m128i _mm_cvtepu8_epi16(__m128i a);
PMOVZXBD __m128i _mm_cvtepu8_epi32(__m128i a);
PMOVZXBQ __m128i _mm_cvtepu8_epi64(__m128i a);
PMOVZXWD __m128i _mm_cvtepu16_epi32(__m128i a);
PMOVZXWQ __m128i _mm_cvtepu16_epi64(__m128i a);
PMOVZXDQ __m128i _mm_cvtepu32_epi64(__m128i a);

SIMD Floating-Point Exceptions

None

Other Exceptions

Non-EVEX-encoded instruction, see Table 2-22, “Type 5 Class Exception Conditions”.
EVEX-encoded instruction, see Table 2-51, “Type E5 Class Exception Conditions”.
Additionally:
#UD If VEX.vvvv != 1111B, or EVEX.vvvv != 1111B.
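To round out the intrinsic list above, a sketch of the legacy SSE4.1 form with a 64-bit memory source (PMOVZXDQ xmm1, xmm2/m64). Whether the compiler folds the 64-bit load into the instruction's memory operand is compiler-dependent; the sample values and the -msse4.1 flag are assumptions of the example.

/* Sketch: legacy PMOVZXDQ through intrinsics. _mm_loadl_epi64 reads only
 * 8 bytes, matching the m64 source operand size. Assumes SSE4.1 (-msse4.1). */
#include <smmintrin.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t src[2] = { 0xFFFFFFFFu, 7u };
    __m128i lo = _mm_loadl_epi64((const __m128i *)src);
    __m128i q  = _mm_cvtepu32_epi64(lo);              /* PMOVZXDQ xmm, xmm/m64 */

    uint64_t out[2];
    _mm_storeu_si128((__m128i *)out, q);
    printf("%" PRIu64 " %" PRIu64 "\n", out[0], out[1]);   /* 4294967295 7 */
    return 0;
}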

This UNOFFICIAL reference was generated from the official Intel® 64 and IA-32 Architectures Software Developer’s Manual by a dumb script. There is no guarantee that some parts aren't mangled or broken and is distributed WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.