VPMOVQB/VPMOVSQB/VPMOVUSQB—Down Convert QWord to Byte

Opcode/Instruction                                          Op/En  64/32 bit Mode Support  CPUID Feature Flag
EVEX.128.F3.0F38.W0 32 /r VPMOVQB xmm1/m16 {k1}{z}, xmm2    A      V/V                     AVX512VL AVX512F
    Converts 2 packed quad-word integers from xmm2 into 2 packed byte integers in xmm1/m16 with truncation under writemask k1.
EVEX.128.F3.0F38.W0 22 /r VPMOVSQB xmm1/m16 {k1}{z}, xmm2   A      V/V                     AVX512VL AVX512F
    Converts 2 packed signed quad-word integers from xmm2 into 2 packed signed byte integers in xmm1/m16 using signed saturation under writemask k1.
EVEX.128.F3.0F38.W0 12 /r VPMOVUSQB xmm1/m16 {k1}{z}, xmm2  A      V/V                     AVX512VL AVX512F
    Converts 2 packed unsigned quad-word integers from xmm2 into 2 packed unsigned byte integers in xmm1/m16 using unsigned saturation under writemask k1.
EVEX.256.F3.0F38.W0 32 /r VPMOVQB xmm1/m32 {k1}{z}, ymm2    A      V/V                     AVX512VL AVX512F
    Converts 4 packed quad-word integers from ymm2 into 4 packed byte integers in xmm1/m32 with truncation under writemask k1.
EVEX.256.F3.0F38.W0 22 /r VPMOVSQB xmm1/m32 {k1}{z}, ymm2   A      V/V                     AVX512VL AVX512F
    Converts 4 packed signed quad-word integers from ymm2 into 4 packed signed byte integers in xmm1/m32 using signed saturation under writemask k1.
EVEX.256.F3.0F38.W0 12 /r VPMOVUSQB xmm1/m32 {k1}{z}, ymm2  A      V/V                     AVX512VL AVX512F
    Converts 4 packed unsigned quad-word integers from ymm2 into 4 packed unsigned byte integers in xmm1/m32 using unsigned saturation under writemask k1.
EVEX.512.F3.0F38.W0 32 /r VPMOVQB xmm1/m64 {k1}{z}, zmm2    A      V/V                     AVX512F
    Converts 8 packed quad-word integers from zmm2 into 8 packed byte integers in xmm1/m64 with truncation under writemask k1.
EVEX.512.F3.0F38.W0 22 /r VPMOVSQB xmm1/m64 {k1}{z}, zmm2   A      V/V                     AVX512F
    Converts 8 packed signed quad-word integers from zmm2 into 8 packed signed byte integers in xmm1/m64 using signed saturation under writemask k1.
EVEX.512.F3.0F38.W0 12 /r VPMOVUSQB xmm1/m64 {k1}{z}, zmm2  A      V/V                     AVX512F
    Converts 8 packed unsigned quad-word integers from zmm2 into 8 packed unsigned byte integers in xmm1/m64 using unsigned saturation under writemask k1.

Instruction Operand Encoding

Op/En  Tuple Type  Operand 1      Operand 2      Operand 3  Operand 4
A      Eighth Mem  ModRM:r/m (w)  ModRM:reg (r)  NA         NA

Description

VPMOVQB down-converts 64-bit integer elements in the source operand (the second operand) into packed byte elements using truncation. VPMOVSQB converts signed 64-bit integers into packed signed bytes using signed saturation. VPMOVUSQB converts unsigned quad-word values into unsigned byte values using unsigned saturation.

The source operand is a ZMM/YMM/XMM register. The destination operand is an XMM register or a memory location. Down-converted byte elements are written to the destination operand (the first operand) starting from the least-significant byte. Byte elements of the destination operand are updated according to the writemask. Bits (MAXVL-1:64) of the destination are zeroed.

EVEX.vvvv is reserved and must be 1111b; otherwise the instruction will #UD.
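The three forms differ only in how an out-of-range quad-word is narrowed. The following scalar C sketch (an illustration, not part of the manual; the helper names are ours) models the three rules:

    #include <stdint.h>

    /* VPMOVQB: truncation keeps only the low 8 bits of each qword. */
    static uint8_t truncate_qword_to_byte(uint64_t q) {
        return (uint8_t)q;                              /* 0x1234 -> 0x34 */
    }

    /* VPMOVSQB: signed saturation clamps to [-128, 127]. */
    static int8_t saturate_signed_qword_to_byte(int64_t q) {
        if (q > INT8_MAX) return INT8_MAX;              /* 0x1234 -> 0x7F */
        if (q < INT8_MIN) return INT8_MIN;
        return (int8_t)q;
    }

    /* VPMOVUSQB: unsigned saturation clamps to [0, 255]. */
    static uint8_t saturate_unsigned_qword_to_byte(uint64_t q) {
        return q > UINT8_MAX ? UINT8_MAX : (uint8_t)q;  /* 0x1234 -> 0xFF */
    }

For example, the quad-word 0x1234 truncates to 0x34, but saturates to 0x7F (signed) or 0xFF (unsigned).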
Operation

VPMOVQB instruction (EVEX encoded versions) when dest is a register
(KL, VL) = (2, 128), (4, 256), (8, 512)
FOR j := 0 TO KL-1
    i := j * 8
    m := j * 64
    IF k1[j] OR *no writemask*
        THEN DEST[i+7:i] := TruncateQuadWordToByte (SRC[m+63:m])
        ELSE
            IF *merging-masking*    ; merging-masking
                THEN *DEST[i+7:i] remains unchanged*
                ELSE                ; zeroing-masking
                    DEST[i+7:i] := 0
            FI
    FI;
ENDFOR
DEST[MAXVL-1:VL/8] := 0;

VPMOVQB instruction (EVEX encoded versions) when dest is memory
(KL, VL) = (2, 128), (4, 256), (8, 512)
FOR j := 0 TO KL-1
    i := j * 8
    m := j * 64
    IF k1[j] OR *no writemask*
        THEN DEST[i+7:i] := TruncateQuadWordToByte (SRC[m+63:m])
        ELSE *DEST[i+7:i] remains unchanged*    ; merging-masking
    FI;
ENDFOR

VPMOVSQB instruction (EVEX encoded versions) when dest is a register
(KL, VL) = (2, 128), (4, 256), (8, 512)
FOR j := 0 TO KL-1
    i := j * 8
    m := j * 64
    IF k1[j] OR *no writemask*
        THEN DEST[i+7:i] := SaturateSignedQuadWordToByte (SRC[m+63:m])
        ELSE
            IF *merging-masking*    ; merging-masking
                THEN *DEST[i+7:i] remains unchanged*
                ELSE                ; zeroing-masking
                    DEST[i+7:i] := 0
            FI
    FI;
ENDFOR
DEST[MAXVL-1:VL/8] := 0;

VPMOVSQB instruction (EVEX encoded versions) when dest is memory
(KL, VL) = (2, 128), (4, 256), (8, 512)
FOR j := 0 TO KL-1
    i := j * 8
    m := j * 64
    IF k1[j] OR *no writemask*
        THEN DEST[i+7:i] := SaturateSignedQuadWordToByte (SRC[m+63:m])
        ELSE *DEST[i+7:i] remains unchanged*    ; merging-masking
    FI;
ENDFOR

VPMOVUSQB instruction (EVEX encoded versions) when dest is a register
(KL, VL) = (2, 128), (4, 256), (8, 512)
FOR j := 0 TO KL-1
    i := j * 8
    m := j * 64
    IF k1[j] OR *no writemask*
        THEN DEST[i+7:i] := SaturateUnsignedQuadWordToByte (SRC[m+63:m])
        ELSE
            IF *merging-masking*    ; merging-masking
                THEN *DEST[i+7:i] remains unchanged*
                ELSE                ; zeroing-masking
                    DEST[i+7:i] := 0
            FI
    FI;
ENDFOR
DEST[MAXVL-1:VL/8] := 0;

VPMOVUSQB instruction (EVEX encoded versions) when dest is memory
(KL, VL) = (2, 128), (4, 256), (8, 512)
FOR j := 0 TO KL-1
    i := j * 8
    m := j * 64
    IF k1[j] OR *no writemask*
        THEN DEST[i+7:i] := SaturateUnsignedQuadWordToByte (SRC[m+63:m])
        ELSE *DEST[i+7:i] remains unchanged*    ; merging-masking
    FI;
ENDFOR
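To make the masking behavior concrete, here is a C model of the register-destination pseudocode above for the 512-bit VPMOVSQB form (KL = 8). This is an illustrative sketch under our own naming and data layout, not the manual's definition:

    #include <stdint.h>
    #include <stdbool.h>
    #include <string.h>

    /* Models the register-form loop for VPMOVSQB with (KL, VL) = (8, 512):
       dest is the low 16 bytes of the destination register, src holds the
       8 signed qwords of the source register, k1 is the 8-bit writemask. */
    static void vpmovsqb_reg_model(int8_t dest[16], const int64_t src[8],
                                   uint8_t k1, bool zeroing) {
        for (int j = 0; j < 8; j++) {
            if (k1 & (1u << j)) {
                int64_t q = src[j];
                dest[j] = q > INT8_MAX ? INT8_MAX
                        : q < INT8_MIN ? INT8_MIN : (int8_t)q;
            } else if (zeroing) {
                dest[j] = 0;        /* zeroing-masking */
            }                       /* merging-masking: byte unchanged */
        }
        memset(dest + 8, 0, 8);     /* DEST[MAXVL-1:VL/8] := 0 */
    }

The memory-destination forms differ only in the ELSE branch: unselected bytes in memory are never written, so only merging-masking applies.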
Intel C/C++ Compiler Intrinsic Equivalents

VPMOVQB __m128i _mm512_cvtepi64_epi8(__m512i a);
VPMOVQB __m128i _mm512_mask_cvtepi64_epi8(__m128i s, __mmask8 k, __m512i a);
VPMOVQB __m128i _mm512_maskz_cvtepi64_epi8(__mmask8 k, __m512i a);
VPMOVQB void _mm512_mask_cvtepi64_storeu_epi8(void * d, __mmask8 k, __m512i a);
VPMOVSQB __m128i _mm512_cvtsepi64_epi8(__m512i a);
VPMOVSQB __m128i _mm512_mask_cvtsepi64_epi8(__m128i s, __mmask8 k, __m512i a);
VPMOVSQB __m128i _mm512_maskz_cvtsepi64_epi8(__mmask8 k, __m512i a);
VPMOVSQB void _mm512_mask_cvtsepi64_storeu_epi8(void * d, __mmask8 k, __m512i a);
VPMOVUSQB __m128i _mm512_cvtusepi64_epi8(__m512i a);
VPMOVUSQB __m128i _mm512_mask_cvtusepi64_epi8(__m128i s, __mmask8 k, __m512i a);
VPMOVUSQB __m128i _mm512_maskz_cvtusepi64_epi8(__mmask8 k, __m512i a);
VPMOVUSQB void _mm512_mask_cvtusepi64_storeu_epi8(void * d, __mmask8 k, __m512i a);
VPMOVUSQB __m128i _mm256_cvtusepi64_epi8(__m256i a);
VPMOVUSQB __m128i _mm256_mask_cvtusepi64_epi8(__m128i a, __mmask8 k, __m256i b);
VPMOVUSQB __m128i _mm256_maskz_cvtusepi64_epi8(__mmask8 k, __m256i b);
VPMOVUSQB void _mm256_mask_cvtusepi64_storeu_epi8(void * d, __mmask8 k, __m256i b);
VPMOVUSQB __m128i _mm_cvtusepi64_epi8(__m128i a);
VPMOVUSQB __m128i _mm_mask_cvtusepi64_epi8(__m128i a, __mmask8 k, __m128i b);
VPMOVUSQB __m128i _mm_maskz_cvtusepi64_epi8(__mmask8 k, __m128i b);
VPMOVUSQB void _mm_mask_cvtusepi64_storeu_epi8(void * d, __mmask8 k, __m128i b);
VPMOVSQB __m128i _mm256_cvtsepi64_epi8(__m256i a);
VPMOVSQB __m128i _mm256_mask_cvtsepi64_epi8(__m128i a, __mmask8 k, __m256i b);
VPMOVSQB __m128i _mm256_maskz_cvtsepi64_epi8(__mmask8 k, __m256i b);
VPMOVSQB void _mm256_mask_cvtsepi64_storeu_epi8(void * d, __mmask8 k, __m256i b);
VPMOVSQB __m128i _mm_cvtsepi64_epi8(__m128i a);
VPMOVSQB __m128i _mm_mask_cvtsepi64_epi8(__m128i a, __mmask8 k, __m128i b);
VPMOVSQB __m128i _mm_maskz_cvtsepi64_epi8(__mmask8 k, __m128i b);
VPMOVSQB void _mm_mask_cvtsepi64_storeu_epi8(void * d, __mmask8 k, __m128i b);
VPMOVQB __m128i _mm256_cvtepi64_epi8(__m256i a);
VPMOVQB __m128i _mm256_mask_cvtepi64_epi8(__m128i a, __mmask8 k, __m256i b);
VPMOVQB __m128i _mm256_maskz_cvtepi64_epi8(__mmask8 k, __m256i b);
VPMOVQB void _mm256_mask_cvtepi64_storeu_epi8(void * d, __mmask8 k, __m256i b);
VPMOVQB __m128i _mm_cvtepi64_epi8(__m128i a);
VPMOVQB __m128i _mm_mask_cvtepi64_epi8(__m128i a, __mmask8 k, __m128i b);
VPMOVQB __m128i _mm_maskz_cvtepi64_epi8(__mmask8 k, __m128i b);
VPMOVQB void _mm_mask_cvtepi64_storeu_epi8(void * d, __mmask8 k, __m128i b);

SIMD Floating-Point Exceptions

None

Other Exceptions

EVEX-encoded instruction, see Table 2-53, “Type E6 Class Exception Conditions”; additionally:

#UD    If EVEX.vvvv != 1111B.

This UNOFFICIAL reference was generated from the official Intel® 64 and IA-32 Architectures Software Developer’s Manual by a dumb script. There is no guarantee that some parts aren't mangled or broken, and it is distributed WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
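As a usage illustration of the intrinsic equivalents listed earlier (our own example, not from the manual), the following program narrows eight signed quad-words to bytes with signed saturation using the zeroing-masked 512-bit form; it requires a CPU with AVX512F and a compiler flag such as -mavx512f:

    #include <immintrin.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        int64_t in[8] = { 1, -1, 200, -200, INT64_MAX, INT64_MIN, 42, -42 };
        __m512i v = _mm512_loadu_si512(in);

        /* VPMOVSQB, zeroing form: unselected byte lanes become 0. */
        __m128i narrowed = _mm512_maskz_cvtsepi64_epi8(0xFF, v);

        int8_t out[16];
        _mm_storeu_si128((__m128i *)out, narrowed);
        for (int i = 0; i < 8; i++)
            printf("%d ", out[i]);  /* prints: 1 -1 127 -128 127 -128 42 -42 */
        printf("\n");
        return 0;
    }

Out-of-range values (200, -200, INT64_MAX, INT64_MIN) clamp to 127 or -128 rather than wrapping, which is the difference between this form and plain VPMOVQB truncation.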