VPMOVQD/VPMOVSQD/VPMOVUSQD—Down Convert QWord to DWord

Each entry below lists: Opcode/Instruction; (Op/En; 64/32 bit Mode Support; CPUID Feature Flag); Description.

EVEX.128.F3.0F38.W0 35 /r VPMOVQD xmm1/m64 {k1}{z}, xmm2 (A; V/V; AVX512VL AVX512F)
    Converts 2 packed quad-word integers from xmm2 into 2 packed double-word integers in xmm1/m64 with truncation subject to writemask k1.

EVEX.128.F3.0F38.W0 25 /r VPMOVSQD xmm1/m64 {k1}{z}, xmm2 (A; V/V; AVX512VL AVX512F)
    Converts 2 packed signed quad-word integers from xmm2 into 2 packed signed double-word integers in xmm1/m64 using signed saturation subject to writemask k1.

EVEX.128.F3.0F38.W0 15 /r VPMOVUSQD xmm1/m64 {k1}{z}, xmm2 (A; V/V; AVX512VL AVX512F)
    Converts 2 packed unsigned quad-word integers from xmm2 into 2 packed unsigned double-word integers in xmm1/m64 using unsigned saturation subject to writemask k1.

EVEX.256.F3.0F38.W0 35 /r VPMOVQD xmm1/m128 {k1}{z}, ymm2 (A; V/V; AVX512VL AVX512F)
    Converts 4 packed quad-word integers from ymm2 into 4 packed double-word integers in xmm1/m128 with truncation subject to writemask k1.

EVEX.256.F3.0F38.W0 25 /r VPMOVSQD xmm1/m128 {k1}{z}, ymm2 (A; V/V; AVX512VL AVX512F)
    Converts 4 packed signed quad-word integers from ymm2 into 4 packed signed double-word integers in xmm1/m128 using signed saturation subject to writemask k1.

EVEX.256.F3.0F38.W0 15 /r VPMOVUSQD xmm1/m128 {k1}{z}, ymm2 (A; V/V; AVX512VL AVX512F)
    Converts 4 packed unsigned quad-word integers from ymm2 into 4 packed unsigned double-word integers in xmm1/m128 using unsigned saturation subject to writemask k1.

EVEX.512.F3.0F38.W0 35 /r VPMOVQD ymm1/m256 {k1}{z}, zmm2 (A; V/V; AVX512F)
    Converts 8 packed quad-word integers from zmm2 into 8 packed double-word integers in ymm1/m256 with truncation subject to writemask k1.

EVEX.512.F3.0F38.W0 25 /r VPMOVSQD ymm1/m256 {k1}{z}, zmm2 (A; V/V; AVX512F)
    Converts 8 packed signed quad-word integers from zmm2 into 8 packed signed double-word integers in ymm1/m256 using signed saturation subject to writemask k1.

EVEX.512.F3.0F38.W0 15 /r VPMOVUSQD ymm1/m256 {k1}{z}, zmm2 (A; V/V; AVX512F)
    Converts 8 packed unsigned quad-word integers from zmm2 into 8 packed unsigned double-word integers in ymm1/m256 using unsigned saturation subject to writemask k1.

Description

VPMOVQD down converts 64-bit integer elements in the source operand (the second operand) into packed doublewords using truncation. VPMOVSQD converts signed 64-bit integers into packed signed doublewords using signed saturation. VPMOVUSQD converts unsigned quad-word values into unsigned double-word values using unsigned saturation.

The source operand is a ZMM/YMM/XMM register. The destination operand is a YMM/XMM/XMM register or a 256/128/64-bit memory location.

Down-converted doubleword elements are written to the destination operand (the first operand) from the least-significant doubleword. Doubleword elements of the destination operand are updated according to the writemask. Bits (MAXVL-1:256/128/64) of the register destination are zeroed.

EVEX.vvvv is reserved and must be 1111b, otherwise instructions will #UD.
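The practical difference between the three conversions is easiest to see in scalar form. The following C sketch is illustrative only: the helper names (trunc_q2d, sat_signed_q2d, sat_unsigned_q2d) are invented here to mirror the per-element functions used in the Operation section below, and are not part of any library.

#include <stdint.h>

/* Per-element models of the quad-word to double-word conversions. */
static uint32_t trunc_q2d(int64_t x)            /* VPMOVQD */
{
    return (uint32_t)x;                         /* keep the low 32 bits */
}

static int32_t sat_signed_q2d(int64_t x)        /* VPMOVSQD */
{
    if (x > (int64_t)INT32_MAX) return INT32_MAX;   /* clamp to 0x7FFFFFFF */
    if (x < (int64_t)INT32_MIN) return INT32_MIN;   /* clamp to 0x80000000 */
    return (int32_t)x;
}

static uint32_t sat_unsigned_q2d(uint64_t x)    /* VPMOVUSQD */
{
    return x > (uint64_t)UINT32_MAX ? UINT32_MAX : (uint32_t)x;
}

For example, given an input of 2^32 (0x100000000): truncation yields 0, signed saturation yields 0x7FFFFFFF, and unsigned saturation yields 0xFFFFFFFF.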
Instruction Operand Encoding

Op/En: A; Tuple Type: Half Mem; Operand 1: ModRM:r/m (w); Operand 2: ModRM:reg (r); Operand 3: NA; Operand 4: NA

Operation

VPMOVQD instruction (EVEX encoded version) reg-reg form
(KL, VL) = (2, 128), (4, 256), (8, 512)
FOR j := 0 TO KL-1
    i := j * 32
    m := j * 64
    IF k1[j] OR *no writemask*
        THEN DEST[i+31:i] := TruncateQuadWordToDWord (SRC[m+63:m])
        ELSE
            IF *merging-masking* ; merging-masking
                THEN *DEST[i+31:i] remains unchanged*
                ELSE *zeroing-masking* ; zeroing-masking
                    DEST[i+31:i] := 0
            FI
    FI;
ENDFOR
DEST[MAXVL-1:VL/2] := 0;

VPMOVQD instruction (EVEX encoded version) memory form
(KL, VL) = (2, 128), (4, 256), (8, 512)
FOR j := 0 TO KL-1
    i := j * 32
    m := j * 64
    IF k1[j] OR *no writemask*
        THEN DEST[i+31:i] := TruncateQuadWordToDWord (SRC[m+63:m])
        ELSE *DEST[i+31:i] remains unchanged* ; merging-masking
    FI;
ENDFOR

VPMOVSQD instruction (EVEX encoded version) reg-reg form
(KL, VL) = (2, 128), (4, 256), (8, 512)
FOR j := 0 TO KL-1
    i := j * 32
    m := j * 64
    IF k1[j] OR *no writemask*
        THEN DEST[i+31:i] := SaturateSignedQuadWordToDWord (SRC[m+63:m])
        ELSE
            IF *merging-masking* ; merging-masking
                THEN *DEST[i+31:i] remains unchanged*
                ELSE *zeroing-masking* ; zeroing-masking
                    DEST[i+31:i] := 0
            FI
    FI;
ENDFOR
DEST[MAXVL-1:VL/2] := 0;

VPMOVSQD instruction (EVEX encoded version) memory form
(KL, VL) = (2, 128), (4, 256), (8, 512)
FOR j := 0 TO KL-1
    i := j * 32
    m := j * 64
    IF k1[j] OR *no writemask*
        THEN DEST[i+31:i] := SaturateSignedQuadWordToDWord (SRC[m+63:m])
        ELSE *DEST[i+31:i] remains unchanged* ; merging-masking
    FI;
ENDFOR

VPMOVUSQD instruction (EVEX encoded version) reg-reg form
(KL, VL) = (2, 128), (4, 256), (8, 512)
FOR j := 0 TO KL-1
    i := j * 32
    m := j * 64
    IF k1[j] OR *no writemask*
        THEN DEST[i+31:i] := SaturateUnsignedQuadWordToDWord (SRC[m+63:m])
        ELSE
            IF *merging-masking* ; merging-masking
                THEN *DEST[i+31:i] remains unchanged*
                ELSE *zeroing-masking* ; zeroing-masking
                    DEST[i+31:i] := 0
            FI
    FI;
ENDFOR
DEST[MAXVL-1:VL/2] := 0;

VPMOVUSQD instruction (EVEX encoded version) memory form
(KL, VL) = (2, 128), (4, 256), (8, 512)
FOR j := 0 TO KL-1
    i := j * 32
    m := j * 64
    IF k1[j] OR *no writemask*
        THEN DEST[i+31:i] := SaturateUnsignedQuadWordToDWord (SRC[m+63:m])
        ELSE *DEST[i+31:i] remains unchanged* ; merging-masking
    FI;
ENDFOR
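The following C sketch, again illustrative rather than normative, models the writemask handling in the pseudocode above: one mask bit per destination doubleword, with zeroing-masking clearing unselected elements and merging-masking leaving them unchanged. The function vpmov_q2d_model and its parameter names are hypothetical.

#include <stddef.h>
#include <stdint.h>

/* Model of the EVEX masking loop: kl is KL (2, 4, or 8), k1 holds one
   writemask bit per element, and convert stands in for truncation or
   either saturation helper. */
static void vpmov_q2d_model(uint32_t *dest, const int64_t *src,
                            uint8_t k1, size_t kl, int zeroing,
                            uint32_t (*convert)(int64_t))
{
    for (size_t j = 0; j < kl; j++) {
        if (k1 & (1u << j))
            dest[j] = convert(src[j]);   /* mask bit set: write the element */
        else if (zeroing)
            dest[j] = 0;                 /* zeroing-masking */
        /* else merging-masking: dest[j] remains unchanged. Note that the
           memory form supports merging-masking only. */
    }
    /* The reg-reg forms additionally zero DEST[MAXVL-1:VL/2]; omitted here. */
}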
Intel C/C++ Compiler Intrinsic Equivalents

VPMOVQD __m256i _mm512_cvtepi64_epi32(__m512i a);
VPMOVQD __m256i _mm512_mask_cvtepi64_epi32(__m256i s, __mmask8 k, __m512i a);
VPMOVQD __m256i _mm512_maskz_cvtepi64_epi32(__mmask8 k, __m512i a);
VPMOVQD void _mm512_mask_cvtepi64_storeu_epi32(void * d, __mmask8 k, __m512i a);
VPMOVSQD __m256i _mm512_cvtsepi64_epi32(__m512i a);
VPMOVSQD __m256i _mm512_mask_cvtsepi64_epi32(__m256i s, __mmask8 k, __m512i a);
VPMOVSQD __m256i _mm512_maskz_cvtsepi64_epi32(__mmask8 k, __m512i a);
VPMOVSQD void _mm512_mask_cvtsepi64_storeu_epi32(void * d, __mmask8 k, __m512i a);
VPMOVUSQD __m256i _mm512_cvtusepi64_epi32(__m512i a);
VPMOVUSQD __m256i _mm512_mask_cvtusepi64_epi32(__m256i s, __mmask8 k, __m512i a);
VPMOVUSQD __m256i _mm512_maskz_cvtusepi64_epi32(__mmask8 k, __m512i a);
VPMOVUSQD void _mm512_mask_cvtusepi64_storeu_epi32(void * d, __mmask8 k, __m512i a);
VPMOVUSQD __m128i _mm256_cvtusepi64_epi32(__m256i a);
VPMOVUSQD __m128i _mm256_mask_cvtusepi64_epi32(__m128i a, __mmask8 k, __m256i b);
VPMOVUSQD __m128i _mm256_maskz_cvtusepi64_epi32(__mmask8 k, __m256i b);
VPMOVUSQD void _mm256_mask_cvtusepi64_storeu_epi32(void *, __mmask8 k, __m256i b);
VPMOVUSQD __m128i _mm_cvtusepi64_epi32(__m128i a);
VPMOVUSQD __m128i _mm_mask_cvtusepi64_epi32(__m128i a, __mmask8 k, __m128i b);
VPMOVUSQD __m128i _mm_maskz_cvtusepi64_epi32(__mmask8 k, __m128i b);
VPMOVUSQD void _mm_mask_cvtusepi64_storeu_epi32(void *, __mmask8 k, __m128i b);
VPMOVSQD __m128i _mm256_cvtsepi64_epi32(__m256i a);
VPMOVSQD __m128i _mm256_mask_cvtsepi64_epi32(__m128i a, __mmask8 k, __m256i b);
VPMOVSQD __m128i _mm256_maskz_cvtsepi64_epi32(__mmask8 k, __m256i b);
VPMOVSQD void _mm256_mask_cvtsepi64_storeu_epi32(void *, __mmask8 k, __m256i b);
VPMOVSQD __m128i _mm_cvtsepi64_epi32(__m128i a);
VPMOVSQD __m128i _mm_mask_cvtsepi64_epi32(__m128i a, __mmask8 k, __m128i b);
VPMOVSQD __m128i _mm_maskz_cvtsepi64_epi32(__mmask8 k, __m128i b);
VPMOVSQD void _mm_mask_cvtsepi64_storeu_epi32(void *, __mmask8 k, __m128i b);
VPMOVQD __m128i _mm256_cvtepi64_epi32(__m256i a);
VPMOVQD __m128i _mm256_mask_cvtepi64_epi32(__m128i a, __mmask8 k, __m256i b);
VPMOVQD __m128i _mm256_maskz_cvtepi64_epi32(__mmask8 k, __m256i b);
VPMOVQD void _mm256_mask_cvtepi64_storeu_epi32(void *, __mmask8 k, __m256i b);
VPMOVQD __m128i _mm_cvtepi64_epi32(__m128i a);
VPMOVQD __m128i _mm_mask_cvtepi64_epi32(__m128i a, __mmask8 k, __m128i b);
VPMOVQD __m128i _mm_maskz_cvtepi64_epi32(__mmask8 k, __m128i b);
VPMOVQD void _mm_mask_cvtepi64_storeu_epi32(void *, __mmask8 k, __m128i b);

SIMD Floating-Point Exceptions

None

Other Exceptions

EVEX-encoded instruction, see Table 2-53, "Type E6 Class Exception Conditions"; additionally:

#UD If EVEX.vvvv != 1111B.
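As a usage sketch for the intrinsics listed above, assuming immintrin.h, an AVX512F/AVX512VL-capable CPU, and compiler flags along the lines of gcc -O2 -mavx512f -mavx512vl (variable names and output layout are illustrative):

#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

static void print_dwords(const char *label, __m128i v)
{
    uint32_t out[4];
    _mm_storeu_si128((__m128i *)out, v);
    printf("%-10s %08x %08x %08x %08x\n", label,
           (unsigned)out[3], (unsigned)out[2],
           (unsigned)out[1], (unsigned)out[0]);
}

int main(void)
{
    /* Four qwords, highest element first: -2^32, 2^32, -1, 7. */
    __m256i q = _mm256_set_epi64x(-0x100000000LL, 0x100000000LL, -1, 7);

    print_dwords("truncate:", _mm256_cvtepi64_epi32(q));   /* VPMOVQD   */
    print_dwords("signed:",   _mm256_cvtsepi64_epi32(q));  /* VPMOVSQD  */
    print_dwords("unsigned:", _mm256_cvtusepi64_epi32(q)); /* VPMOVUSQD */
    return 0;
}

Expected output (highest element first): truncation gives 00000000 00000000 ffffffff 00000007, signed saturation gives 80000000 7fffffff ffffffff 00000007, and unsigned saturation gives ffffffff ffffffff ffffffff 00000007.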