VPMOVQD/VPMOVSQD/VPMOVUSQD—Down Convert QWord to DWord

Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
EVEX.128.F3.0F38.W0 35 /r VPMOVQD xmm1/m64 {k1}{z}, xmm2 | A | V/V | AVX512VL AVX512F | Converts 2 packed quad-word integers from xmm2 into 2 packed double-word integers in xmm1/m64 with truncation subject to writemask k1.
EVEX.128.F3.0F38.W0 25 /r VPMOVSQD xmm1/m64 {k1}{z}, xmm2 | A | V/V | AVX512VL AVX512F | Converts 2 packed signed quad-word integers from xmm2 into 2 packed signed double-word integers in xmm1/m64 using signed saturation subject to writemask k1.
EVEX.128.F3.0F38.W0 15 /r VPMOVUSQD xmm1/m64 {k1}{z}, xmm2 | A | V/V | AVX512VL AVX512F | Converts 2 packed unsigned quad-word integers from xmm2 into 2 packed unsigned double-word integers in xmm1/m64 using unsigned saturation subject to writemask k1.
EVEX.256.F3.0F38.W0 35 /r VPMOVQD xmm1/m128 {k1}{z}, ymm2 | A | V/V | AVX512VL AVX512F | Converts 4 packed quad-word integers from ymm2 into 4 packed double-word integers in xmm1/m128 with truncation subject to writemask k1.
EVEX.256.F3.0F38.W0 25 /r VPMOVSQD xmm1/m128 {k1}{z}, ymm2 | A | V/V | AVX512VL AVX512F | Converts 4 packed signed quad-word integers from ymm2 into 4 packed signed double-word integers in xmm1/m128 using signed saturation subject to writemask k1.
EVEX.256.F3.0F38.W0 15 /r VPMOVUSQD xmm1/m128 {k1}{z}, ymm2 | A | V/V | AVX512VL AVX512F | Converts 4 packed unsigned quad-word integers from ymm2 into 4 packed unsigned double-word integers in xmm1/m128 using unsigned saturation subject to writemask k1.
EVEX.512.F3.0F38.W0 35 /r VPMOVQD ymm1/m256 {k1}{z}, zmm2 | A | V/V | AVX512F | Converts 8 packed quad-word integers from zmm2 into 8 packed double-word integers in ymm1/m256 with truncation subject to writemask k1.
EVEX.512.F3.0F38.W0 25 /r VPMOVSQD ymm1/m256 {k1}{z}, zmm2 | A | V/V | AVX512F | Converts 8 packed signed quad-word integers from zmm2 into 8 packed signed double-word integers in ymm1/m256 using signed saturation subject to writemask k1.
EVEX.512.F3.0F38.W0 15 /r VPMOVUSQD ymm1/m256 {k1}{z}, zmm2 | A | V/V | AVX512F | Converts 8 packed unsigned quad-word integers from zmm2 into 8 packed unsigned double-word integers in ymm1/m256 using unsigned saturation subject to writemask k1.

Instruction Operand Encoding

Op/En | Tuple Type | Operand 1 | Operand 2 | Operand 3 | Operand 4
A | Half Mem | ModRM:r/m (w) | ModRM:reg (r) | NA | NA

Description

VPMOVQD down converts 64-bit integer elements in the source operand (the second operand) into packed doublewords using truncation. VPMOVSQD converts signed 64-bit integers into packed signed doublewords using signed saturation. VPMOVUSQD converts unsigned quad-word values into unsigned double-word values using unsigned saturation. The source operand is a ZMM/YMM/XMM register. The destination operand is a YMM/XMM/XMM register or a 256/128/64-bit memory location.

Down-converted doubleword elements are written to the destination operand (the first operand) from the least-significant doubleword. Doubleword elements of the destination operand are updated according to the writemask. Bits (MAXVL-1:256/128/64) of the register destination are zeroed.

EVEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.
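The three forms differ only in how an out-of-range quadword is narrowed. As an illustration, the following C sketch (our own example, not part of the SDM; assumes a CPU and compiler with AVX512F and AVX512VL, e.g. gcc -mavx512f -mavx512vl) runs the same two qwords through the three 128-bit intrinsic forms:

    #include <immintrin.h>
    #include <inttypes.h>
    #include <stdio.h>

    int main(void) {
        /* Element 0 = -5, element 1 = 0x100000001 (does not fit in 32 bits). */
        __m128i src = _mm_set_epi64x(0x100000001LL, -5LL);

        __m128i t = _mm_cvtepi64_epi32(src);   /* VPMOVQD:   keep low 32 bits              */
        __m128i s = _mm_cvtsepi64_epi32(src);  /* VPMOVSQD:  clamp to [INT32_MIN,INT32_MAX] */
        __m128i u = _mm_cvtusepi64_epi32(src); /* VPMOVUSQD: clamp to [0, UINT32_MAX]       */

        uint32_t r[3][4];
        _mm_storeu_si128((__m128i *)r[0], t);
        _mm_storeu_si128((__m128i *)r[1], s);
        _mm_storeu_si128((__m128i *)r[2], u);

        /* Expected low two dwords:
         *   truncate:     0xFFFFFFFB, 0x00000001  (low halves kept verbatim)
         *   signed sat:   0xFFFFFFFB, 0x7FFFFFFF  (-5 fits; 0x100000001 > INT32_MAX)
         *   unsigned sat: 0xFFFFFFFF, 0xFFFFFFFF  (both qwords, read as unsigned,
         *                                          exceed UINT32_MAX)                */
        printf("trunc: %08" PRIx32 " %08" PRIx32 "\n", r[0][0], r[0][1]);
        printf("ssat:  %08" PRIx32 " %08" PRIx32 "\n", r[1][0], r[1][1]);
        printf("usat:  %08" PRIx32 " %08" PRIx32 "\n", r[2][0], r[2][1]);
        return 0;
    }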

Operation

VPMOVQD instruction (EVEX encoded version) reg-reg form
(KL, VL) = (2, 128), (4, 256), (8, 512)
FOR j := 0 TO KL-1
    i := j * 32
    m := j * 64
    IF k1[j] OR *no writemask*
        THEN DEST[i+31:i] := TruncateQuadWordToDWord (SRC[m+63:m])
        ELSE
            IF *merging-masking* ; merging-masking
                THEN *DEST[i+31:i] remains unchanged*
                ELSE *zeroing-masking* ; zeroing-masking
                    DEST[i+31:i] := 0
            FI
    FI;
ENDFOR
DEST[MAXVL-1:VL/2] := 0;

VPMOVQD instruction (EVEX encoded version) memory form
(KL, VL) = (2, 128), (4, 256), (8, 512)
FOR j := 0 TO KL-1
    i := j * 32
    m := j * 64
    IF k1[j] OR *no writemask*
        THEN DEST[i+31:i] := TruncateQuadWordToDWord (SRC[m+63:m])
        ELSE *DEST[i+31:i] remains unchanged* ; merging-masking
    FI;
ENDFOR

VPMOVSQD instruction (EVEX encoded version) reg-reg form
(KL, VL) = (2, 128), (4, 256), (8, 512)
FOR j := 0 TO KL-1
    i := j * 32
    m := j * 64
    IF k1[j] OR *no writemask*
        THEN DEST[i+31:i] := SaturateSignedQuadWordToDWord (SRC[m+63:m])
        ELSE
            IF *merging-masking* ; merging-masking
                THEN *DEST[i+31:i] remains unchanged*
                ELSE *zeroing-masking* ; zeroing-masking
                    DEST[i+31:i] := 0
            FI
    FI;
ENDFOR
DEST[MAXVL-1:VL/2] := 0;
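The helper functions used in the pseudocode are not defined inline in the manual. A plain-C model of their behavior, inferred from the Description above (the coding is ours, not Intel's), is:

    #include <stdint.h>

    /* Scalar models of the Operation helpers (our own coding). */
    static uint32_t TruncateQuadWordToDWord(uint64_t src) {
        return (uint32_t)src;                              /* keep low 32 bits */
    }

    static uint32_t SaturateSignedQuadWordToDWord(int64_t src) {
        if (src > INT32_MAX) return (uint32_t)INT32_MAX;   /* 0x7FFFFFFF */
        if (src < INT32_MIN) return (uint32_t)INT32_MIN;   /* 0x80000000 */
        return (uint32_t)(int32_t)src;
    }

    static uint32_t SaturateUnsignedQuadWordToDWord(uint64_t src) {
        return src > UINT32_MAX ? UINT32_MAX : (uint32_t)src;
    }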

VPMOVSQD instruction (EVEX encoded version) memory form
(KL, VL) = (2, 128), (4, 256), (8, 512)
FOR j := 0 TO KL-1
    i := j * 32
    m := j * 64
    IF k1[j] OR *no writemask*
        THEN DEST[i+31:i] := SaturateSignedQuadWordToDWord (SRC[m+63:m])
        ELSE *DEST[i+31:i] remains unchanged* ; merging-masking
    FI;
ENDFOR

VPMOVUSQD instruction (EVEX encoded version) reg-reg form
(KL, VL) = (2, 128), (4, 256), (8, 512)
FOR j := 0 TO KL-1
    i := j * 32
    m := j * 64
    IF k1[j] OR *no writemask*
        THEN DEST[i+31:i] := SaturateUnsignedQuadWordToDWord (SRC[m+63:m])
        ELSE
            IF *merging-masking* ; merging-masking
                THEN *DEST[i+31:i] remains unchanged*
                ELSE *zeroing-masking* ; zeroing-masking
                    DEST[i+31:i] := 0
            FI
    FI;
ENDFOR
DEST[MAXVL-1:VL/2] := 0;

VPMOVUSQD instruction (EVEX encoded version) memory form
(KL, VL) = (2, 128), (4, 256), (8, 512)
FOR j := 0 TO KL-1
    i := j * 32
    m := j * 64
    IF k1[j] OR *no writemask*
        THEN DEST[i+31:i] := SaturateUnsignedQuadWordToDWord (SRC[m+63:m])
        ELSE *DEST[i+31:i] remains unchanged* ; merging-masking
    FI;
ENDFOR
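Note that zeroing-masking appears only in the reg-reg forms; the memory forms always merge. The following sketch (our own example, assuming AVX512F) contrasts merging- and zeroing-masking using the 512-bit register-destination intrinsics listed in the next section:

    #include <immintrin.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        __m512i a   = _mm512_set1_epi64(7);   /* eight qword elements of 7    */
        __m256i old = _mm256_set1_epi32(-1);  /* pre-existing destination     */
        __mmask8 k  = 0x0F;                   /* select the low four elements */

        /* Merging-masking: unselected dwords keep their old value (-1). */
        __m256i merged = _mm512_mask_cvtepi64_epi32(old, k, a);
        /* Zeroing-masking: unselected dwords are cleared to 0. */
        __m256i zeroed = _mm512_maskz_cvtepi64_epi32(k, a);

        int32_t m[8], z[8];
        _mm256_storeu_si256((__m256i *)m, merged);
        _mm256_storeu_si256((__m256i *)z, zeroed);
        for (int i = 0; i < 8; i++)   /* dwords 0..3: 7/7; dwords 4..7: -1/0 */
            printf("dword %d: merged=%d zeroed=%d\n", i, m[i], z[i]);
        return 0;
    }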

Intel C/C++ Compiler Intrinsic Equivalents

VPMOVQD __m256i _mm512_cvtepi64_epi32( __m512i a);
VPMOVQD __m256i _mm512_mask_cvtepi64_epi32(__m256i s, __mmask8 k, __m512i a);
VPMOVQD __m256i _mm512_maskz_cvtepi64_epi32( __mmask8 k, __m512i a);
VPMOVQD void _mm512_mask_cvtepi64_storeu_epi32(void * d, __mmask8 k, __m512i a);
VPMOVSQD __m256i _mm512_cvtsepi64_epi32( __m512i a);
VPMOVSQD __m256i _mm512_mask_cvtsepi64_epi32(__m256i s, __mmask8 k, __m512i a);
VPMOVSQD __m256i _mm512_maskz_cvtsepi64_epi32( __mmask8 k, __m512i a);
VPMOVSQD void _mm512_mask_cvtsepi64_storeu_epi32(void * d, __mmask8 k, __m512i a);
VPMOVUSQD __m256i _mm512_cvtusepi64_epi32( __m512i a);
VPMOVUSQD __m256i _mm512_mask_cvtusepi64_epi32(__m256i s, __mmask8 k, __m512i a);
VPMOVUSQD __m256i _mm512_maskz_cvtusepi64_epi32( __mmask8 k, __m512i a);
VPMOVUSQD void _mm512_mask_cvtusepi64_storeu_epi32(void * d, __mmask8 k, __m512i a);
VPMOVUSQD __m128i _mm256_cvtusepi64_epi32(__m256i a);
VPMOVUSQD __m128i _mm256_mask_cvtusepi64_epi32(__m128i a, __mmask8 k, __m256i b);
VPMOVUSQD __m128i _mm256_maskz_cvtusepi64_epi32( __mmask8 k, __m256i b);
VPMOVUSQD void _mm256_mask_cvtusepi64_storeu_epi32(void * , __mmask8 k, __m256i b);
VPMOVUSQD __m128i _mm_cvtusepi64_epi32(__m128i a);
VPMOVUSQD __m128i _mm_mask_cvtusepi64_epi32(__m128i a, __mmask8 k, __m128i b);
VPMOVUSQD __m128i _mm_maskz_cvtusepi64_epi32( __mmask8 k, __m128i b);
VPMOVUSQD void _mm_mask_cvtusepi64_storeu_epi32(void * , __mmask8 k, __m128i b);
VPMOVSQD __m128i _mm256_cvtsepi64_epi32(__m256i a);
VPMOVSQD __m128i _mm256_mask_cvtsepi64_epi32(__m128i a, __mmask8 k, __m256i b);
VPMOVSQD __m128i _mm256_maskz_cvtsepi64_epi32( __mmask8 k, __m256i b);
VPMOVSQD void _mm256_mask_cvtsepi64_storeu_epi32(void * , __mmask8 k, __m256i b);
VPMOVSQD __m128i _mm_cvtsepi64_epi32(__m128i a);
VPMOVSQD __m128i _mm_mask_cvtsepi64_epi32(__m128i a, __mmask8 k, __m128i b);
VPMOVSQD __m128i _mm_maskz_cvtsepi64_epi32( __mmask8 k, __m128i b);
VPMOVSQD void _mm_mask_cvtsepi64_storeu_epi32(void * , __mmask8 k, __m128i b);
VPMOVQD __m128i _mm256_cvtepi64_epi32(__m256i a);
VPMOVQD __m128i _mm256_mask_cvtepi64_epi32(__m128i a, __mmask8 k, __m256i b);
VPMOVQD __m128i _mm256_maskz_cvtepi64_epi32( __mmask8 k, __m256i b);
VPMOVQD void _mm256_mask_cvtepi64_storeu_epi32(void * , __mmask8 k, __m256i b);
VPMOVQD __m128i _mm_cvtepi64_epi32(__m128i a);
VPMOVQD __m128i _mm_mask_cvtepi64_epi32(__m128i a, __mmask8 k, __m128i b);
VPMOVQD __m128i _mm_maskz_cvtepi64_epi32( __mmask8 k, __m128i b);
VPMOVQD void _mm_mask_cvtepi64_storeu_epi32(void * , __mmask8 k, __m128i b);

SIMD Floating-Point Exceptions

None

Other Exceptions

EVEX-encoded instruction, see Table 2-53, “Type E6 Class Exception Conditions”; additionally:
#UD If EVEX.vvvv != 1111B.
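As a usage sketch for the masked store (memory-form) intrinsics above: the function below is a hypothetical example of ours, not from the SDM; it assumes AVX512F and that dst has room for eight doublewords.

    #include <immintrin.h>
    #include <stdint.h>

    /* Hypothetical helper (ours): narrow eight qwords to dwords, writing only
     * the elements selected by k; unselected dwords in memory stay untouched,
     * matching the memory-form (merging-only) Operation above. */
    void narrow_active(int32_t *dst, const int64_t *src, __mmask8 k) {
        __m512i v = _mm512_loadu_si512((const void *)src); /* load 8 qwords  */
        _mm512_mask_cvtepi64_storeu_epi32(dst, k, v);      /* VPMOVQD to mem */
    }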

This UNOFFICIAL reference was generated from the official Intel® 64 and IA-32 Architectures Software Developer’s Manual by a dumb script. There is no guarantee that some parts aren’t mangled or broken, and it is distributed WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.