VPERMQ—Qwords Element Permutation

Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
VEX.256.66.0F3A.W1 00 /r ib VPERMQ ymm1, ymm2/m256, imm8 | A | V/V | AVX2 | Permute qwords in ymm2/m256 using indices in imm8 and store the result in ymm1.
EVEX.256.66.0F3A.W1 00 /r ib VPERMQ ymm1 {k1}{z}, ymm2/m256/m64bcst, imm8 | B | V/V | AVX512VL AVX512F | Permute qwords in ymm2/m256/m64bcst using indices in imm8 and store the result in ymm1.
EVEX.512.66.0F3A.W1 00 /r ib VPERMQ zmm1 {k1}{z}, zmm2/m512/m64bcst, imm8 | B | V/V | AVX512F | Permute qwords in zmm2/m512/m64bcst using indices in imm8 and store the result in zmm1.
EVEX.256.66.0F38.W1 36 /r VPERMQ ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcst | C | V/V | AVX512VL AVX512F | Permute qwords in ymm3/m256/m64bcst using indices in ymm2 and store the result in ymm1.
EVEX.512.66.0F38.W1 36 /r VPERMQ zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst | C | V/V | AVX512F | Permute qwords in zmm3/m512/m64bcst using indices in zmm2 and store the result in zmm1.

Instruction Operand Encoding

Op/En | Tuple Type | Operand 1 | Operand 2 | Operand 3 | Operand 4
A | NA | ModRM:reg (w) | ModRM:r/m (r) | imm8 | NA
B | Full | ModRM:reg (w) | ModRM:r/m (r) | imm8 | NA
C | Full | ModRM:reg (w) | EVEX.vvvv (r) | ModRM:r/m (r) | NA

Description

The imm8 version: Copies quadwords from the source operand (the second operand) to the destination operand (the first operand) according to the indices specified by the immediate operand (the third operand). Each two-bit field in the immediate byte selects a qword element in the source operand.

VEX version: The source operand can be a YMM register or a memory location. Bits (MAXVL-1:256) of the corresponding destination register are zeroed.

EVEX.512 encoded version: The elements in the destination are updated using the writemask k1, and the imm8 control bits are reused for the upper 256-bit half; that is, each 256-bit half of the source is permuted with the same immediate. The source operand can be a ZMM register, a 512-bit memory location, or a 512-bit vector broadcast from a 64-bit memory location.

Immediate control versions: VEX.vvvv and EVEX.vvvv are reserved and must be 1111b, otherwise the instruction will #UD.

The vector control version: Copies quadwords from the second source operand (the third operand) to the destination operand (the first operand) according to the indices in the first source operand (the second operand). The low 3 bits of each 64-bit element in the index operand (the low 2 bits for the 256-bit form) select which quadword in the second source operand to copy. The first and second operands are ZMM registers; the third operand can be a ZMM register, a 512-bit memory location, or a 512-bit vector broadcast from a 64-bit memory location. The elements in the destination are updated using the writemask k1.

Note that this instruction permits a qword in the source operand to be copied to multiple locations in the destination operand.

If VPERMQ is encoded with VEX.L = 0 or EVEX.128, an attempt to execute the instruction will cause an #UD exception.
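To make the two-bit index fields concrete, here is a minimal sketch using the AVX2 intrinsic _mm256_permute4x64_epi64 (the C name for the VEX imm8 form; the AVX-512 intrinsics are listed under Intel C/C++ Compiler Intrinsic Equivalent below). The values and the _MM_SHUFFLE spelling of the immediate are illustrative only; build with, e.g., gcc -O2 -mavx2.

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    /* _mm256_set_epi64x lists lanes high-to-low: q3, q2, q1, q0. */
    __m256i v = _mm256_set_epi64x(33, 22, 11, 0);

    /* imm8 = _MM_SHUFFLE(0,1,2,3) = 0b00011011 = 0x1B:
       bits [1:0]=3 -> dest q0 gets src q3, [3:2]=2 -> q1 gets q2,
       [5:4]=1 -> q2 gets q1, [7:6]=0 -> q3 gets q0 (a full reversal). */
    __m256i r = _mm256_permute4x64_epi64(v, _MM_SHUFFLE(0, 1, 2, 3));

    long long out[4];
    _mm256_storeu_si256((__m256i *)out, r);
    printf("%lld %lld %lld %lld\n", out[0], out[1], out[2], out[3]);  /* 33 22 11 0 */
    return 0;
}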

Operation

VPERMQ (EVEX - imm8 control forms)
(KL, VL) = (4, 256), (8, 512)

FOR j := 0 TO KL-1
    i := j * 64
    IF (EVEX.b = 1) AND (SRC *is memory*)
        THEN TMP_SRC[i+63:i] := SRC[63:0];
        ELSE TMP_SRC[i+63:i] := SRC[i+63:i];
    FI;
ENDFOR;

TMP_DEST[63:0] := (TMP_SRC[255:0] >> (IMM8[1:0] * 64))[63:0];
TMP_DEST[127:64] := (TMP_SRC[255:0] >> (IMM8[3:2] * 64))[63:0];
TMP_DEST[191:128] := (TMP_SRC[255:0] >> (IMM8[5:4] * 64))[63:0];
TMP_DEST[255:192] := (TMP_SRC[255:0] >> (IMM8[7:6] * 64))[63:0];
IF VL >= 512
    TMP_DEST[319:256] := (TMP_SRC[511:256] >> (IMM8[1:0] * 64))[63:0];
    TMP_DEST[383:320] := (TMP_SRC[511:256] >> (IMM8[3:2] * 64))[63:0];
    TMP_DEST[447:384] := (TMP_SRC[511:256] >> (IMM8[5:4] * 64))[63:0];
    TMP_DEST[511:448] := (TMP_SRC[511:256] >> (IMM8[7:6] * 64))[63:0];
FI;

FOR j := 0 TO KL-1
    i := j * 64
    IF k1[j] OR *no writemask*
        THEN DEST[i+63:i] := TMP_DEST[i+63:i]
        ELSE IF *merging-masking*              ; merging-masking
            THEN *DEST[i+63:i] remains unchanged*
            ELSE DEST[i+63:i] := 0             ; zeroing-masking
        FI;
    FI;
ENDFOR
DEST[MAXVL-1:VL] := 0
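The shift-and-truncate selection above is array indexing in disguise: (TMP_SRC[255:0] >> (IMM8[2j+1:2j] * 64))[63:0] is simply source qword number IMM8[2j+1:2j]. Below is a scalar C model of one 256-bit half, ignoring masking and broadcast; the helper name vpermq_imm8_model is made up for illustration.

#include <stdint.h>

/* Models the imm8 form for one 256-bit half (KL = 4): each two-bit
   field of imm8 picks one of the four source qwords. */
static void vpermq_imm8_model(uint64_t dst[4], const uint64_t src[4], uint8_t imm8) {
    for (int j = 0; j < 4; j++)
        dst[j] = src[(imm8 >> (2 * j)) & 3];
}

In the EVEX.512 form the same selection runs independently on the lower and upper 256-bit halves, and the result is then merged through the writemask loop shown above.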
VPERMQ (EVEX - vector control forms)
(KL, VL) = (4, 256), (8, 512)

FOR j := 0 TO KL-1
    i := j * 64
    IF (EVEX.b = 1) AND (SRC2 *is memory*)
        THEN TMP_SRC2[i+63:i] := SRC2[63:0];
        ELSE TMP_SRC2[i+63:i] := SRC2[i+63:i];
    FI;
ENDFOR;

IF VL = 256
    TMP_DEST[63:0] := (TMP_SRC2[255:0] >> (SRC1[1:0] * 64))[63:0];
    TMP_DEST[127:64] := (TMP_SRC2[255:0] >> (SRC1[65:64] * 64))[63:0];
    TMP_DEST[191:128] := (TMP_SRC2[255:0] >> (SRC1[129:128] * 64))[63:0];
    TMP_DEST[255:192] := (TMP_SRC2[255:0] >> (SRC1[193:192] * 64))[63:0];
FI;
IF VL = 512
    TMP_DEST[63:0] := (TMP_SRC2[511:0] >> (SRC1[2:0] * 64))[63:0];
    TMP_DEST[127:64] := (TMP_SRC2[511:0] >> (SRC1[66:64] * 64))[63:0];
    TMP_DEST[191:128] := (TMP_SRC2[511:0] >> (SRC1[130:128] * 64))[63:0];
    TMP_DEST[255:192] := (TMP_SRC2[511:0] >> (SRC1[194:192] * 64))[63:0];
    TMP_DEST[319:256] := (TMP_SRC2[511:0] >> (SRC1[258:256] * 64))[63:0];
    TMP_DEST[383:320] := (TMP_SRC2[511:0] >> (SRC1[322:320] * 64))[63:0];
    TMP_DEST[447:384] := (TMP_SRC2[511:0] >> (SRC1[386:384] * 64))[63:0];
    TMP_DEST[511:448] := (TMP_SRC2[511:0] >> (SRC1[450:448] * 64))[63:0];
FI;

FOR j := 0 TO KL-1
    i := j * 64
    IF k1[j] OR *no writemask*
        THEN DEST[i+63:i] := TMP_DEST[i+63:i]
        ELSE IF *merging-masking*              ; merging-masking
            THEN *DEST[i+63:i] remains unchanged*
            ELSE DEST[i+63:i] := 0             ; zeroing-masking
        FI;
    FI;
ENDFOR
DEST[MAXVL-1:VL] := 0

VPERMQ (VEX.256 encoded version)
DEST[63:0] := (SRC[255:0] >> (IMM8[1:0] * 64))[63:0];
DEST[127:64] := (SRC[255:0] >> (IMM8[3:2] * 64))[63:0];
DEST[191:128] := (SRC[255:0] >> (IMM8[5:4] * 64))[63:0];
DEST[255:192] := (SRC[255:0] >> (IMM8[7:6] * 64))[63:0];
DEST[MAXVL-1:256] := 0

Intel C/C++ Compiler Intrinsic Equivalent

VPERMQ __m512i _mm512_permutex_epi64( __m512i a, int imm);
VPERMQ __m512i _mm512_mask_permutex_epi64(__m512i s, __mmask8 k, __m512i a, int imm);
VPERMQ __m512i _mm512_maskz_permutex_epi64( __mmask8 k, __m512i a, int imm);
VPERMQ __m512i _mm512_permutexvar_epi64( __m512i a, __m512i b);
VPERMQ __m512i _mm512_mask_permutexvar_epi64(__m512i s, __mmask8 k, __m512i a, __m512i b);
VPERMQ __m512i _mm512_maskz_permutexvar_epi64( __mmask8 k, __m512i a, __m512i b);
VPERMQ __m256i _mm256_permutex_epi64( __m256i a, int imm);
VPERMQ __m256i _mm256_mask_permutex_epi64(__m256i s, __mmask8 k, __m256i a, int imm);
VPERMQ __m256i _mm256_maskz_permutex_epi64( __mmask8 k, __m256i a, int imm);
VPERMQ __m256i _mm256_permutexvar_epi64( __m256i a, __m256i b);
VPERMQ __m256i _mm256_mask_permutexvar_epi64(__m256i s, __mmask8 k, __m256i a, __m256i b);
VPERMQ __m256i _mm256_maskz_permutexvar_epi64( __mmask8 k, __m256i a, __m256i b);
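A usage sketch of the vector-control and merging-masked intrinsics follows; the values and the 0x0F mask are illustrative only. Note that the index vector is the first argument of _mm512_permutexvar_epi64. Requires AVX-512F hardware; build with, e.g., gcc -O2 -mavx512f.

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    /* _mm512_set_epi64 lists lanes high-to-low, so src qword j holds j. */
    __m512i src = _mm512_set_epi64(7, 6, 5, 4, 3, 2, 1, 0);
    __m512i idx = _mm512_set_epi64(0, 1, 2, 3, 4, 5, 6, 7);  /* reversal */

    /* Vector-control form: dest qword j = src qword (idx[j] & 7). */
    __m512i r = _mm512_permutexvar_epi64(idx, src);

    /* Merging-masked form: lanes with a 0 mask bit keep the old value (-1). */
    __m512i old = _mm512_set1_epi64(-1);
    __m512i m = _mm512_mask_permutexvar_epi64(old, 0x0F, idx, src);

    long long out[8];
    _mm512_storeu_si512(out, r);
    for (int j = 0; j < 8; j++) printf("%lld ", out[j]);  /* 7 6 5 4 3 2 1 0 */
    printf("\n");
    _mm512_storeu_si512(out, m);
    for (int j = 0; j < 8; j++) printf("%lld ", out[j]);  /* 7 6 5 4 -1 -1 -1 -1 */
    printf("\n");
    return 0;
}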
SIMD Floating-Point Exceptions

None

Other Exceptions

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions”; additionally:
#UD    If VEX.L = 0.
       If VEX.vvvv != 1111B.
EVEX-encoded instruction, see Table 2-50, “Type E4NF Class Exception Conditions”; additionally:
#UD    If encoded with EVEX.128.
       If EVEX.vvvv != 1111B and with imm8.
This UNOFFICIAL reference was generated from the official Intel® 64 and IA-32 Architectures Software Developer’s Manual by a dumb script. There is no guarantee that some parts aren’t mangled or broken, and it is distributed WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.