PSLLDQ—Shift Double Quadword Left Logical

Opcode/Instruction

66 0F 73 /7 ib
PSLLDQ xmm1, imm8
  Op/En: A    64/32-bit Mode: V/V    CPUID: SSE2
  Shift xmm1 left by imm8 bytes while shifting in 0s.

VEX.128.66.0F.WIG 73 /7 ib
VPSLLDQ xmm1, xmm2, imm8
  Op/En: B    64/32-bit Mode: V/V    CPUID: AVX
  Shift xmm2 left by imm8 bytes while shifting in 0s and store result in xmm1.

VEX.256.66.0F.WIG 73 /7 ib
VPSLLDQ ymm1, ymm2, imm8
  Op/En: B    64/32-bit Mode: V/V    CPUID: AVX2
  Shift ymm2 left by imm8 bytes while shifting in 0s and store result in ymm1.

EVEX.128.66.0F.WIG 73 /7 ib
VPSLLDQ xmm1, xmm2/m128, imm8
  Op/En: C    64/32-bit Mode: V/V    CPUID: AVX512VL AVX512BW
  Shift xmm2/m128 left by imm8 bytes while shifting in 0s and store result in xmm1.

EVEX.256.66.0F.WIG 73 /7 ib
VPSLLDQ ymm1, ymm2/m256, imm8
  Op/En: C    64/32-bit Mode: V/V    CPUID: AVX512VL AVX512BW
  Shift ymm2/m256 left by imm8 bytes while shifting in 0s and store result in ymm1.

EVEX.512.66.0F.WIG 73 /7 ib
VPSLLDQ zmm1, zmm2/m512, imm8
  Op/En: C    64/32-bit Mode: V/V    CPUID: AVX512BW
  Shift zmm2/m512 left by imm8 bytes while shifting in 0s and store result in zmm1.

Instruction Operand Encoding

Op/En  Tuple Type  Operand 1         Operand 2      Operand 3  Operand 4
A      NA          ModRM:r/m (r, w)  imm8           NA         NA
B      NA          VEX.vvvv (w)      ModRM:r/m (r)  imm8       NA
C      Full Mem    EVEX.vvvv (w)     ModRM:r/m (r)  imm8       NA

Description

Shifts the destination operand (first operand) to the left by the number of bytes specified in the count operand (second operand). The empty low-order bytes are cleared (set to all 0s). If the value specified by the count operand is greater than 15, the destination operand is set to all 0s. The count operand is an 8-bit immediate.

128-bit Legacy SSE version: The source and destination operands are the same. Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged.

VEX.128 encoded version: The source and destination operands are XMM registers. Bits (MAXVL-1:128) of the destination YMM register are zeroed.

VEX.256 encoded version: The source operand is a YMM register. The destination operand is a YMM register. Bits (MAXVL-1:256) of the corresponding ZMM register are zeroed. The count operand applies to both the low and high 128-bit lanes.

EVEX encoded versions: The source operand is a ZMM/YMM/XMM register or a 512/256/128-bit memory location. The destination operand is a ZMM/YMM/XMM register. The count operand applies to each 128-bit lane.

Operation

VPSLLDQ (EVEX.U1.512 encoded version)
TEMP := COUNT
IF (TEMP > 15) THEN TEMP := 16; FI
DEST[127:0] := SRC[127:0] << (TEMP * 8)
DEST[255:128] := SRC[255:128] << (TEMP * 8)
DEST[383:256] := SRC[383:256] << (TEMP * 8)
DEST[511:384] := SRC[511:384] << (TEMP * 8)
DEST[MAXVL-1:512] := 0

VPSLLDQ (VEX.256 and EVEX.256 encoded version)
TEMP := COUNT
IF (TEMP > 15) THEN TEMP := 16; FI
DEST[127:0] := SRC[127:0] << (TEMP * 8)
DEST[255:128] := SRC[255:128] << (TEMP * 8)
DEST[MAXVL-1:256] := 0

VPSLLDQ (VEX.128 and EVEX.128 encoded version)
TEMP := COUNT
IF (TEMP > 15) THEN TEMP := 16; FI
DEST := SRC << (TEMP * 8)
DEST[MAXVL-1:128] := 0

PSLLDQ (128-bit Legacy SSE version)
TEMP := COUNT
IF (TEMP > 15) THEN TEMP := 16; FI
DEST := DEST << (TEMP * 8)
DEST[MAXVL-1:128] (Unmodified)

Intel C/C++ Compiler Intrinsic Equivalent

(V)PSLLDQ: __m128i _mm_slli_si128 (__m128i a, int imm)
VPSLLDQ:   __m256i _mm256_slli_si256 (__m256i a, const int imm)
VPSLLDQ:   __m512i _mm512_bslli_epi128 (__m512i a, const int imm)

Flags Affected

None.

Numeric Exceptions

None.

Other Exceptions

Non-EVEX-encoded instruction, see Table 2-24, “Type 7 Class Exception Conditions”.
EVEX-encoded instruction, see Exceptions Type E4NF.nb in Table 2-50, “Type E4NF Class Exception Conditions”.

This UNOFFICIAL reference was generated from the official Intel® 64 and IA-32 Architectures Software Developer’s Manual by a dumb script. There is no guarantee that some parts aren’t mangled or broken, and it is distributed WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.