PSRLDQ—Shift Double Quadword Right Logical

Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
66 0F 73 /3 ib PSRLDQ xmm1, imm8 | A | V/V | SSE2 | Shift xmm1 right by imm8 bytes while shifting in 0s.
VEX.128.66.0F.WIG 73 /3 ib VPSRLDQ xmm1, xmm2, imm8 | B | V/V | AVX | Shift xmm2 right by imm8 bytes while shifting in 0s.
VEX.256.66.0F.WIG 73 /3 ib VPSRLDQ ymm1, ymm2, imm8 | B | V/V | AVX2 | Shift ymm2 right by imm8 bytes while shifting in 0s.
EVEX.128.66.0F.WIG 73 /3 ib VPSRLDQ xmm1, xmm2/m128, imm8 | C | V/V | AVX512VL AVX512BW | Shift xmm2/m128 right by imm8 bytes while shifting in 0s and store result in xmm1.
EVEX.256.66.0F.WIG 73 /3 ib VPSRLDQ ymm1, ymm2/m256, imm8 | C | V/V | AVX512VL AVX512BW | Shift ymm2/m256 right by imm8 bytes while shifting in 0s and store result in ymm1.
EVEX.512.66.0F.WIG 73 /3 ib VPSRLDQ zmm1, zmm2/m512, imm8 | C | V/V | AVX512BW | Shift zmm2/m512 right by imm8 bytes while shifting in 0s and store result in zmm1.

Instruction Operand Encoding

Op/En | Tuple Type | Operand 1 | Operand 2 | Operand 3 | Operand 4
A | NA | ModRM:r/m (r, w) | imm8 | NA | NA
B | NA | VEX.vvvv (w) | ModRM:r/m (r) | imm8 | NA
C | Full Mem | EVEX.vvvv (w) | ModRM:r/m (r) | imm8 | NA

Description

Shifts the destination operand (first operand) to the right by the number of bytes specified in the count operand (second operand). The empty high-order bytes are cleared (set to all 0s). If the value specified by the count operand is greater than 15, the destination operand is set to all 0s. The count operand is an 8-bit immediate.

In 64-bit mode, when the instruction is not encoded with VEX/EVEX, using a REX prefix in the form of REX.R permits this instruction to access additional registers (XMM8-XMM15).

128-bit Legacy SSE version: The source and destination operands are the same. Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged.

VEX.128 encoded version: The source and destination operands are XMM registers. Bits (MAXVL-1:128) of the destination YMM register are zeroed.

VEX.256 encoded version: The source operand is a YMM register. The destination operand is a YMM register. Bits (MAXVL-1:256) of the corresponding ZMM register are zeroed. The count operand applies independently to both the low and high 128-bit lanes.

EVEX encoded versions: The source operand is a ZMM/YMM/XMM register or a 512/256/128-bit memory location. The destination operand is a ZMM/YMM/XMM register. The count operand applies independently to each 128-bit lane.

Note: VEX.vvvv/EVEX.vvvv encodes the destination register.

Operation

VPSRLDQ (EVEX.512 encoded version)
TEMP := COUNT
IF (TEMP > 15) THEN TEMP := 16; FI
DEST[127:0] := SRC[127:0] >> (TEMP * 8)
DEST[255:128] := SRC[255:128] >> (TEMP * 8)
DEST[383:256] := SRC[383:256] >> (TEMP * 8)
DEST[511:384] := SRC[511:384] >> (TEMP * 8)
DEST[MAXVL-1:512] := 0;

VPSRLDQ (VEX.256 and EVEX.256 encoded versions)
TEMP := COUNT
IF (TEMP > 15) THEN TEMP := 16; FI
DEST[127:0] := SRC[127:0] >> (TEMP * 8)
DEST[255:128] := SRC[255:128] >> (TEMP * 8)
DEST[MAXVL-1:256] := 0;

VPSRLDQ (VEX.128 and EVEX.128 encoded versions)
TEMP := COUNT
IF (TEMP > 15) THEN TEMP := 16; FI
DEST := SRC >> (TEMP * 8)
DEST[MAXVL-1:128] := 0;

PSRLDQ (128-bit Legacy SSE version)
TEMP := COUNT
IF (TEMP > 15) THEN TEMP := 16; FI
DEST := DEST >> (TEMP * 8)
DEST[MAXVL-1:128] (Unmodified)

Intel C/C++ Compiler Intrinsic Equivalents

(V)PSRLDQ __m128i _mm_srli_si128 ( __m128i a, int imm)
VPSRLDQ __m256i _mm256_bsrli_epi128 ( __m256i, const int)
VPSRLDQ __m512i _mm512_bsrli_epi128 ( __m512i, int)

Flags Affected

None.

Numeric Exceptions

None.

Other Exceptions

Non-EVEX-encoded instruction, see Table 2-24, “Type 7 Class Exception Conditions”.
EVEX-encoded instruction, see Exceptions Type E4NF.nb in Table 2-50, “Type E4NF Class Exception Conditions”.

This UNOFFICIAL reference was generated from the official Intel® 64 and IA-32 Architectures Software Developer’s Manual by a dumb script. There is no guarantee that parts aren't mangled or broken, and it is distributed WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.