VPRORD/VPRORVD/VPRORQ/VPRORVQ—Bit Rotate Right

Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
EVEX.128.66.0F38.W0 14 /r VPRORVD xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcst | B | V/V | AVX512VL AVX512F | Rotate doublewords in xmm2 right by count in the corresponding element of xmm3/m128/m32bcst, store result using writemask k1.
EVEX.128.66.0F.W0 72 /0 ib VPRORD xmm1 {k1}{z}, xmm2/m128/m32bcst, imm8 | A | V/V | AVX512VL AVX512F | Rotate doublewords in xmm2/m128/m32bcst right by imm8, store result using writemask k1.
EVEX.128.66.0F38.W1 14 /r VPRORVQ xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcst | B | V/V | AVX512VL AVX512F | Rotate quadwords in xmm2 right by count in the corresponding element of xmm3/m128/m64bcst, store result using writemask k1.
EVEX.128.66.0F.W1 72 /0 ib VPRORQ xmm1 {k1}{z}, xmm2/m128/m64bcst, imm8 | A | V/V | AVX512VL AVX512F | Rotate quadwords in xmm2/m128/m64bcst right by imm8, store result using writemask k1.
EVEX.256.66.0F38.W0 14 /r VPRORVD ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcst | B | V/V | AVX512VL AVX512F | Rotate doublewords in ymm2 right by count in the corresponding element of ymm3/m256/m32bcst, store result using writemask k1.
EVEX.256.66.0F.W0 72 /0 ib VPRORD ymm1 {k1}{z}, ymm2/m256/m32bcst, imm8 | A | V/V | AVX512VL AVX512F | Rotate doublewords in ymm2/m256/m32bcst right by imm8, store result using writemask k1.
EVEX.256.66.0F38.W1 14 /r VPRORVQ ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcst | B | V/V | AVX512VL AVX512F | Rotate quadwords in ymm2 right by count in the corresponding element of ymm3/m256/m64bcst, store result using writemask k1.
EVEX.256.66.0F.W1 72 /0 ib VPRORQ ymm1 {k1}{z}, ymm2/m256/m64bcst, imm8 | A | V/V | AVX512VL AVX512F | Rotate quadwords in ymm2/m256/m64bcst right by imm8, store result using writemask k1.
EVEX.512.66.0F38.W0 14 /r VPRORVD zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst | B | V/V | AVX512F | Rotate doublewords in zmm2 right by count in the corresponding element of zmm3/m512/m32bcst, store result using writemask k1.
EVEX.512.66.0F.W0 72 /0 ib VPRORD zmm1 {k1}{z}, zmm2/m512/m32bcst, imm8 | A | V/V | AVX512F | Rotate doublewords in zmm2/m512/m32bcst right by imm8, store result using writemask k1.
EVEX.512.66.0F38.W1 14 /r VPRORVQ zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst | B | V/V | AVX512F | Rotate quadwords in zmm2 right by count in the corresponding element of zmm3/m512/m64bcst, store result using writemask k1.
EVEX.512.66.0F.W1 72 /0 ib VPRORQ zmm1 {k1}{z}, zmm2/m512/m64bcst, imm8 | A | V/V | AVX512F | Rotate quadwords in zmm2/m512/m64bcst right by imm8, store result using writemask k1.

Instruction Operand Encoding

Op/En | Tuple Type | Operand 1 | Operand 2 | Operand 3 | Operand 4
A | Full | VEX.vvvv (w) | ModRM:r/m (r) | Imm8 | NA
B | Full | ModRM:reg (w) | EVEX.vvvv (r) | ModRM:r/m (r) | NA

Description

Rotates the bits in the individual data elements (doublewords or quadwords) in the first source operand to the right by the number of bits specified in the count operand. If the value specified by the count operand is greater than 31 (for doublewords) or 63 (for quadwords), then the count operand modulo the data size (32 or 64) is used.

EVEX.128 encoded version: The destination operand is an XMM register. The source operand is an XMM register or a memory location (for the immediate form). The count operand can come either from an XMM register, a memory location, or an 8-bit immediate. Bits (MAXVL-1:128) of the corresponding ZMM register are zeroed.

EVEX.256 encoded version: The destination operand is a YMM register. The source operand is a YMM register or a memory location (for the immediate form). The count operand can come either from a YMM register, a memory location, or an 8-bit immediate. Bits (MAXVL-1:256) of the corresponding ZMM register are zeroed.

EVEX.512 encoded version: The destination operand is a ZMM register updated according to the writemask. For the immediate form, the source operand can be a ZMM register, a 512-bit memory location, or a 512-bit vector broadcast from a 32/64-bit memory location, and the count operand is an 8-bit immediate. For the variable form, the first source operand (the second operand) is a ZMM register and the count operand (the third operand) is a ZMM register, a 512-bit memory location, or a 512-bit vector broadcast from a 32/64-bit memory location.

Operation

RIGHT_ROTATE_DWORDS(SRC, COUNT_SRC)
COUNT := COUNT_SRC modulo 32;
DEST[31:0] := (SRC >> COUNT) | (SRC << (32 - COUNT));

RIGHT_ROTATE_QWORDS(SRC, COUNT_SRC)
COUNT := COUNT_SRC modulo 64;
DEST[63:0] := (SRC >> COUNT) | (SRC << (64 - COUNT));

VPRORD (EVEX encoded versions)
(KL, VL) = (4, 128), (8, 256), (16, 512)
FOR j := 0 TO KL-1
    i := j * 32
    IF k1[j] OR *no writemask* THEN
        IF (EVEX.b = 1) AND (SRC1 *is memory*)
            THEN DEST[i+31:i] := RIGHT_ROTATE_DWORDS(SRC1[31:0], imm8)
            ELSE DEST[i+31:i] := RIGHT_ROTATE_DWORDS(SRC1[i+31:i], imm8)
        FI;
    ELSE
        IF *merging-masking*            ; merging-masking
            THEN *DEST[i+31:i] remains unchanged*
            ELSE *zeroing-masking*      ; zeroing-masking
                DEST[i+31:i] := 0
        FI
    FI;
ENDFOR
DEST[MAXVL-1:VL] := 0
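For readers mapping the pseudocode to C, the following is a minimal scalar sketch of RIGHT_ROTATE_DWORDS (not part of the manual). Note that a literal translation of (SRC << (32 - COUNT)) would shift a 32-bit value by 32 when COUNT is 0, which is undefined behavior in C, so the sketch masks the left-shift amount; the result matches the pseudocode for every count.

#include <stdint.h>

/* Scalar model of RIGHT_ROTATE_DWORDS from the Operation section.
 * The left-shift amount is masked to 0..31 because shifting a
 * uint32_t by 32 is undefined behavior in C; for COUNT = 0 the
 * masked form returns SRC unchanged, as the rotate should.
 */
static uint32_t right_rotate_dword(uint32_t src, uint32_t count_src)
{
    uint32_t count = count_src % 32;   /* COUNT := COUNT_SRC modulo 32 */
    return (src >> count) | (src << ((32u - count) & 31u));
}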

VPRORVD (EVEX encoded versions)
(KL, VL) = (4, 128), (8, 256), (16, 512)
FOR j := 0 TO KL-1
    i := j * 32
    IF k1[j] OR *no writemask* THEN
        IF (EVEX.b = 1) AND (SRC2 *is memory*)
            THEN DEST[i+31:i] := RIGHT_ROTATE_DWORDS(SRC1[i+31:i], SRC2[31:0])
            ELSE DEST[i+31:i] := RIGHT_ROTATE_DWORDS(SRC1[i+31:i], SRC2[i+31:i])
        FI;
    ELSE
        IF *merging-masking*            ; merging-masking
            THEN *DEST[i+31:i] remains unchanged*
            ELSE *zeroing-masking*      ; zeroing-masking
                DEST[i+31:i] := 0
        FI
    FI;
ENDFOR
DEST[MAXVL-1:VL] := 0

VPRORQ (EVEX encoded versions)
(KL, VL) = (2, 128), (4, 256), (8, 512)
FOR j := 0 TO KL-1
    i := j * 64
    IF k1[j] OR *no writemask* THEN
        IF (EVEX.b = 1) AND (SRC1 *is memory*)
            THEN DEST[i+63:i] := RIGHT_ROTATE_QWORDS(SRC1[63:0], imm8)
            ELSE DEST[i+63:i] := RIGHT_ROTATE_QWORDS(SRC1[i+63:i], imm8)
        FI;
    ELSE
        IF *merging-masking*            ; merging-masking
            THEN *DEST[i+63:i] remains unchanged*
            ELSE *zeroing-masking*      ; zeroing-masking
                DEST[i+63:i] := 0
        FI
    FI;
ENDFOR
DEST[MAXVL-1:VL] := 0
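The difference between the immediate and variable forms is easiest to see through the intrinsics listed later in this page. A minimal C sketch (not part of the manual), assuming an AVX512F-capable compiler (e.g. built with -mavx512f); the function and parameter names are illustrative:

#include <immintrin.h>
#include <stdint.h>

/* Sketch: the immediate form (_mm512_ror_epi32, VPRORD) versus the
 * per-element variable form (_mm512_rorv_epi32, VPRORVD). Counts are
 * taken modulo the element width, so a count of 39 rotates
 * doublewords by 39 mod 32 = 7.
 */
void ror_demo(const uint32_t *in, uint32_t *out_imm, uint32_t *out_var)
{
    __m512i v     = _mm512_loadu_si512(in);
    __m512i r_imm = _mm512_ror_epi32(v, 7);      /* imm8 count of 7   */
    __m512i cnt   = _mm512_set1_epi32(39);       /* 39 mod 32 = 7     */
    __m512i r_var = _mm512_rorv_epi32(v, cnt);   /* same result lanes */
    _mm512_storeu_si512(out_imm, r_imm);
    _mm512_storeu_si512(out_var, r_var);
}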

VPRORVQ (EVEX encoded versions)
(KL, VL) = (2, 128), (4, 256), (8, 512)
FOR j := 0 TO KL-1
    i := j * 64
    IF k1[j] OR *no writemask* THEN
        IF (EVEX.b = 1) AND (SRC2 *is memory*)
            THEN DEST[i+63:i] := RIGHT_ROTATE_QWORDS(SRC1[i+63:i], SRC2[63:0])
            ELSE DEST[i+63:i] := RIGHT_ROTATE_QWORDS(SRC1[i+63:i], SRC2[i+63:i])
        FI;
    ELSE
        IF *merging-masking*            ; merging-masking
            THEN *DEST[i+63:i] remains unchanged*
            ELSE *zeroing-masking*      ; zeroing-masking
                DEST[i+63:i] := 0
        FI
    FI;
ENDFOR
DEST[MAXVL-1:VL] := 0

Intel C/C++ Compiler Intrinsic Equivalent

VPRORD __m512i _mm512_ror_epi32(__m512i a, int imm);
VPRORD __m512i _mm512_mask_ror_epi32(__m512i a, __mmask16 k, __m512i b, int imm);
VPRORD __m512i _mm512_maskz_ror_epi32(__mmask16 k, __m512i a, int imm);
VPRORD __m256i _mm256_ror_epi32(__m256i a, int imm);
VPRORD __m256i _mm256_mask_ror_epi32(__m256i a, __mmask8 k, __m256i b, int imm);
VPRORD __m256i _mm256_maskz_ror_epi32(__mmask8 k, __m256i a, int imm);
VPRORD __m128i _mm_ror_epi32(__m128i a, int imm);
VPRORD __m128i _mm_mask_ror_epi32(__m128i a, __mmask8 k, __m128i b, int imm);
VPRORD __m128i _mm_maskz_ror_epi32(__mmask8 k, __m128i a, int imm);
VPRORQ __m512i _mm512_ror_epi64(__m512i a, int imm);
VPRORQ __m512i _mm512_mask_ror_epi64(__m512i a, __mmask8 k, __m512i b, int imm);
VPRORQ __m512i _mm512_maskz_ror_epi64(__mmask8 k, __m512i a, int imm);
VPRORQ __m256i _mm256_ror_epi64(__m256i a, int imm);
VPRORQ __m256i _mm256_mask_ror_epi64(__m256i a, __mmask8 k, __m256i b, int imm);
VPRORQ __m256i _mm256_maskz_ror_epi64(__mmask8 k, __m256i a, int imm);
VPRORQ __m128i _mm_ror_epi64(__m128i a, int imm);
VPRORQ __m128i _mm_mask_ror_epi64(__m128i a, __mmask8 k, __m128i b, int imm);
VPRORQ __m128i _mm_maskz_ror_epi64(__mmask8 k, __m128i a, int imm);
VPRORVD __m512i _mm512_rorv_epi32(__m512i a, __m512i cnt);
VPRORVD __m512i _mm512_mask_rorv_epi32(__m512i a, __mmask16 k, __m512i b, __m512i cnt);
VPRORVD __m512i _mm512_maskz_rorv_epi32(__mmask16 k, __m512i a, __m512i cnt);
VPRORVD __m256i _mm256_rorv_epi32(__m256i a, __m256i cnt);
VPRORVD __m256i _mm256_mask_rorv_epi32(__m256i a, __mmask8 k, __m256i b, __m256i cnt);
VPRORVD __m256i _mm256_maskz_rorv_epi32(__mmask8 k, __m256i a, __m256i cnt);
VPRORVD __m128i _mm_rorv_epi32(__m128i a, __m128i cnt);
VPRORVD __m128i _mm_mask_rorv_epi32(__m128i a, __mmask8 k, __m128i b, __m128i cnt);
VPRORVD __m128i _mm_maskz_rorv_epi32(__mmask8 k, __m128i a, __m128i cnt);
VPRORVQ __m512i _mm512_rorv_epi64(__m512i a, __m512i cnt);
VPRORVQ __m512i _mm512_mask_rorv_epi64(__m512i a, __mmask8 k, __m512i b, __m512i cnt);
VPRORVQ __m512i _mm512_maskz_rorv_epi64(__mmask8 k, __m512i a, __m512i cnt);
VPRORVQ __m256i _mm256_rorv_epi64(__m256i a, __m256i cnt);
VPRORVQ __m256i _mm256_mask_rorv_epi64(__m256i a, __mmask8 k, __m256i b, __m256i cnt);
VPRORVQ __m256i _mm256_maskz_rorv_epi64(__mmask8 k, __m256i a, __m256i cnt);
VPRORVQ __m128i _mm_rorv_epi64(__m128i a, __m128i cnt);
VPRORVQ __m128i _mm_mask_rorv_epi64(__m128i a, __mmask8 k, __m128i b, __m128i cnt);
VPRORVQ __m128i _mm_maskz_rorv_epi64(__mmask8 k, __m128i a, __m128i cnt);

SIMD Floating-Point Exceptions

None

Other Exceptions

EVEX-encoded instruction, see Table 2-49, "Type E4 Class Exception Conditions".
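The merging- and zeroing-masking branches in the Operation section correspond to the _mask_ and _maskz_ intrinsic variants listed above. A minimal sketch (not part of the manual), assuming AVX512F; function names and the mask value are illustrative:

#include <immintrin.h>

/* Writemask behavior of VPRORD via the intrinsics above. The mask
 * 0x5555 selects even-indexed doublewords. Merging form: dwords with
 * a 0 mask bit are copied from `src`. Zeroing form: they become 0.
 */
__m512i ror3_merge_even(__m512i src, __m512i v)
{
    return _mm512_mask_ror_epi32(src, 0x5555, v, 3);
}

__m512i ror3_zero_even(__m512i v)
{
    return _mm512_maskz_ror_epi32(0x5555, v, 3);
}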

This UNOFFICIAL reference was generated from the official Intel® 64 and IA-32 Architectures Software Developer's Manual by a dumb script. There is no guarantee that some parts aren't mangled or broken. It is distributed WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.