PSHUFB — Packed Shuffle Bytes

Opcode/Instruction                                               | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
NP 0F 38 00 /r (1) PSHUFB mm1, mm2/m64                           | A     | V/V                    | SSSE3              | Shuffle bytes in mm1 according to contents of mm2/m64.
66 0F 38 00 /r PSHUFB xmm1, xmm2/m128                            | A     | V/V                    | SSSE3              | Shuffle bytes in xmm1 according to contents of xmm2/m128.
VEX.128.66.0F38.WIG 00 /r VPSHUFB xmm1, xmm2, xmm3/m128          | B     | V/V                    | AVX                | Shuffle bytes in xmm2 according to contents of xmm3/m128.
VEX.256.66.0F38.WIG 00 /r VPSHUFB ymm1, ymm2, ymm3/m256          | B     | V/V                    | AVX2               | Shuffle bytes in ymm2 according to contents of ymm3/m256.
EVEX.128.66.0F38.WIG 00 /r VPSHUFB xmm1 {k1}{z}, xmm2, xmm3/m128 | C     | V/V                    | AVX512VL AVX512BW  | Shuffle bytes in xmm2 according to contents of xmm3/m128 under write mask k1.
EVEX.256.66.0F38.WIG 00 /r VPSHUFB ymm1 {k1}{z}, ymm2, ymm3/m256 | C     | V/V                    | AVX512VL AVX512BW  | Shuffle bytes in ymm2 according to contents of ymm3/m256 under write mask k1.
EVEX.512.66.0F38.WIG 00 /r VPSHUFB zmm1 {k1}{z}, zmm2, zmm3/m512 | C     | V/V                    | AVX512BW           | Shuffle bytes in zmm2 according to contents of zmm3/m512 under write mask k1.

NOTES:
1. See note in Section 2.4, “AVX and SSE Instruction Exception Specification” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A and Section 22.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A.

Instruction Operand Encoding

Op/En | Tuple Type | Operand 1        | Operand 2     | Operand 3     | Operand 4
A     | NA         | ModRM:reg (r, w) | ModRM:r/m (r) | NA            | NA
B     | NA         | ModRM:reg (w)    | VEX.vvvv (r)  | ModRM:r/m (r) | NA
C     | Full Mem   | ModRM:reg (w)    | EVEX.vvvv (r) | ModRM:r/m (r) | NA

Description

PSHUFB performs an in-place shuffle of the bytes in the destination operand (the first operand) according to the shuffle control mask in the source operand (the second operand). The instruction permutes the data in the destination operand, leaving the shuffle mask unaffected. If the most significant bit (bit[7]) of a byte in the shuffle control mask is set, constant zero is written to the corresponding result byte. Each byte in the shuffle control mask forms an index used to permute the corresponding byte in the destination operand. The value of each index is the least significant 4 bits (128-bit operation) or 3 bits (64-bit operation) of the shuffle control byte. When the source operand is a 128-bit memory operand, the operand must be aligned on a 16-byte boundary or a general-protection exception (#GP) will be generated.

In 64-bit mode and not encoded with VEX/EVEX, use the REX prefix to access the XMM8-XMM15 registers.

Legacy SSE version 64-bit operand: Both operands can be MMX registers.

128-bit Legacy SSE version: The first source operand and the destination operand are the same. Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged.

VEX.128 encoded version: The destination operand is the first operand, the first source operand is the second operand, and the second source operand is the third operand. Bits (MAXVL-1:128) of the destination YMM register are zeroed.

VEX.256 encoded version: Bits (255:128) of the destination YMM register store the 16-byte shuffle result of the upper 16 bytes of the first source operand, using the upper 16 bytes of the second source operand as the control mask. The lower and upper 16-byte halves are thus shuffled independently.

EVEX encoded version: The second source operand is a ZMM/YMM/XMM register or a 512/256/128-bit memory location. The first source operand and the destination operand are ZMM/YMM/XMM registers. The destination is conditionally updated with the writemask k1.
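The byte-level semantics above lend themselves to a small reference model. The following C sketch is not taken from the manual (the function name pshufb128_ref is made up here); it models the 128-bit legacy behavior, where bit[7] of a mask byte forces zero and the low 4 bits otherwise select a source byte. The 64-bit (MMX) form works the same way over 8 bytes, using only the low 3 bits of each index.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical reference model of the 128-bit PSHUFB semantics.
 * The legacy instruction overwrites its first operand; a separate
 * dst array is used here only for clarity.                        */
static void pshufb128_ref(const uint8_t src[16], const uint8_t mask[16],
                          uint8_t dst[16])
{
    for (int i = 0; i < 16; i++) {
        if (mask[i] & 0x80)
            dst[i] = 0;                   /* bit[7] set: constant zero   */
        else
            dst[i] = src[mask[i] & 0x0F]; /* low 4 bits form the index   */
    }
}

int main(void)
{
    uint8_t src[16], mask[16], dst[16];
    for (int i = 0; i < 16; i++) {
        src[i]  = (uint8_t)(0x10 + i);
        mask[i] = (uint8_t)(15 - i);      /* reverse the byte order      */
    }
    mask[0] = 0x80;                       /* MSB set: byte 0 becomes 0   */
    pshufb128_ref(src, mask, dst);
    for (int i = 0; i < 16; i++)
        printf("%02x ", dst[i]);          /* prints: 00 1e 1d ... 11 10  */
    printf("\n");
    return 0;
}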
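In practice, compilers expose this instruction family through intrinsics; the 128-bit form corresponds to _mm_shuffle_epi8, declared in <tmmintrin.h>. A small usage sketch, assuming an SSSE3-capable target (compile with -mssse3 or equivalent):

#include <stdio.h>
#include <tmmintrin.h>   /* SSSE3: _mm_shuffle_epi8 */

int main(void)
{
    __m128i data = _mm_setr_epi8(0, 1, 2, 3, 4, 5, 6, 7,
                                 8, 9, 10, 11, 12, 13, 14, 15);
    /* A control mask listing indices 15..0 reverses the byte order. */
    __m128i ctrl = _mm_setr_epi8(15, 14, 13, 12, 11, 10, 9, 8,
                                 7, 6, 5, 4, 3, 2, 1, 0);
    __m128i out  = _mm_shuffle_epi8(data, ctrl);

    unsigned char b[16];
    _mm_storeu_si128((__m128i *)b, out);
    for (int i = 0; i < 16; i++)
        printf("%u ", b[i]);              /* prints: 15 14 ... 1 0 */
    printf("\n");
    return 0;
}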
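As the VEX.256 paragraph implies, the 256-bit form shuffles the lower and upper 128-bit lanes independently; an index byte never selects data from the other lane. A short demonstration of this, using the AVX2 intrinsic _mm256_shuffle_epi8 from <immintrin.h> (compile with -mavx2):

#include <stdio.h>
#include <immintrin.h>   /* AVX2: _mm256_shuffle_epi8 */

int main(void)
{
    __m256i data = _mm256_setr_epi8(
         0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15,
        16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31);
    /* Index 0 everywhere: the low lane picks its own byte 0 (value 0),
     * the high lane picks byte 0 of the high lane (value 16), not of
     * the full register.                                              */
    __m256i ctrl = _mm256_setzero_si256();
    __m256i out  = _mm256_shuffle_epi8(data, ctrl);

    unsigned char b[32];
    _mm256_storeu_si256((__m256i *)b, out);
    printf("low lane byte: %u, high lane byte: %u\n", b[0], b[16]);
    /* prints: low lane byte: 0, high lane byte: 16 */
    return 0;
}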
This UNOFFICIAL reference was generated from the official Intel® 64 and IA-32 Architectures Software Developer’s Manual by a dumb script. There is no guarantee that some parts aren't mangled or broken. It is distributed WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.