PHSUBW/PHSUBD — Packed Horizontal Subtract

| Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description |
| --- | --- | --- | --- | --- |
| NP 0F 38 05 /r¹ PHSUBW mm1, mm2/m64 | RM | V/V | SSSE3 | Subtract 16-bit signed integers horizontally, pack to mm1. |
| 66 0F 38 05 /r PHSUBW xmm1, xmm2/m128 | RM | V/V | SSSE3 | Subtract 16-bit signed integers horizontally, pack to xmm1. |
| NP 0F 38 06 /r PHSUBD mm1, mm2/m64 | RM | V/V | SSSE3 | Subtract 32-bit signed integers horizontally, pack to mm1. |
| 66 0F 38 06 /r PHSUBD xmm1, xmm2/m128 | RM | V/V | SSSE3 | Subtract 32-bit signed integers horizontally, pack to xmm1. |
| VEX.128.66.0F38.WIG 05 /r VPHSUBW xmm1, xmm2, xmm3/m128 | RVM | V/V | AVX | Subtract 16-bit signed integers horizontally, pack to xmm1. |
| VEX.128.66.0F38.WIG 06 /r VPHSUBD xmm1, xmm2, xmm3/m128 | RVM | V/V | AVX | Subtract 32-bit signed integers horizontally, pack to xmm1. |
| VEX.256.66.0F38.WIG 05 /r VPHSUBW ymm1, ymm2, ymm3/m256 | RVM | V/V | AVX2 | Subtract 16-bit signed integers horizontally, pack to ymm1. |
| VEX.256.66.0F38.WIG 06 /r VPHSUBD ymm1, ymm2, ymm3/m256 | RVM | V/V | AVX2 | Subtract 32-bit signed integers horizontally, pack to ymm1. |

NOTES:

1. See note in Section 2.4, “AVX and SSE Instruction Exception Specification” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A and Section 22.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A.

Instruction Operand Encoding

| Op/En | Operand 1 | Operand 2 | Operand 3 | Operand 4 |
| --- | --- | --- | --- | --- |
| RM | ModRM:reg (r, w) | ModRM:r/m (r) | NA | NA |
| RVM | ModRM:reg (r, w) | VEX.vvvv (r) | ModRM:r/m (r) | NA |

Description

(V)PHSUBW performs horizontal subtraction on each adjacent pair of 16-bit signed integers by subtracting the most significant word from the least significant word of each pair in the source and destination operands, and packs the signed 16-bit results to the destination operand (first operand). (V)PHSUBD performs horizontal subtraction on each adjacent pair of 32-bit signed integers by subtracting the most significant doubleword from the least significant doubleword of each pair, and packs the signed 32-bit result to the destination operand.

When the source operand is a 128-bit memory operand, the operand must be aligned on a 16-byte boundary or a general-protection exception (#GP) will be generated.

Legacy SSE version: Both operands can be MMX registers. The second source operand can be an MMX register or a 64-bit memory location.

128-bit Legacy SSE version: The first source and destination operands are XMM registers. The second source operand is an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged.

In 64-bit mode, use the REX prefix to access additional registers.

VEX.128 encoded version: The first source and destination operands are XMM registers. The second source operand is an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the destination YMM register are zeroed.

VEX.256 encoded version: The first source and destination operands are YMM registers. The second source operand can be a YMM register or a 256-bit memory location.
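The pairing and packing order is easiest to see through the documented Intel C intrinsics for these instructions: _mm_hsub_epi16 and _mm_hsub_epi32 (tmmintrin.h) for the 128-bit SSSE3 forms, and _mm256_hsub_epi16/_mm256_hsub_epi32 (immintrin.h) for the AVX2 forms. The SDM page itself carries no sample; the sketch below is illustrative only, and the variable names are made up.

```c
/* Minimal sketch of PHSUBW semantics via the SSSE3 intrinsic
 * _mm_hsub_epi16. Build with SSSE3 enabled, e.g. gcc -mssse3. */
#include <stdio.h>
#include <tmmintrin.h>

int main(void)
{
    /* _mm_setr_epi16 places its first argument in element 0. */
    __m128i a = _mm_setr_epi16(1, 2, 3, 4, 5, 6, 7, 8);
    __m128i b = _mm_setr_epi16(10, 20, 30, 40, 50, 60, 70, 80);

    /* PHSUBW: each result is (low word of pair) - (high word of pair).
     * The low four results come from pairs of the first operand, the
     * high four from pairs of the second. PHSUBD (_mm_hsub_epi32)
     * works the same way on doubleword pairs. */
    __m128i r = _mm_hsub_epi16(a, b);

    short out[8];
    _mm_storeu_si128((__m128i *)out, r);
    for (int i = 0; i < 8; i++)
        printf("%d ", out[i]);   /* prints: -1 -1 -1 -1 -10 -10 -10 -10 */
    printf("\n");
    return 0;
}
```

Note that the 256-bit VEX forms apply the same pairing independently within each 128-bit lane, so _mm256_hsub_epi16(a, b) interleaves results from a and b lane by lane rather than placing all of a's pairs before all of b's.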
This UNOFFICIAL reference was generated from the official Intel® 64 and IA-32 Architectures Software Developer’s Manual by a dumb script. There is no guarantee that some parts aren't mangled or broken, and it is distributed WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.