PHSUBSW — Packed Horizontal Subtract and Saturate

Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
NP 0F 38 07 /r (1) PHSUBSW mm1, mm2/m64 | RM | V/V | SSSE3 | Subtract 16-bit signed integers horizontally, pack saturated integers to mm1.
66 0F 38 07 /r PHSUBSW xmm1, xmm2/m128 | RM | V/V | SSSE3 | Subtract 16-bit signed integers horizontally, pack saturated integers to xmm1.
VEX.128.66.0F38.WIG 07 /r VPHSUBSW xmm1, xmm2, xmm3/m128 | RVM | V/V | AVX | Subtract 16-bit signed integers horizontally, pack saturated integers to xmm1.
VEX.256.66.0F38.WIG 07 /r VPHSUBSW ymm1, ymm2, ymm3/m256 | RVM | V/V | AVX2 | Subtract 16-bit signed integers horizontally, pack saturated integers to ymm1.

NOTES:
1. See note in Section 2.4, “AVX and SSE Instruction Exception Specification” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A, and Section 22.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A.

Instruction Operand Encoding

Op/En | Operand 1 | Operand 2 | Operand 3 | Operand 4
RM | ModRM:reg (r, w) | ModRM:r/m (r) | NA | NA
RVM | ModRM:reg (r, w) | VEX.vvvv (r) | ModRM:r/m (r) | NA

Description

(V)PHSUBSW performs horizontal subtraction on each adjacent pair of 16-bit signed integers by subtracting the most significant word from the least significant word of each pair in the source and destination operands. The signed, saturated 16-bit results are packed to the destination operand (first operand). When the source operand is a 128-bit memory operand, the operand must be aligned on a 16-byte boundary or a general-protection exception (#GP) will be generated.

Legacy SSE version: Both operands can be MMX registers. The second source operand can be an MMX register or a 64-bit memory location.

128-bit Legacy SSE version: The first source and destination operands are XMM registers. The second source operand is an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged. In 64-bit mode, use the REX prefix to access additional registers.

VEX.128 encoded version: The first source and destination operands are XMM registers. The second source operand is an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the destination YMM register are zeroed.

VEX.256 encoded version: The first source and destination operands are YMM registers. The second source operand can be a YMM register or a 256-bit memory location.

Operation

PHSUBSW (with 64-bit operands)
mm1[15-0] = SaturateToSignedWord(mm1[15-0] - mm1[31-16]);
mm1[31-16] = SaturateToSignedWord(mm1[47-32] - mm1[63-48]);
mm1[47-32] = SaturateToSignedWord(mm2/m64[15-0] - mm2/m64[31-16]);
mm1[63-48] = SaturateToSignedWord(mm2/m64[47-32] - mm2/m64[63-48]);
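As an unofficial illustration (not part of the manual text), the C sketch below models the XMM form of the instruction through the SSSE3 intrinsic _mm_hsubs_epi16, which compilers emit as (V)PHSUBSW, and checks it against a scalar model of SaturateToSignedWord. The MMX and 256-bit forms have the analogous intrinsics _mm_hsubs_pi16 and _mm256_hsubs_epi16. The file name, test values, and helper names are this example's own assumptions, not taken from the manual.

/* phsubsw_demo.c — minimal sketch of PHSUBSW semantics via _mm_hsubs_epi16.
 * Build with, e.g.:  gcc -mssse3 -O2 phsubsw_demo.c
 */
#include <stdint.h>
#include <stdio.h>
#include <tmmintrin.h>   /* SSSE3: _mm_hsubs_epi16 maps to PHSUBSW xmm1, xmm2/m128 */

/* Scalar model of SaturateToSignedWord from the Operation section. */
static int16_t saturate_to_signed_word(int32_t x)
{
    if (x > INT16_MAX) return INT16_MAX;
    if (x < INT16_MIN) return INT16_MIN;
    return (int16_t)x;
}

int main(void)
{
    /* Destination (first operand) and source (second operand) as 16-bit lanes;
     * the first and last pairs of a are chosen to overflow and exercise saturation. */
    int16_t a[8] = { 32767, -1, 100, 300, 5, 5, -32768, 1 };
    int16_t b[8] = { 0, 0, 1, 2, 3, 4, 5, 6 };

    __m128i va = _mm_loadu_si128((const __m128i *)a);
    __m128i vb = _mm_loadu_si128((const __m128i *)b);

    /* Result lanes 0..3 come from pairs of a, lanes 4..7 from pairs of b:
     * each lane is saturate(low word - high word) of one adjacent pair. */
    __m128i vr = _mm_hsubs_epi16(va, vb);

    int16_t r[8];
    _mm_storeu_si128((__m128i *)r, vr);

    for (int i = 0; i < 8; i++) {
        const int16_t *src = (i < 4) ? a : b;
        int j = (i < 4) ? i : i - 4;
        int16_t expect = saturate_to_signed_word((int32_t)src[2 * j] - src[2 * j + 1]);
        printf("lane %d: got %6d, expected %6d\n", i, r[i], expect);
    }
    return 0;
}

In this sketch, lane 0 (32767 - (-1)) and lane 3 (-32768 - 1) would overflow a signed word, so the instruction clamps them to 32767 and -32768 respectively, which is the saturation behavior the Description refers to.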