PHADDW/PHADDD — Packed Horizontal Add

Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
--- | --- | --- | --- | ---
NP 0F 38 01 /r¹ PHADDW mm1, mm2/m64 | RM | V/V | SSSE3 | Add 16-bit integers horizontally, pack to mm1.
66 0F 38 01 /r PHADDW xmm1, xmm2/m128 | RM | V/V | SSSE3 | Add 16-bit integers horizontally, pack to xmm1.
NP 0F 38 02 /r PHADDD mm1, mm2/m64 | RM | V/V | SSSE3 | Add 32-bit integers horizontally, pack to mm1.
66 0F 38 02 /r PHADDD xmm1, xmm2/m128 | RM | V/V | SSSE3 | Add 32-bit integers horizontally, pack to xmm1.
VEX.128.66.0F38.WIG 01 /r VPHADDW xmm1, xmm2, xmm3/m128 | RVM | V/V | AVX | Add 16-bit integers horizontally, pack to xmm1.
VEX.128.66.0F38.WIG 02 /r VPHADDD xmm1, xmm2, xmm3/m128 | RVM | V/V | AVX | Add 32-bit integers horizontally, pack to xmm1.
VEX.256.66.0F38.WIG 01 /r VPHADDW ymm1, ymm2, ymm3/m256 | RVM | V/V | AVX2 | Add 16-bit signed integers horizontally, pack to ymm1.
VEX.256.66.0F38.WIG 02 /r VPHADDD ymm1, ymm2, ymm3/m256 | RVM | V/V | AVX2 | Add 32-bit signed integers horizontally, pack to ymm1.

NOTES:
1. See note in Section 2.4, “AVX and SSE Instruction Exception Specification” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A, and Section 22.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A.

Instruction Operand Encoding

Op/En | Operand 1 | Operand 2 | Operand 3 | Operand 4
--- | --- | --- | --- | ---
RM | ModRM:reg (r, w) | ModRM:r/m (r) | NA | NA
RVM | ModRM:reg (w) | VEX.vvvv (r) | ModRM:r/m (r) | NA

Description

(V)PHADDW adds two adjacent 16-bit signed integers horizontally from the source and destination operands and packs the 16-bit signed results to the destination operand (first operand). (V)PHADDD adds two adjacent 32-bit signed integers horizontally from the source and destination operands and packs the 32-bit signed results to the destination operand (first operand). When the source operand is a 128-bit memory operand, the operand must be aligned on a 16-byte boundary or a general-protection exception (#GP) will be generated.

Note that these instructions can operate on either unsigned or signed (two’s complement notation) integers; however, they do not set bits in the EFLAGS register to indicate overflow and/or a carry. To prevent undetected overflow conditions, software must control the ranges of the values operated on.

Legacy SSE instructions: Both operands can be MMX registers. The second source operand can be an MMX register or a 64-bit memory location.

128-bit Legacy SSE version: The first source and destination operands are XMM registers. The second source operand can be an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged.

In 64-bit mode, use the REX prefix to access additional registers.
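The packing order is easiest to see in a scalar model. The sketch below is not from the manual; the function name phaddw_model and the test values are illustrative. It assumes the 128-bit PHADDW behavior described above: the low four words of the destination receive the adjacent-pair sums of the first operand, the high four words receive those of the second operand, each sum wraps modulo 2^16, and no flags are reported.

```c
/* Minimal scalar sketch of 128-bit PHADDW (assumed packing order; illustrative names/values). */
#include <stdint.h>
#include <stdio.h>

/* dst words 0..3 <- sums of adjacent word pairs of src1,
 * dst words 4..7 <- sums of adjacent word pairs of src2.
 * Each sum wraps modulo 2^16; no overflow or carry is reported. */
static void phaddw_model(int16_t dst[8], const int16_t src1[8], const int16_t src2[8])
{
    int16_t tmp[8];
    for (int i = 0; i < 4; i++)
        tmp[i] = (int16_t)((uint16_t)src1[2 * i] + (uint16_t)src1[2 * i + 1]);
    for (int i = 0; i < 4; i++)
        tmp[4 + i] = (int16_t)((uint16_t)src2[2 * i] + (uint16_t)src2[2 * i + 1]);
    for (int i = 0; i < 8; i++)
        dst[i] = tmp[i];
}

int main(void)
{
    int16_t a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    int16_t b[8] = {10, 20, 30, 40, 50, 60, 70, 80};
    int16_t r[8];
    phaddw_model(r, a, b);
    for (int i = 0; i < 8; i++)
        printf("%d ", r[i]);   /* expected: 3 7 11 15 30 70 110 150 */
    printf("\n");
    return 0;
}
```

PHADDD follows the same pattern with two 32-bit pair sums per operand instead of four 16-bit ones.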
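From C, the SSSE3 intrinsics _mm_hadd_epi16 and _mm_hadd_epi32 in &lt;tmmintrin.h&gt; are the usual way to reach these instructions. The usage sketch below is a hedged example, not taken from the manual: the input values and the -mssse3 build flag are assumptions, and whether the compiler actually emits PHADDW/PHADDD depends on the toolchain and optimization settings.

```c
/* Usage sketch for the SSSE3 horizontal-add intrinsics.
 * Build (assumption): cc -mssse3 hadd.c */
#include <stdio.h>
#include <stdint.h>
#include <tmmintrin.h>

int main(void)
{
    __m128i a = _mm_setr_epi16(1, 2, 3, 4, 5, 6, 7, 8);
    __m128i b = _mm_setr_epi16(10, 20, 30, 40, 50, 60, 70, 80);
    __m128i w = _mm_hadd_epi16(a, b);          /* maps to PHADDW */

    __m128i c = _mm_setr_epi32(100, 200, 300, 400);
    __m128i d = _mm_setr_epi32(-5, 5, 70000, 30000);
    __m128i q = _mm_hadd_epi32(c, d);          /* maps to PHADDD */

    int16_t wv[8];
    int32_t qv[4];
    _mm_storeu_si128((__m128i *)wv, w);
    _mm_storeu_si128((__m128i *)qv, q);

    for (int i = 0; i < 8; i++)
        printf("%d ", wv[i]);                  /* expected: 3 7 11 15 30 70 110 150 */
    printf("\n");
    for (int i = 0; i < 4; i++)
        printf("%d ", (int)qv[i]);             /* expected: 300 700 0 100000 */
    printf("\n");
    return 0;
}
```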
This UNOFFICIAL reference was generated from the official Intel® 64 and IA-32 Architectures Software Developer’s Manual by a dumb script. There is no guarantee that some parts aren't mangled or broken, and it is distributed WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.