MOVHPS—Move High Packed Single-Precision Floating-Point Values

Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
NP 0F 16 /r MOVHPS xmm1, m64 | A | V/V | SSE | Move two packed single-precision floating-point values from m64 to high quadword of xmm1.
VEX.128.0F.WIG 16 /r VMOVHPS xmm2, xmm1, m64 | B | V/V | AVX | Merge two packed single-precision floating-point values from m64 and the low quadword of xmm1.
EVEX.128.0F.W0 16 /r VMOVHPS xmm2, xmm1, m64 | D | V/V | AVX512F | Merge two packed single-precision floating-point values from m64 and the low quadword of xmm1.
NP 0F 17 /r MOVHPS m64, xmm1 | C | V/V | SSE | Move two packed single-precision floating-point values from high quadword of xmm1 to m64.
VEX.128.0F.WIG 17 /r VMOVHPS m64, xmm1 | C | V/V | AVX | Move two packed single-precision floating-point values from high quadword of xmm1 to m64.
EVEX.128.0F.W0 17 /r VMOVHPS m64, xmm1 | E | V/V | AVX512F | Move two packed single-precision floating-point values from high quadword of xmm1 to m64.

Instruction Operand Encoding

Op/En | Tuple Type | Operand 1 | Operand 2 | Operand 3 | Operand 4
A | NA | ModRM:reg (r, w) | ModRM:r/m (r) | NA | NA
B | NA | ModRM:reg (w) | VEX.vvvv (r) | ModRM:r/m (r) | NA
C | NA | ModRM:r/m (w) | ModRM:reg (r) | NA | NA
D | Tuple2 | ModRM:reg (w) | EVEX.vvvv (r) | ModRM:r/m (r) | NA
E | Tuple2 | ModRM:r/m (w) | ModRM:reg (r) | NA | NA

Description

This instruction cannot be used for register to register or memory to memory moves.

128-bit Legacy SSE load:
Moves two packed single-precision floating-point values from the source 64-bit memory operand and stores them in the high 64 bits of the destination XMM register. The lower 64 bits of the XMM register are preserved. Bits (MAXVL-1:128) of the corresponding destination register are preserved.

VEX.128 and EVEX encoded load:
Loads two single-precision floating-point values from the source 64-bit memory operand (the third operand) and stores them in the upper 64 bits of the destination XMM register (first operand). The low 64 bits from the first source operand (the second operand) are copied to the lower 64 bits of the destination. Bits (MAXVL-1:128) of the corresponding destination register are zeroed.

128-bit store:
Stores two packed single-precision floating-point values from the high 64 bits of the XMM register source (second operand) to the 64-bit memory location (first operand).

Note: VMOVHPS (store) (VEX.128.0F 17 /r) is legal and has the same behavior as the existing 0F 17 store. For VMOVHPS (store), VEX.vvvv and EVEX.vvvv are reserved and must be 1111b; otherwise the instruction will #UD.

If VMOVHPS is encoded with VEX.L or EVEX.L’L = 1, an attempt to execute the instruction will cause an #UD exception.

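As a usage sketch of the load/store behavior described above (not part of the Intel manual; it assumes an SSE-capable x86 compiler providing <xmmintrin.h>, and the variable names are illustrative only):

#include <stdio.h>
#include <xmmintrin.h>   /* SSE intrinsics: _mm_loadh_pi, _mm_storeh_pi */

int main(void) {
    float low[2]  = { 1.0f, 2.0f };   /* low quadword contents, to be preserved */
    float high[2] = { 3.0f, 4.0f };   /* two floats to load into the high quadword */

    /* Start with the low quadword populated and the high quadword zeroed. */
    __m128 v = _mm_setr_ps(low[0], low[1], 0.0f, 0.0f);

    /* MOVHPS load form: fill bits 127:64 from the 64-bit memory operand,
       leaving bits 63:0 unchanged. */
    v = _mm_loadh_pi(v, (const __m64 *)high);

    /* MOVHPS store form: write bits 127:64 (two floats) back to memory. */
    float out[2];
    _mm_storeh_pi((__m64 *)out, v);

    printf("high quadword stored back: %.1f %.1f\n", out[0], out[1]);   /* 3.0 4.0 */
    return 0;
}
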
Operation

MOVHPS (128-bit Legacy SSE load)
DEST[63:0] (Unmodified)
DEST[127:64] := SRC[63:0]
DEST[MAXVL-1:128] (Unmodified)

VMOVHPS (VEX.128 and EVEX encoded load)
DEST[63:0] := SRC1[63:0]
DEST[127:64] := SRC2[63:0]
DEST[MAXVL-1:128] := 0

VMOVHPS (store)
DEST[63:0] := SRC[127:64]

Intel C/C++ Compiler Intrinsic Equivalent

MOVHPS __m128 _mm_loadh_pi ( __m128 a, __m64 *p)
MOVHPS void _mm_storeh_pi (__m64 *p, __m128 a)

SIMD Floating-Point Exceptions

None

Other Exceptions

Non-EVEX-encoded instruction, see Table 2-22, “Type 5 Class Exception Conditions”; additionally:
#UD If VEX.L = 1.
EVEX-encoded instruction, see Table 2-57, “Type E9NF Class Exception Conditions”.
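
For readers less used to the bit-range notation in the Operation section, the following is a rough scalar model of the VEX/EVEX-encoded load and the store form, offered as a sketch only (the struct and function names are invented for illustration and are not an architectural definition):

#include <stdint.h>

/* Model a 128-bit XMM register as two 64-bit halves. */
typedef struct {
    uint64_t lo;   /* bits 63:0   */
    uint64_t hi;   /* bits 127:64 */
} xmm128_t;

/* VMOVHPS (VEX.128/EVEX encoded load):
     DEST[63:0]   := SRC1[63:0]   (low quadword of the first source register)
     DEST[127:64] := SRC2[63:0]   (the 64-bit memory operand)
     DEST[MAXVL-1:128] := 0       (upper vector bits zeroed; not represented here) */
static xmm128_t vmovhps_load(xmm128_t src1, uint64_t src2_m64) {
    xmm128_t dest = { .lo = src1.lo, .hi = src2_m64 };
    return dest;
}

/* VMOVHPS (store): DEST[63:0] := SRC[127:64] */
static uint64_t vmovhps_store(xmm128_t src) {
    return src.hi;
}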

This UNOFFICIAL reference was generated from the official Intel® 64 and IA-32 Architectures Software Developer’s Manual by a dumb script. There is no guarantee that some parts aren't mangled or broken. It is distributed WITHOUT ANY WARRANTY, without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.