MOVLPS—Move Low Packed Single-Precision Floating-Point Values

Opcode/Instruction                           | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
NP 0F 12 /r MOVLPS xmm1, m64                 | A     | V/V                    | SSE                | Move two packed single-precision floating-point values from m64 to low quadword of xmm1.
VEX.128.0F.WIG 12 /r VMOVLPS xmm2, xmm1, m64 | B     | V/V                    | AVX                | Merge two packed single-precision floating-point values from m64 and the high quadword of xmm1.
EVEX.128.0F.W0 12 /r VMOVLPS xmm2, xmm1, m64 | D     | V/V                    | AVX512F            | Merge two packed single-precision floating-point values from m64 and the high quadword of xmm1.
0F 13 /r MOVLPS m64, xmm1                    | C     | V/V                    | SSE                | Move two packed single-precision floating-point values from low quadword of xmm1 to m64.
VEX.128.0F.WIG 13 /r VMOVLPS m64, xmm1       | C     | V/V                    | AVX                | Move two packed single-precision floating-point values from low quadword of xmm1 to m64.
EVEX.128.0F.W0 13 /r VMOVLPS m64, xmm1       | E     | V/V                    | AVX512F            | Move two packed single-precision floating-point values from low quadword of xmm1 to m64.

Instruction Operand Encoding

Op/En | Tuple Type | Operand 1        | Operand 2     | Operand 3     | Operand 4
A     | NA         | ModRM:reg (r, w) | ModRM:r/m (r) | NA            | NA
B     | NA         | ModRM:reg (w)    | VEX.vvvv (r)  | ModRM:r/m (r) | NA
C     | NA         | ModRM:r/m (w)    | ModRM:reg (r) | NA            | NA
D     | Tuple2     | ModRM:reg (w)    | EVEX.vvvv (r) | ModRM:r/m (r) | NA
E     | Tuple2     | ModRM:r/m (w)    | ModRM:reg (r) | NA            | NA

Description

This instruction cannot be used for register to register or memory to memory moves.

128-bit Legacy SSE load:
Moves two packed single-precision floating-point values from the source 64-bit memory operand and stores them in the low 64 bits of the destination XMM register. The upper 64 bits of the XMM register are preserved. Bits (MAXVL-1:128) of the corresponding destination register are preserved.

VEX.128 & EVEX encoded load:
Loads two packed single-precision floating-point values from the source 64-bit memory operand (the third operand), merges them with the upper 64 bits of the first source operand (the second operand), and stores them in the low 128 bits of the destination register (the first operand). Bits (MAXVL-1:128) of the corresponding destination register are zeroed.

128-bit store:
Stores two packed single-precision floating-point values from the low 64 bits of the XMM register source (second operand) to the 64-bit memory location (first operand).

Note: VMOVLPS (store) (VEX.128.0F 13 /r) is legal and has the same behavior as the existing 0F 13 store. For VMOVLPS (store), VEX.vvvv and EVEX.vvvv are reserved and must be 1111b; otherwise the instruction will #UD. An attempt to execute VMOVLPS encoded with VEX.L or EVEX.L'L = 1 will cause an #UD exception.
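The load-form semantics described above (low quadword replaced from memory, high quadword kept) can be exercised from C without writing assembly. A minimal sketch, assuming an SSE-capable x86 target; the helper name load_low_pair is illustrative, and _mm_loadl_pi is the intrinsic compilers typically lower to MOVLPS (or VMOVLPS under AVX):

```c
#include <xmmintrin.h>  /* SSE intrinsics */

/* Replace the low two floats of v with the two floats at p, preserving
 * the high two floats -- the MOVLPS/VMOVLPS load behavior.  Compilers
 * typically emit MOVLPS (or a merging VMOVLPS) for _mm_loadl_pi. */
static inline __m128 load_low_pair(__m128 v, const float p[2])
{
    return _mm_loadl_pi(v, (const __m64 *)p);
}
```

Note that only 64 bits are read from memory, so p needs no 16-byte alignment, unlike a full MOVAPS load.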

Operation

MOVLPS (128-bit Legacy SSE load)
DEST[63:0] := SRC[63:0]
DEST[MAXVL-1:64] (Unmodified)

VMOVLPS (VEX.128 & EVEX encoded load)
DEST[63:0] := SRC2[63:0]
DEST[127:64] := SRC1[127:64]
DEST[MAXVL-1:128] := 0

VMOVLPS (store)
DEST[63:0] := SRC[63:0]

Intel C/C++ Compiler Intrinsic Equivalent

MOVLPS __m128 _mm_loadl_pi ( __m128 a, __m64 *p)
MOVLPS void _mm_storel_pi (__m64 *p, __m128 a)

SIMD Floating-Point Exceptions

None

Other Exceptions

Non-EVEX-encoded instruction, see Table 2-22, "Type 5 Class Exception Conditions"; additionally:
#UD If VEX.L = 1.
EVEX-encoded instruction, see Table 2-57, "Type E9NF Class Exception Conditions".
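The store form has a matching intrinsic, _mm_storel_pi, listed above. A minimal sketch, assuming an SSE-capable x86 target; the helper name store_low_pair is illustrative:

```c
#include <xmmintrin.h>  /* SSE intrinsics */

/* Write only the low two floats of v to p -- the MOVLPS store behavior.
 * The 64 bits beyond the destination quadword are left untouched, and
 * p needs no particular alignment. */
static inline void store_low_pair(float p[2], __m128 v)
{
    _mm_storel_pi((__m64 *)p, v);
}
```

Pairing _mm_loadl_pi and _mm_storel_pi is a common idiom for moving a pair of floats (e.g. an x/y coordinate) through the low half of an XMM register without disturbing the high half.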

This UNOFFICIAL reference was generated from the official Intel® 64 and IA-32 Architectures Software Developer's Manual by a dumb script. There is no guarantee that some parts aren't mangled or broken. It is distributed WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.