MOVLPS—Move Low Packed Single-Precision Floating-Point Values

Description

This instruction cannot be used for register-to-register or memory-to-memory moves.

128-bit Legacy SSE load: Moves two packed single-precision floating-point values from the source 64-bit memory operand and stores them in the low 64 bits of the destination XMM register. The upper 64 bits of the XMM register are preserved. Bits (MAXVL-1:128) of the corresponding destination register are preserved.

VEX.128 and EVEX encoded load: Loads two packed single-precision floating-point values from the source 64-bit memory operand (the third operand), merges them with the upper 64 bits of the first source operand (the second operand), and stores them in the low 128 bits of the destination register (the first operand). Bits (MAXVL-1:128) of the corresponding destination register are zeroed.

128-bit store: Stores two packed single-precision floating-point values from the low 64 bits of the XMM register source (second operand) to the 64-bit memory location (first operand).

Note: VMOVLPS (store) (VEX.128.0F 13 /r) is legal and has the same behavior as the existing 0F 13 store. For VMOVLPS (store), VEX.vvvv and EVEX.vvvv are reserved and must be 1111b; otherwise the instruction will #UD.

An attempt to execute VMOVLPS encoded with VEX.L = 1 or EVEX.L'L = 1 will cause a #UD exception.

Opcode/Instruction                            | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
NP 0F 12 /r MOVLPS xmm1, m64                  | A     | V/V                    | SSE                | Move two packed single-precision floating-point values from m64 to low quadword of xmm1.
VEX.128.0F.WIG 12 /r VMOVLPS xmm2, xmm1, m64  | B     | V/V                    | AVX                | Merge two packed single-precision floating-point values from m64 and the high quadword of xmm1.
EVEX.128.0F.W0 12 /r VMOVLPS xmm2, xmm1, m64  | D     | V/V                    | AVX512F            | Merge two packed single-precision floating-point values from m64 and the high quadword of xmm1.
0F 13 /r MOVLPS m64, xmm1                     | C     | V/V                    | SSE                | Move two packed single-precision floating-point values from low quadword of xmm1 to m64.
VEX.128.0F.WIG 13 /r VMOVLPS m64, xmm1        | C     | V/V                    | AVX                | Move two packed single-precision floating-point values from low quadword of xmm1 to m64.
EVEX.128.0F.W0 13 /r VMOVLPS m64, xmm1        | E     | V/V                    | AVX512F            | Move two packed single-precision floating-point values from low quadword of xmm1 to m64.

Instruction Operand Encoding

Op/En | Tuple Type | Operand 1        | Operand 2     | Operand 3     | Operand 4
A     | NA         | ModRM:reg (r, w) | ModRM:r/m (r) | NA            | NA
B     | NA         | ModRM:reg (w)    | VEX.vvvv (r)  | ModRM:r/m (r) | NA
C     | NA         | ModRM:r/m (w)    | ModRM:reg (r) | NA            | NA
D     | Tuple2     | ModRM:reg (w)    | EVEX.vvvv (r) | ModRM:r/m (r) | NA
E     | Tuple2     | ModRM:r/m (w)    | ModRM:reg (r) | NA            | NA

Operation

MOVLPS (128-bit Legacy SSE load)
DEST[63:0] := SRC[63:0]
DEST[MAXVL-1:64] (Unmodified)

VMOVLPS (VEX.128 and EVEX encoded load)
DEST[63:0] := SRC2[63:0]
DEST[127:64] := SRC1[127:64]
DEST[MAXVL-1:128] := 0

VMOVLPS (store)
DEST[63:0] := SRC[63:0]

Intel C/C++ Compiler Intrinsic Equivalent

MOVLPS __m128 _mm_loadl_pi ( __m128 a, __m64 *p)
MOVLPS void _mm_storel_pi (__m64 *p, __m128 a)

SIMD Floating-Point Exceptions

None

Other Exceptions

Non-EVEX-encoded instruction, see Table 2-22, "Type 5 Class Exception Conditions"; additionally:
#UD If VEX.L = 1.
EVEX-encoded instruction, see Table 2-57, "Type E9NF Class Exception Conditions".

This UNOFFICIAL reference was generated from the official Intel® 64 and IA-32 Architectures Software Developer's Manual by a dumb script. There is no guarantee that some parts aren't mangled or broken and is distributed WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.