MOVQ—Move Quadword

Opcode/Instruction                               Op/En  64/32-bit Mode  CPUID Feature Flag  Description
NP 0F 6F /r MOVQ mm, mm/m64                      A      V/V             MMX                 Move quadword from mm/m64 to mm.
NP 0F 7F /r MOVQ mm/m64, mm                      B      V/V             MMX                 Move quadword from mm to mm/m64.
F3 0F 7E /r MOVQ xmm1, xmm2/m64                  A      V/V             SSE2                Move quadword from xmm2/mem64 to xmm1.
VEX.128.F3.0F.WIG 7E /r VMOVQ xmm1, xmm2/m64     A      V/V             AVX                 Move quadword from xmm2/m64 to xmm1.
EVEX.128.F3.0F.W1 7E /r VMOVQ xmm1, xmm2/m64     C      V/V             AVX512F             Move quadword from xmm2/m64 to xmm1.
66 0F D6 /r MOVQ xmm2/m64, xmm1                  B      V/V             SSE2                Move quadword from xmm1 to xmm2/mem64.
VEX.128.66.0F.WIG D6 /r VMOVQ xmm1/m64, xmm2     B      V/V             AVX                 Move quadword from xmm2 register to xmm1/m64.
EVEX.128.66.0F.W1 D6 /r VMOVQ xmm1/m64, xmm2     D      V/V             AVX512F             Move quadword from xmm2 register to xmm1/m64.

Instruction Operand Encoding

Op/En  Tuple Type     Operand 1       Operand 2       Operand 3  Operand 4
A      NA             ModRM:reg (w)   ModRM:r/m (r)   NA         NA
B      NA             ModRM:r/m (w)   ModRM:reg (r)   NA         NA
C      Tuple1 Scalar  ModRM:reg (w)   ModRM:r/m (r)   NA         NA
D      Tuple1 Scalar  ModRM:r/m (w)   ModRM:reg (r)   NA         NA

Description

Copies a quadword from the source operand (second operand) to the destination operand (first operand). The source and destination operands can be MMX technology registers, XMM registers, or 64-bit memory locations. This instruction can be used to move a quadword between two MMX technology registers or between an MMX technology register and a 64-bit memory location, or to move data between two XMM registers or between an XMM register and a 64-bit memory location. The instruction cannot be used to transfer data between memory locations.

When the source operand is an XMM register, the low quadword is moved; when the destination operand is an XMM register, the quadword is stored to the low quadword of the register, and the high quadword is cleared to all 0s.

In 64-bit mode and if not encoded using VEX/EVEX, use of the REX prefix in the form of REX.R permits this instruction to access additional registers (XMM8-XMM15).

Note: VEX.vvvv and EVEX.vvvv are reserved and must be 1111b; otherwise, the instruction will #UD.

An attempt to execute VMOVQ encoded with VEX.L = 1 will cause a #UD exception.

Operation

MOVQ instruction when operating on MMX technology registers and memory locations:
    DEST := SRC;

MOVQ instruction when source and destination operands are XMM registers:
    DEST[63:0] := SRC[63:0];
    DEST[127:64] := 0000000000000000H;

MOVQ instruction when source operand is XMM register and destination operand is memory location:
    DEST := SRC[63:0];

MOVQ instruction when source operand is memory location and destination operand is XMM register:
    DEST[63:0] := SRC;
    DEST[127:64] := 0000000000000000H;

VMOVQ (VEX.128.F3.0F 7E) with XMM register source and destination:
    DEST[63:0] := SRC[63:0]
    DEST[MAXVL-1:64] := 0

VMOVQ (VEX.128.66.0F D6) with XMM register source and destination:
    DEST[63:0] := SRC[63:0]
    DEST[MAXVL-1:64] := 0

VMOVQ (7E - EVEX encoded version) with XMM register source and destination:
    DEST[63:0] := SRC[63:0]
    DEST[MAXVL-1:64] := 0

VMOVQ (D6 - EVEX encoded version) with XMM register source and destination:
    DEST[63:0] := SRC[63:0]
    DEST[MAXVL-1:64] := 0

VMOVQ (7E) with memory source:
    DEST[63:0] := SRC[63:0]
    DEST[MAXVL-1:64] := 0

VMOVQ (7E - EVEX encoded version) with memory source:
    DEST[63:0] := SRC[63:0]
    DEST[MAXVL-1:64] := 0

VMOVQ (D6) with memory dest:
    DEST[63:0] := SRC2[63:0]

Flags Affected

None.

Intel C/C++ Compiler Intrinsic Equivalent

VMOVQ __m128i _mm_loadu_si64( void * s);
VMOVQ void _mm_storeu_si64( void * d, __m128i s);
MOVQ __m128i _mm_move_epi64(__m128i a)

SIMD Floating-Point Exceptions

None.

Other Exceptions

See Table 22-8, “Exception Conditions for Legacy SIMD/MMX Instructions without FP Exception” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

This UNOFFICIAL reference was generated from the official Intel® 64 and IA-32 Architectures Software Developer’s Manual by a dumb script. There is no guarantee that parts aren’t mangled or broken. It is distributed WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.