PSUBQ—Subtract Packed Quadword Integers

Description

Subtracts the second operand (source operand) from the first operand (destination operand) and stores the result in the destination operand. When packed quadword operands are used, a SIMD subtract is performed. When a quadword result is too large to be represented in 64 bits (overflow), the result is wrapped around and the low 64 bits are written to the destination element (that is, the carry is ignored).

Note that the (V)PSUBQ instruction can operate on either unsigned or signed (two's complement notation) integers; however, it does not set bits in the EFLAGS register to indicate overflow and/or a carry. To prevent undetected overflow conditions, software must control the ranges of the values upon which it operates.

In 64-bit mode and not encoded with VEX/EVEX, using a REX prefix in the form of REX.R permits this instruction to access additional registers (XMM8-XMM15).

Legacy SSE version 64-bit operand: The source operand can be a quadword integer stored in an MMX technology register or a 64-bit memory location.

128-bit Legacy SSE version: The second source operand is an XMM register or a 128-bit memory location. The first source operand and destination operands are XMM registers.
Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged.

Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
NP 0F FB /r (1) PSUBQ mm1, mm2/m64 | A | V/V | SSE2 | Subtract quadword integer in mm1 from mm2/m64.
66 0F FB /r PSUBQ xmm1, xmm2/m128 | A | V/V | SSE2 | Subtract packed quadword integers in xmm1 from xmm2/m128.
VEX.128.66.0F.WIG FB /r VPSUBQ xmm1, xmm2, xmm3/m128 | B | V/V | AVX | Subtract packed quadword integers in xmm3/m128 from xmm2.
VEX.256.66.0F.WIG FB /r VPSUBQ ymm1, ymm2, ymm3/m256 | B | V/V | AVX2 | Subtract packed quadword integers in ymm3/m256 from ymm2.
EVEX.128.66.0F.W1 FB /r VPSUBQ xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcst | C | V/V | AVX512VL AVX512F | Subtract packed quadword integers in xmm3/m128/m64bcst from xmm2 and store in xmm1 using writemask k1.
EVEX.256.66.0F.W1 FB /r VPSUBQ ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcst | C | V/V | AVX512VL AVX512F | Subtract packed quadword integers in ymm3/m256/m64bcst from ymm2 and store in ymm1 using writemask k1.
EVEX.512.66.0F.W1 FB /r VPSUBQ zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst | C | V/V | AVX512F | Subtract packed quadword integers in zmm3/m512/m64bcst from zmm2 and store in zmm1 using writemask k1.

NOTES:
1. See note in Section 2.4, "AVX and SSE Instruction Exception Specification" in the Intel® 64 and IA-32 Architectures Software Developer's Manual, Volume 2A and Section 22.25.3, "Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers" in the Intel® 64 and IA-32 Architectures Software Developer's Manual, Volume 3A.

Instruction Operand Encoding

Op/En | Tuple Type | Operand 1 | Operand 2 | Operand 3 | Operand 4
A | NA | ModRM:reg (r, w) | ModRM:r/m (r) | NA | NA
B | NA | ModRM:reg (w) | VEX.vvvv (r) | ModRM:r/m (r) | NA
C | Full | ModRM:reg (w) | EVEX.vvvv (r) | ModRM:r/m (r) | NA

VEX.128 encoded version: The second source operand is an XMM register or a 128-bit memory location. The first source operand and destination operands are XMM registers. Bits (MAXVL-1:128) of the destination YMM register are zeroed.

VEX.256 encoded version: The second source operand is a YMM register or a 256-bit memory location. The first source operand and destination operands are YMM registers. Bits (MAXVL-1:256) of the corresponding ZMM register are zeroed.

EVEX encoded VPSUBQ: The second source operand is a ZMM/YMM/XMM register, a 512/256/128-bit memory location, or a 512/256/128-bit vector broadcast from a 64-bit memory location. The first source operand and destination operands are ZMM/YMM/XMM registers. The destination is conditionally updated with writemask k1.

Operation

PSUBQ (with 64-Bit operands)
DEST[63:0] := DEST[63:0] - SRC[63:0];

PSUBQ (with 128-Bit operands)
DEST[63:0] := DEST[63:0] - SRC[63:0];
DEST[127:64] := DEST[127:64] - SRC[127:64];

VPSUBQ (VEX.128 encoded version)
DEST[63:0] := SRC1[63:0] - SRC2[63:0]
DEST[127:64] := SRC1[127:64] - SRC2[127:64]
DEST[MAXVL-1:128] := 0

VPSUBQ (VEX.256 encoded version)
DEST[63:0] := SRC1[63:0] - SRC2[63:0]
DEST[127:64] := SRC1[127:64] - SRC2[127:64]
DEST[191:128] := SRC1[191:128] - SRC2[191:128]
DEST[255:192] := SRC1[255:192] - SRC2[255:192]
DEST[MAXVL-1:256] := 0

VPSUBQ (EVEX encoded versions)
(KL, VL) = (2, 128), (4, 256), (8, 512)
FOR j := 0 TO KL-1
    i := j * 64
    IF k1[j] OR *no writemask* THEN
        IF (EVEX.b = 1) AND (SRC2 *is memory*)
            THEN DEST[i+63:i] := SRC1[i+63:i] - SRC2[63:0]
            ELSE DEST[i+63:i] := SRC1[i+63:i] - SRC2[i+63:i]
        FI;
    ELSE
        IF *merging-masking*    ; merging-masking
            THEN *DEST[i+63:i] remains unchanged*
            ELSE                ; zeroing-masking
                DEST[i+63:i] := 0
        FI
    FI;
ENDFOR;
DEST[MAXVL-1:VL] := 0

Intel C/C++ Compiler Intrinsic Equivalents

VPSUBQ __m512i _mm512_sub_epi64(__m512i a, __m512i b);
VPSUBQ __m512i _mm512_mask_sub_epi64(__m512i s, __mmask8 k, __m512i a, __m512i b);
VPSUBQ __m512i _mm512_maskz_sub_epi64(__mmask8 k, __m512i a, __m512i b);
VPSUBQ __m256i _mm256_mask_sub_epi64(__m256i s, __mmask8 k, __m256i a, __m256i b);
VPSUBQ __m256i _mm256_maskz_sub_epi64(__mmask8 k, __m256i a, __m256i b);
VPSUBQ __m128i _mm_mask_sub_epi64(__m128i s, __mmask8 k, __m128i a, __m128i b);
VPSUBQ __m128i _mm_maskz_sub_epi64(__mmask8 k, __m128i a, __m128i b);
PSUBQ: __m64 _mm_sub_si64(__m64 m1, __m64 m2)
(V)PSUBQ: __m128i _mm_sub_epi64(__m128i m1, __m128i m2)
VPSUBQ: __m256i _mm256_sub_epi64(__m256i m1, __m256i m2)

Flags Affected

None.

Numeric Exceptions

None.

Other Exceptions

Non-EVEX-encoded instruction, see Table 2-21, "Type 4 Class Exception Conditions".
EVEX-encoded VPSUBQ, see Table 2-49, "Type E4 Class Exception Conditions".

This UNOFFICIAL reference was generated from the official Intel® 64 and IA-32 Architectures Software Developer's Manual by a dumb script. There is no guarantee that some parts aren't mangled or broken, and it is distributed WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.