VPGATHERDD/VPGATHERQD — Gather Packed Dword Values Using Signed Dword/Qword Indices

Opcode/Instruction, Op/En, 64/32-bit Mode, CPUID Feature Flag, Description:

VEX.128.66.0F38.W0 90 /r  VPGATHERDD xmm1, vm32x, xmm2
  Op/En: RMV.  64/32-bit Mode: V/V.  CPUID Feature Flag: AVX2.
  Using dword indices specified in vm32x, gather dword values from memory conditioned on mask specified by xmm2. Conditionally gathered elements are merged into xmm1.

VEX.128.66.0F38.W0 91 /r  VPGATHERQD xmm1, vm64x, xmm2
  Op/En: RMV.  64/32-bit Mode: V/V.  CPUID Feature Flag: AVX2.
  Using qword indices specified in vm64x, gather dword values from memory conditioned on mask specified by xmm2. Conditionally gathered elements are merged into xmm1.

VEX.256.66.0F38.W0 90 /r  VPGATHERDD ymm1, vm32y, ymm2
  Op/En: RMV.  64/32-bit Mode: V/V.  CPUID Feature Flag: AVX2.
  Using dword indices specified in vm32y, gather dword values from memory conditioned on mask specified by ymm2. Conditionally gathered elements are merged into ymm1.

VEX.256.66.0F38.W0 91 /r  VPGATHERQD xmm1, vm64y, xmm2
  Op/En: RMV.  64/32-bit Mode: V/V.  CPUID Feature Flag: AVX2.
  Using qword indices specified in vm64y, gather dword values from memory conditioned on mask specified by xmm2. Conditionally gathered elements are merged into xmm1.

Instruction Operand Encoding

Op/En RMV:
  Operand 1: ModRM:reg (r, w)
  Operand 2: BaseReg (R): VSIB:base, VectorReg (R): VSIB:index
  Operand 3: VEX.vvvv (r, w)
  Operand 4: NA

Description

The instruction conditionally loads up to 4 or 8 dword values from memory addresses specified by the memory operand (the second operand) using dword indices. The memory operand uses the VSIB form of the SIB byte to specify a general-purpose register as the common base, a vector register for an array of indices relative to the base, and a constant scale factor.

The mask operand (the third operand) specifies the conditional load operation from each memory address and the corresponding update of each data element of the destination operand (the first operand). Conditionality is specified by the most significant bit of each data element of the mask register. If an element's mask bit is not set, the corresponding element of the destination register is left unchanged. The width of the data elements in the destination and mask registers is identical. The entire mask register will be set to zero by this instruction unless the instruction causes an exception.

Using qword indices, the instruction conditionally loads up to 2 or 4 dword values from the VSIB addressing memory operand, and updates the lower half of the destination register. The upper 128 or 256 bits of the destination register are zeroed with qword indices.

This instruction can be suspended by an exception if at least one element is already gathered (i.e., if the exception is triggered by an element other than the rightmost one with its mask bit set). When this happens, the destination register and the mask operand are partially updated; those elements that have been gathered are placed into the destination register and have their mask bits set to zero. If any traps or interrupts are pending from already gathered elements, they will be delivered in lieu of the exception; in this case, EFLAGS.RF is set to one so an instruction breakpoint is not re-triggered when the instruction is continued.

If the data size and index size are different, part of the destination register and part of the mask register do not correspond to any elements being gathered. This instruction sets those parts to zero. It may do this to one or both of those registers even if the instruction triggers an exception, and even if it triggers the exception before gathering any elements.

VEX.128 version: For dword indices, the instruction will gather four dword values. For qword indices, the instruction will gather two values and zero the upper 64 bits of the destination.
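For illustration, the masked-gather semantics described above can be exercised from C through the compiler intrinsics listed later in this reference. The following is a minimal sketch, not normative: the table contents, mask pattern, and merge value are arbitrary, and it assumes an AVX2-capable target (e.g., compiled with -mavx2).

    #include <immintrin.h>
    #include <stdio.h>

    int main(void) {
        int table[16];
        for (int i = 0; i < 16; ++i) table[i] = i * 10;

        /* Dword indices of the eight elements to gather. */
        __m256i idx  = _mm256_setr_epi32(0, 2, 4, 6, 8, 10, 12, 14);
        /* Only the MSB of each mask element matters: -1 selects, 0 skips. */
        __m256i mask = _mm256_setr_epi32(-1, 0, -1, 0, -1, 0, -1, 0);
        /* Elements whose mask MSB is clear keep the value from src. */
        __m256i src  = _mm256_set1_epi32(-999);

        __m256i r = _mm256_mask_i32gather_epi32(src, table, idx, mask, 4);

        int out[8];
        _mm256_storeu_si256((__m256i *)out, r);
        for (int i = 0; i < 8; ++i) printf("%d ", out[i]);
        printf("\n"); /* prints: 0 -999 40 -999 80 -999 120 -999 */
        return 0;
    }

The scale argument (here 4, the size of an int) must be a compile-time constant of 1, 2, 4, or 8, mirroring the SIB scale factor.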

VEX.256 version: For dword indices, the instruction will gather eight dword values. For qword indices, the instruction will gather four values and zero the upper 128 bits of the destination.

Note that:

- If any pair of the index, mask, or destination registers are the same, this instruction results in a #UD fault.
- The values may be read from memory in any order. Memory ordering with other instructions follows the Intel-64 memory-ordering model.
- Faults are delivered in a right-to-left manner. That is, if a fault is triggered by an element and delivered, all elements closer to the LSB of the destination will be completed (and non-faulting). Individual elements closer to the MSB may or may not be completed. If a given element triggers multiple faults, they are delivered in the conventional order.
- Elements may be gathered in any order, but faults must be delivered in a right-to-left order; thus, elements to the left of a faulting one may be gathered before the fault is delivered. A given implementation of this instruction is repeatable: given the same input values and architectural state, the same set of elements to the left of the faulting one will be gathered.
- This instruction does not perform AC checks, and so will never deliver an AC fault.
- This instruction will cause a #UD if the address size attribute is 16-bit.
- This instruction will cause a #UD if the memory operand is encoded without the SIB byte.
- This instruction should not be used to access memory-mapped I/O, as the ordering of the individual loads it performs is implementation specific, and some implementations may use loads larger than the data element size or load elements an indeterminate number of times.
- The scaled index may require more bits to represent than the address bits used by the processor (e.g., in 32-bit mode, if the scale is greater than one). In this case, the most significant bits beyond the number of address bits are ignored.

Operation

DEST := SRC1;
BASE_ADDR: base register encoded in VSIB addressing;
VINDEX: the vector index register encoded by VSIB addressing;
SCALE: scale factor encoded by SIB:[7:6];
DISP: optional 1, 4 byte displacement;
MASK := SRC3;

VPGATHERDD (VEX.128 version)

MASK[MAXVL-1:128] := 0;
FOR j := 0 to 3
    i := j * 32;
    IF MASK[31+i] THEN
        MASK[i+31:i] := FFFFFFFFH; // extend from most significant bit
    ELSE
        MASK[i+31:i] := 0;
    FI;
ENDFOR
FOR j := 0 to 3
    i := j * 32;
    DATA_ADDR := BASE_ADDR + SignExtend(VINDEX[i+31:i]) * SCALE + DISP;
    IF MASK[31+i] THEN
        DEST[i+31:i] := FETCH_32BITS(DATA_ADDR); // a fault exits the instruction
    FI;
    MASK[i+31:i] := 0;
ENDFOR
DEST[MAXVL-1:128] := 0;
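To make the pseudocode above concrete, here is a plain-C sketch of the same VEX.128 VPGATHERDD flow. The function name and argument layout are illustrative only; per-element fault suspension and address-size truncation are not modeled.

    #include <stdint.h>
    #include <string.h>

    /* Scalar model of the VEX.128 VPGATHERDD operation above.
       dest/mask are 4-element dword vectors; base, idx, scale, and disp
       stand in for the VSIB-encoded base register, index register,
       scale factor, and displacement. */
    static void vpgatherdd_128_model(uint32_t dest[4], uint32_t mask[4],
                                     const uint8_t *base, const int32_t idx[4],
                                     int scale, int64_t disp)
    {
        for (int j = 0; j < 4; ++j)          /* extend mask from the MSB */
            mask[j] = (mask[j] & 0x80000000u) ? 0xFFFFFFFFu : 0u;

        for (int j = 0; j < 4; ++j) {
            const uint8_t *addr = base + (int64_t)idx[j] * scale + disp;
            if (mask[j])                     /* conditional 32-bit fetch */
                memcpy(&dest[j], addr, 4);
            mask[j] = 0;                     /* element retired: clear mask */
        }
        /* DEST[MAXVL-1:128] := 0 would zero the upper ymm/zmm bits here. */
    }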

VPGATHERQD (VEX.128 version)

MASK[MAXVL-1:64] := 0;
FOR j := 0 to 1
    i := j * 32;
    IF MASK[31+i] THEN
        MASK[i+31:i] := FFFFFFFFH; // extend from most significant bit
    ELSE
        MASK[i+31:i] := 0;
    FI;
ENDFOR
FOR j := 0 to 1
    k := j * 64;
    i := j * 32;
    DATA_ADDR := BASE_ADDR + SignExtend(VINDEX[k+63:k]) * SCALE + DISP;
    IF MASK[31+i] THEN
        DEST[i+31:i] := FETCH_32BITS(DATA_ADDR); // a fault exits the instruction
    FI;
    MASK[i+31:i] := 0;
ENDFOR
DEST[MAXVL-1:64] := 0;

VPGATHERDD (VEX.256 version)

MASK[MAXVL-1:256] := 0;
FOR j := 0 to 7
    i := j * 32;
    IF MASK[31+i] THEN
        MASK[i+31:i] := FFFFFFFFH; // extend from most significant bit
    ELSE
        MASK[i+31:i] := 0;
    FI;
ENDFOR
FOR j := 0 to 7
    i := j * 32;
    DATA_ADDR := BASE_ADDR + SignExtend(VINDEX[i+31:i]) * SCALE + DISP;
    IF MASK[31+i] THEN
        DEST[i+31:i] := FETCH_32BITS(DATA_ADDR); // a fault exits the instruction
    FI;
    MASK[i+31:i] := 0;
ENDFOR
DEST[MAXVL-1:256] := 0;
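The qword-index forms step through the index register in 64-bit increments (k) while reading and writing 32-bit elements (i), and zero the unused upper half of both destination and mask. A plain-C sketch of the VEX.128 VPGATHERQD flow shown earlier (helper name and argument layout are illustrative; fault suspension is not modeled):

    #include <stdint.h>
    #include <string.h>

    /* Scalar model of the VEX.128 VPGATHERQD operation: two sign-extended
       64-bit indices select two 32-bit elements; the upper halves of the
       destination and mask vectors are zeroed. */
    static void vpgatherqd_128_model(uint32_t dest[4], uint32_t mask[4],
                                     const uint8_t *base, const int64_t idx[2],
                                     int scale, int64_t disp)
    {
        mask[2] = mask[3] = 0;                   /* MASK[MAXVL-1:64] := 0 */
        for (int j = 0; j < 2; ++j)              /* extend mask from the MSB */
            mask[j] = (mask[j] & 0x80000000u) ? 0xFFFFFFFFu : 0u;

        for (int j = 0; j < 2; ++j) {            /* k = j*64, i = j*32 */
            const uint8_t *addr = base + idx[j] * (int64_t)scale + disp;
            if (mask[j])
                memcpy(&dest[j], addr, 4);       /* conditional 32-bit fetch */
            mask[j] = 0;                         /* element retired */
        }
        dest[2] = dest[3] = 0;                   /* DEST[MAXVL-1:64] := 0 */
    }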

VPGATHERQD (VEX.256 version)

MASK[MAXVL-1:128] := 0;
FOR j := 0 to 3
    i := j * 32;
    IF MASK[31+i] THEN
        MASK[i+31:i] := FFFFFFFFH; // extend from most significant bit
    ELSE
        MASK[i+31:i] := 0;
    FI;
ENDFOR
FOR j := 0 to 3
    k := j * 64;
    i := j * 32;
    DATA_ADDR := BASE_ADDR + SignExtend(VINDEX[k+63:k]) * SCALE + DISP;
    IF MASK[31+i] THEN
        DEST[i+31:i] := FETCH_32BITS(DATA_ADDR); // a fault exits the instruction
    FI;
    MASK[i+31:i] := 0;
ENDFOR
DEST[MAXVL-1:128] := 0;

Intel C/C++ Compiler Intrinsic Equivalent

VPGATHERDD: __m128i _mm_i32gather_epi32 (int const * base, __m128i index, const int scale);
VPGATHERDD: __m128i _mm_mask_i32gather_epi32 (__m128i src, int const * base, __m128i index, __m128i mask, const int scale);
VPGATHERDD: __m256i _mm256_i32gather_epi32 (int const * base, __m256i index, const int scale);
VPGATHERDD: __m256i _mm256_mask_i32gather_epi32 (__m256i src, int const * base, __m256i index, __m256i mask, const int scale);
VPGATHERQD: __m128i _mm_i64gather_epi32 (int const * base, __m128i index, const int scale);
VPGATHERQD: __m128i _mm_mask_i64gather_epi32 (__m128i src, int const * base, __m128i index, __m128i mask, const int scale);
VPGATHERQD: __m128i _mm256_i64gather_epi32 (int const * base, __m256i index, const int scale);
VPGATHERQD: __m128i _mm256_mask_i64gather_epi32 (__m128i src, int const * base, __m256i index, __m128i mask, const int scale);

SIMD Floating-Point Exceptions

None

Other Exceptions

See Table 2-27, “Type 12 Class Exception Conditions”.
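As a closing usage note for the qword-index intrinsic forms listed above, the following minimal sketch shows the upper-half zeroing behavior described in this reference. The table and index values are arbitrary, and an AVX2-capable target is assumed (e.g., compiled with -mavx2).

    #include <immintrin.h>
    #include <stdio.h>

    int main(void) {
        int table[8] = {0, 10, 20, 30, 40, 50, 60, 70};

        /* Two 64-bit indices produce two 32-bit results in the low half
           of the destination; the upper 64 bits are zeroed. */
        __m128i idx = _mm_set_epi64x(5, 1);           /* element 0 = 1, element 1 = 5 */
        __m128i r   = _mm_i64gather_epi32(table, idx, 4);

        int out[4];
        _mm_storeu_si128((__m128i *)out, r);
        printf("%d %d %d %d\n", out[0], out[1], out[2], out[3]);
        /* prints: 10 50 0 0 */
        return 0;
    }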
