GF2P8AFFINEQB — Galois Field Affine Transformation

Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
66 0F3A CE /r /ib GF2P8AFFINEQB xmm1, xmm2/m128, imm8 | A | V/V | GFNI | Computes affine transformation in the finite field GF(2^8).
VEX.128.66.0F3A.W1 CE /r /ib VGF2P8AFFINEQB xmm1, xmm2, xmm3/m128, imm8 | B | V/V | AVX GFNI | Computes affine transformation in the finite field GF(2^8).
VEX.256.66.0F3A.W1 CE /r /ib VGF2P8AFFINEQB ymm1, ymm2, ymm3/m256, imm8 | B | V/V | AVX GFNI | Computes affine transformation in the finite field GF(2^8).
EVEX.128.66.0F3A.W1 CE /r /ib VGF2P8AFFINEQB xmm1{k1}{z}, xmm2, xmm3/m128/m64bcst, imm8 | C | V/V | AVX512VL GFNI | Computes affine transformation in the finite field GF(2^8).
EVEX.256.66.0F3A.W1 CE /r /ib VGF2P8AFFINEQB ymm1{k1}{z}, ymm2, ymm3/m256/m64bcst, imm8 | C | V/V | AVX512VL GFNI | Computes affine transformation in the finite field GF(2^8).
EVEX.512.66.0F3A.W1 CE /r /ib VGF2P8AFFINEQB zmm1{k1}{z}, zmm2, zmm3/m512/m64bcst, imm8 | C | V/V | AVX512F GFNI | Computes affine transformation in the finite field GF(2^8).

Instruction Operand Encoding

Op/En | Tuple | Operand 1 | Operand 2 | Operand 3 | Operand 4
A | NA | ModRM:reg (r, w) | ModRM:r/m (r) | imm8 (r) | NA
B | NA | ModRM:reg (w) | VEX.vvvv (r) | ModRM:r/m (r) | imm8 (r)
C | Full | ModRM:reg (w) | EVEX.vvvv (r) | ModRM:r/m (r) | imm8 (r)

Description

The GF2P8AFFINEQB instruction computes an affine transformation in the Galois Field 2^8. For this instruction, an affine transformation is defined by A * x + b, where “A” is an 8 by 8 bit matrix, and “x” and “b” are 8-bit vectors. One SIMD register (operand 1) holds “x” as either 16, 32, or 64 8-bit vectors. A second SIMD register or memory operand (operand 2) contains 2, 4, or 8 “A” values, which are operated upon by the correspondingly aligned 8 “x” values in the first register. The “b” vector is constant for all calculations and is contained in the immediate byte.

The EVEX encoded form of this instruction does not support memory fault suppression. The SSE encoded forms of the instruction require 16-byte alignment on their memory operands.

Operation

define parity(x):
    t := 0    // single bit
    FOR i := 0 to 7:
        t := t XOR x.bit[i]
    return t

define affine_byte(tsrc2qw, src1byte, imm8):
    // parity(x) = 1 if x has an odd number of 1s in it, and 0 otherwise.
    FOR i := 0 to 7:
        retbyte.bit[i] := parity(tsrc2qw.byte[7-i] AND src1byte) XOR imm8.bit[i]
    return retbyte
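As a concrete illustration of the two helper functions above, the following is a minimal scalar sketch in C (the names parity8 and affine_byte_model are illustrative only and not part of any library or of the manual); the per-encoding operation descriptions continue below it.

#include <stdint.h>

/* Odd parity of an 8-bit value: 1 if v has an odd number of set bits. */
static uint8_t parity8(uint8_t v)
{
    v ^= v >> 4;
    v ^= v >> 2;
    v ^= v >> 1;
    return v & 1u;
}

/* Scalar model of affine_byte: A is one 64-bit matrix operand (row i of the
 * 8x8 bit matrix held in A.byte[7-i]), x is one source byte, and b models
 * the imm8 vector. */
static uint8_t affine_byte_model(uint64_t A, uint8_t x, uint8_t b)
{
    uint8_t r = 0;
    for (int i = 0; i < 8; i++) {
        uint8_t row = (uint8_t)(A >> (8 * (7 - i)));           /* A.byte[7-i] */
        r |= (uint8_t)((parity8(row & x) ^ ((b >> i) & 1u)) << i);
    }
    return r;
}

With this model, A = 0x0102040810204080 and b = 0 return x unchanged (the identity matrix under this row ordering), which is a quick way to sanity-check the bit and byte indexing.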

VGF2P8AFFINEQB dest, src1, src2, imm8 (EVEX encoded version)
(KL, VL) = (2, 128), (4, 256), (8, 512)
FOR j := 0 TO KL-1:
    IF SRC2 is memory and EVEX.b == 1:
        tsrc2 := SRC2.qword[0]
    ELSE:
        tsrc2 := SRC2.qword[j]
    FOR b := 0 to 7:
        IF k1[j*8+b] OR *no writemask*:
            DEST.qword[j].byte[b] := affine_byte(tsrc2, SRC1.qword[j].byte[b], imm8)
        ELSE IF *zeroing*:
            DEST.qword[j].byte[b] := 0
        ELSE:
            *DEST.qword[j].byte[b] remains unchanged*
DEST[MAX_VL-1:VL] := 0

VGF2P8AFFINEQB dest, src1, src2, imm8 (128b and 256b VEX encoded versions)
(KL, VL) = (2, 128), (4, 256)
FOR j := 0 TO KL-1:
    FOR b := 0 to 7:
        DEST.qword[j].byte[b] := affine_byte(SRC2.qword[j], SRC1.qword[j].byte[b], imm8)
DEST[MAX_VL-1:VL] := 0

GF2P8AFFINEQB srcdest, src1, imm8 (128b SSE encoded version)
FOR j := 0 TO 1:
    FOR b := 0 to 7:
        SRCDEST.qword[j].byte[b] := affine_byte(SRC1.qword[j], SRCDEST.qword[j].byte[b], imm8)

Intel C/C++ Compiler Intrinsic Equivalent

(V)GF2P8AFFINEQB __m128i _mm_gf2p8affine_epi64_epi8(__m128i, __m128i, int);
(V)GF2P8AFFINEQB __m128i _mm_mask_gf2p8affine_epi64_epi8(__m128i, __mmask16, __m128i, __m128i, int);
(V)GF2P8AFFINEQB __m128i _mm_maskz_gf2p8affine_epi64_epi8(__mmask16, __m128i, __m128i, int);
VGF2P8AFFINEQB __m256i _mm256_gf2p8affine_epi64_epi8(__m256i, __m256i, int);
VGF2P8AFFINEQB __m256i _mm256_mask_gf2p8affine_epi64_epi8(__m256i, __mmask32, __m256i, __m256i, int);
VGF2P8AFFINEQB __m256i _mm256_maskz_gf2p8affine_epi64_epi8(__mmask32, __m256i, __m256i, int);
VGF2P8AFFINEQB __m512i _mm512_gf2p8affine_epi64_epi8(__m512i, __m512i, int);
VGF2P8AFFINEQB __m512i _mm512_mask_gf2p8affine_epi64_epi8(__m512i, __mmask64, __m512i, __m512i, int);
VGF2P8AFFINEQB __m512i _mm512_maskz_gf2p8affine_epi64_epi8(__mmask64, __m512i, __m512i, int);

SIMD Floating-Point Exceptions

None.

Other Exceptions

Legacy-encoded and VEX-encoded: See Table 2-21, “Type 4 Class Exception Conditions”.
EVEX-encoded: See Table 2-50, “Type E4NF Class Exception Conditions”.
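As a usage sketch of the 128-bit intrinsic listed above, the example below applies a bit-reversal matrix to every byte of a vector. It assumes a GFNI-capable CPU and a compiler with GFNI code generation enabled (for example, -mgfni on GCC or Clang); the constant 0x8040201008040201 follows from the affine_byte definition in the Operation section, since it places bit j of each source byte into result bit 7-j.

#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* A reverses the bit order within every byte; the immediate (b) is 0. */
    const __m128i A = _mm_set1_epi64x((long long)0x8040201008040201ULL);
    const __m128i x = _mm_set1_epi8((char)0x01);

    __m128i r = _mm_gf2p8affine_epi64_epi8(x, A, 0);

    uint8_t out[16];
    _mm_storeu_si128((__m128i *)out, r);
    printf("0x%02x\n", (unsigned)out[0]);   /* expected: 0x80, i.e. 0x01 bit-reversed */
    return 0;
}

Note that the last argument maps to imm8 and must be a compile-time constant.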

This UNOFFICIAL reference was generated from the official Intel® 64 and IA-32 Architectures Software Developer’s Manual by a dumb script. There is no guarantee that some parts aren't mangled or broken, and it is distributed WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.