VMASKMOV—Conditional SIMD Packed Loads and Stores

Description

Conditionally moves packed data elements from the second source operand into the corresponding data elements of the destination operand, depending on the mask bits associated with each data element. The mask bits are specified in the first source operand. The mask bit for each data element is the most significant bit of that element in the first source operand. If a mask bit is 1, the corresponding data element is copied from the second source operand to the destination operand. If the mask bit is 0, the corresponding data element is set to zero in the load form of these instructions, and left unmodified in the store form.

The second source operand is a memory address for the load form of these instructions. The destination operand is a memory address for the store form. The other operands are both XMM registers (for the VEX.128 versions) or YMM registers (for the VEX.256 versions).

Faults occur only for memory accesses that are required by mask bits that are set. Faults will not occur due to referencing any memory location whose corresponding mask bit is 0. For example, no faults will be detected if the mask bits are all zero.

Unlike the earlier MASKMOV instructions (MASKMOVQ and MASKMOVDQU), these instructions do not apply a non-temporal hint.

Instruction behavior on alignment-check reporting with mask bits of less than all 1s is the same as with mask bits of all 1s.

VMASKMOV should not be used to access memory-mapped I/O or uncached memory, as the access and ordering of the individual loads or stores it performs are implementation specific.
Opcode/Instruction | Op/En | 64/32-bit Mode | CPUID Feature Flag | Description
VEX.128.66.0F38.W0 2C /r VMASKMOVPS xmm1, xmm2, m128 | RVM | V/V | AVX | Conditionally load packed single-precision values from m128 using mask in xmm2 and store in xmm1.
VEX.256.66.0F38.W0 2C /r VMASKMOVPS ymm1, ymm2, m256 | RVM | V/V | AVX | Conditionally load packed single-precision values from m256 using mask in ymm2 and store in ymm1.
VEX.128.66.0F38.W0 2D /r VMASKMOVPD xmm1, xmm2, m128 | RVM | V/V | AVX | Conditionally load packed double-precision values from m128 using mask in xmm2 and store in xmm1.
VEX.256.66.0F38.W0 2D /r VMASKMOVPD ymm1, ymm2, m256 | RVM | V/V | AVX | Conditionally load packed double-precision values from m256 using mask in ymm2 and store in ymm1.
VEX.128.66.0F38.W0 2E /r VMASKMOVPS m128, xmm1, xmm2 | MVR | V/V | AVX | Conditionally store packed single-precision values from xmm2 using mask in xmm1.
VEX.256.66.0F38.W0 2E /r VMASKMOVPS m256, ymm1, ymm2 | MVR | V/V | AVX | Conditionally store packed single-precision values from ymm2 using mask in ymm1.
VEX.128.66.0F38.W0 2F /r VMASKMOVPD m128, xmm1, xmm2 | MVR | V/V | AVX | Conditionally store packed double-precision values from xmm2 using mask in xmm1.
VEX.256.66.0F38.W0 2F /r VMASKMOVPD m256, ymm1, ymm2 | MVR | V/V | AVX | Conditionally store packed double-precision values from ymm2 using mask in ymm1.

Instruction Operand Encoding

Op/En | Operand 1 | Operand 2 | Operand 3 | Operand 4
RVM | ModRM:reg (w) | VEX.vvvv (r) | ModRM:r/m (r) | NA
MVR | ModRM:r/m (w) | VEX.vvvv (r) | ModRM:reg (r) | NA
This UNOFFICIAL reference was generated from the official Intel® 64 and IA-32 Architectures Software Developer’s Manual by a dumb script. There is no guarantee that some parts aren't mangled or broken, and it is distributed WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.