VCVTPH2PS—Convert 16-bit FP Values to Single-Precision FP Values

Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
VEX.128.66.0F38.W0 13 /r VCVTPH2PS xmm1, xmm2/m64 | A | V/V | F16C | Convert four packed half precision (16-bit) floating-point values in xmm2/m64 to packed single-precision floating-point values in xmm1.
VEX.256.66.0F38.W0 13 /r VCVTPH2PS ymm1, xmm2/m128 | A | V/V | F16C | Convert eight packed half precision (16-bit) floating-point values in xmm2/m128 to packed single-precision floating-point values in ymm1.
EVEX.128.66.0F38.W0 13 /r VCVTPH2PS xmm1 {k1}{z}, xmm2/m64 | B | V/V | AVX512VL AVX512F | Convert four packed half precision (16-bit) floating-point values in xmm2/m64 to packed single-precision floating-point values in xmm1.
EVEX.256.66.0F38.W0 13 /r VCVTPH2PS ymm1 {k1}{z}, xmm2/m128 | B | V/V | AVX512VL AVX512F | Convert eight packed half precision (16-bit) floating-point values in xmm2/m128 to packed single-precision floating-point values in ymm1.
EVEX.512.66.0F38.W0 13 /r VCVTPH2PS zmm1 {k1}{z}, ymm2/m256 {sae} | B | V/V | AVX512F | Convert sixteen packed half precision (16-bit) floating-point values in ymm2/m256 to packed single-precision floating-point values in zmm1.

Instruction Operand Encoding

Op/En | Tuple Type | Operand 1 | Operand 2 | Operand 3 | Operand 4
A | NA | ModRM:reg (w) | ModRM:r/m (r) | NA | NA
B | Half Mem | ModRM:reg (w) | ModRM:r/m (r) | NA | NA

Description

Converts packed half precision (16-bit) floating-point values in the low-order bits of the source operand (the second operand) to packed single-precision floating-point values and writes the converted values into the destination operand (the first operand).

In case of a denormal operand, the correct normal result is returned. MXCSR.DAZ is ignored and is treated as if it were 0. No denormal exception is reported on MXCSR.

VEX.128 version: The source operand is an XMM register or 64-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding destination register are zeroed.

VEX.256 version: The source operand is an XMM register or 128-bit memory location. The destination operand is a YMM register. Bits (MAXVL-1:256) of the corresponding destination register are zeroed.

EVEX encoded versions: The source operand is a YMM/XMM/XMM (low 64 bits) register or a 256/128/64-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with writemask k1.

Figure 5-6 illustrates how data is converted from four packed half precision FP values (in 64 bits) to four single-precision FP values (in 128 bits).

[Figure 5-6. VCVTPH2PS (128-bit Version): the four half precision values VH3..VH0 in bits 63:0 of xmm2/mem64 are each converted to the four single-precision values VS3..VS0 in bits 127:0 of xmm1.]

Note: VEX.vvvv and EVEX.vvvv are reserved (must be 1111b).

Operation

vCvt_h2s(SRC1[15:0])
{
    RETURN Cvt_Half_Precision_To_Single_Precision(SRC1[15:0]);
}

VCVTPH2PS (EVEX encoded versions)
(KL, VL) = (4, 128), (8, 256), (16, 512)
FOR j := 0 TO KL-1
    i := j * 32
    k := j * 16
    IF k1[j] OR *no writemask*
        THEN DEST[i+31:i] := vCvt_h2s(SRC[k+15:k])
        ELSE
            IF *merging-masking* ; merging-masking
                THEN *DEST[i+31:i] remains unchanged*
                ELSE ; zeroing-masking
                    DEST[i+31:i] := 0
            FI
    FI;
ENDFOR
DEST[MAXVL-1:VL] := 0

VCVTPH2PS (VEX.256 encoded version)
DEST[31:0] := vCvt_h2s(SRC1[15:0]);
DEST[63:32] := vCvt_h2s(SRC1[31:16]);
DEST[95:64] := vCvt_h2s(SRC1[47:32]);
DEST[127:96] := vCvt_h2s(SRC1[63:48]);
DEST[159:128] := vCvt_h2s(SRC1[79:64]);
DEST[191:160] := vCvt_h2s(SRC1[95:80]);
DEST[223:192] := vCvt_h2s(SRC1[111:96]);
DEST[255:224] := vCvt_h2s(SRC1[127:112]);
DEST[MAXVL-1:256] := 0

VCVTPH2PS (VEX.128 encoded version)
DEST[31:0] := vCvt_h2s(SRC1[15:0]);
DEST[63:32] := vCvt_h2s(SRC1[31:16]);
DEST[95:64] := vCvt_h2s(SRC1[47:32]);
DEST[127:96] := vCvt_h2s(SRC1[63:48]);
DEST[MAXVL-1:128] := 0
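The manual does not spell out Cvt_Half_Precision_To_Single_Precision. As an illustration only, the following scalar C sketch models one binary16-to-binary32 widening; the function name half_to_single and the bit-level approach are assumptions of this page, not the SDM's definition, and this models the semantics, not the hardware. Note the conversion is exact (every binary16 value is representable in binary32), and denormal halves are normalized rather than flushed, matching the Description.

#include <stdint.h>
#include <string.h>

/* Model of one 16-bit half -> 32-bit single conversion.
   Covers normals, denormals (normalized on conversion),
   signed zeros, infinities, and NaNs. Hypothetical helper,
   not the SDM's pseudocode. */
static float half_to_single(uint16_t h)
{
    uint32_t sign = (uint32_t)(h >> 15) << 31;
    uint32_t exp  = (h >> 10) & 0x1F;   /* 5-bit biased exponent */
    uint32_t frac = h & 0x3FF;          /* 10-bit fraction       */
    uint32_t bits;

    if (exp == 0x1F) {                  /* Inf or NaN: widen the payload */
        bits = sign | 0x7F800000u | (frac << 13);
    } else if (exp != 0) {              /* normal: rebias 15 -> 127      */
        bits = sign | ((exp + 112) << 23) | (frac << 13);
    } else if (frac != 0) {             /* denormal half: normalize      */
        exp = 113;                      /* single exponent of 2^-14      */
        while (!(frac & 0x400)) {       /* shift until implicit bit set  */
            frac <<= 1;
            exp--;
        }
        frac &= 0x3FF;                  /* drop the implicit bit         */
        bits = sign | (exp << 23) | (frac << 13);
    } else {                            /* signed zero                   */
        bits = sign;
    }

    float f;
    memcpy(&f, &bits, sizeof f);        /* reinterpret bit pattern       */
    return f;
}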
Flags Affected

None.

Intel C/C++ Compiler Intrinsic Equivalent

VCVTPH2PS __m512 _mm512_cvtph_ps(__m256i a);
VCVTPH2PS __m512 _mm512_mask_cvtph_ps(__m512 s, __mmask16 k, __m256i a);
VCVTPH2PS __m512 _mm512_maskz_cvtph_ps(__mmask16 k, __m256i a);
VCVTPH2PS __m512 _mm512_cvt_roundph_ps(__m256i a, int sae);
VCVTPH2PS __m512 _mm512_mask_cvt_roundph_ps(__m512 s, __mmask16 k, __m256i a, int sae);
VCVTPH2PS __m512 _mm512_maskz_cvt_roundph_ps(__mmask16 k, __m256i a, int sae);
VCVTPH2PS __m256 _mm256_mask_cvtph_ps(__m256 s, __mmask8 k, __m128i a);
VCVTPH2PS __m256 _mm256_maskz_cvtph_ps(__mmask8 k, __m128i a);
VCVTPH2PS __m128 _mm_mask_cvtph_ps(__m128 s, __mmask8 k, __m128i a);
VCVTPH2PS __m128 _mm_maskz_cvtph_ps(__mmask8 k, __m128i a);
VCVTPH2PS __m128 _mm_cvtph_ps(__m128i m1);
VCVTPH2PS __m256 _mm256_cvtph_ps(__m128i m1);

(A usage sketch appears after the exception tables below.)

SIMD Floating-Point Exceptions

Invalid.

Other Exceptions

VEX-encoded instructions, see Table 2-26, "Type 11 Class Exception Conditions" (do not report #AC).
EVEX-encoded instructions, see Table 2-60, "Type E11 Class Exception Conditions".
Additionally:
#UD    If VEX.W = 1.
#UD    If VEX.vvvv != 1111B or EVEX.vvvv != 1111B.
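As a usage sketch (not part of the SDM text): the minimal C program below drives the VEX.128 form through the _mm_cvtph_ps intrinsic listed above. The test values and the compile flag are assumptions about a GCC/Clang-style toolchain (build with -mf16c, e.g. "cc -mf16c demo.c").

#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Four binary16 bit patterns: 1.0, -2.0, 0.5, 65504.0 (largest finite half). */
    uint16_t halves[4] = { 0x3C00, 0xC000, 0x3800, 0x7BFF };

    /* VCVTPH2PS xmm1, xmm2/m64: the four halves occupy the low 64 bits. */
    __m128i src = _mm_loadl_epi64((const __m128i *)halves);
    __m128  f4  = _mm_cvtph_ps(src);

    float out[4];
    _mm_storeu_ps(out, f4);
    printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);  /* 1 -2 0.5 65504 */
    return 0;
}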