<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Using float16 and bfloat16 in Intel® oneAPI DPC++/C++ Compiler</title>
    <link>https://community.intel.com/t5/Intel-oneAPI-DPC-C-Compiler/Using-float16-and-bfloat16/m-p/1747956#M4751</link>
    <description>&lt;P&gt;You can use&lt;/P&gt;&lt;PRE&gt;__bf16&lt;/PRE&gt;&lt;P&gt;for bfloat16.&amp;nbsp; As noted in the post above, the data type requires hardware support; otherwise it will be emulated with single-precision float operations.&amp;nbsp; The following generates code that does the addition in single precision and then converts the result to bfloat16:&lt;/P&gt;&lt;PRE&gt;$ cat test.cpp &lt;BR /&gt;__bf16 add(__bf16 x, __bf16 y) {&lt;BR /&gt;    return x + y;&lt;BR /&gt;}&lt;BR /&gt;&lt;BR /&gt;$ icpx -c -mavxneconvert test.cpp&lt;BR /&gt;&lt;BR /&gt;$ nm -C test.o&lt;BR /&gt;0000000000000000 T add(std::bfloat16_t, std::bfloat16_t)&lt;/PRE&gt;&lt;P&gt;Future Intel CPUs will support AVX10.2, which includes bfloat16 arithmetic instructions (add, sub, mul, FMA, div, sqrt, ...).&lt;/P&gt;</description>
    <pubDate>Thu, 14 May 2026 20:36:08 GMT</pubDate>
    <dc:creator>hpkfft</dc:creator>
    <dc:date>2026-05-14T20:36:08Z</dc:date>
    <item>
      <title>Using float16 and bfloat16</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-DPC-C-Compiler/Using-float16-and-bfloat16/m-p/1722130#M4603</link>
      <description>&lt;P&gt;Is it possible to use the float16 or bfloat16 data types with icx compiler?&lt;/P&gt;&lt;P&gt;I am getting the error:&lt;/P&gt;&lt;LI-CODE lang="cpp"&gt;&amp;lt;source&amp;gt;:7:6: error: no type named 'float16_t' in namespace 'std'
    7 | std::float16_t check_float16_conversion(std::float16_t a, std::float16_t b) {
      | ~~~~~^
&amp;lt;source&amp;gt;:7:46: error: no type named 'float16_t' in namespace 'std'
    7 | std::float16_t check_float16_conversion(std::float16_t a, std::float16_t b) {
      |                                         ~~~~~^
&amp;lt;source&amp;gt;:7:64: error: no type named 'float16_t' in namespace 'std'
    7 | std::float16_t check_float16_conversion(std::float16_t a, std::float16_t b) {
      |                                                           ~~~~~^
3 errors generated.&lt;/LI-CODE&gt;&lt;P&gt;This happens while compiling with icx 2025.2.1 and -std=c++23.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I guess it is not supported yet, but if so, how can I use float16 or bfloat16?&lt;/P&gt;</description>
      <pubDate>Wed, 15 Oct 2025 12:30:04 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-DPC-C-Compiler/Using-float16-and-bfloat16/m-p/1722130#M4603</guid>
      <dc:creator>ddavobsc</dc:creator>
      <dc:date>2025-10-15T12:30:04Z</dc:date>
    </item>
    <item>
      <title>Re: Using float16 and bfloat16</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-DPC-C-Compiler/Using-float16-and-bfloat16/m-p/1737575#M4686</link>
      <description>&lt;P&gt;You can use&lt;/P&gt;&lt;PRE&gt;_Float16&lt;/PRE&gt;&lt;P&gt;for IEEE binary16 (half precision).&lt;/P&gt;</description>
      <pubDate>Wed, 18 Feb 2026 03:50:37 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-DPC-C-Compiler/Using-float16-and-bfloat16/m-p/1737575#M4686</guid>
      <dc:creator>hpkfft</dc:creator>
      <dc:date>2026-02-18T03:50:37Z</dc:date>
    </item>
    <item>
      <title>Re: Using float16 and bfloat16</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-DPC-C-Compiler/Using-float16-and-bfloat16/m-p/1737835#M4687</link>
      <description>&lt;P&gt;Hi, the bfloat16 type is supported through a SYCL extension:&amp;nbsp;&lt;A href="https://github.com/intel/llvm/blob/sycl/sycl/doc/extensions/supported/sycl_ext_oneapi_bfloat16.asciidoc" target="_blank"&gt;https://github.com/intel/llvm/blob/sycl/sycl/doc/extensions/supported/sycl_ext_oneapi_bfloat16.asciidoc&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;There's a simple example there you can play with. The data type requires hardware support; otherwise it will be emulated with single-precision float operations.&lt;/P&gt;</description>
      <pubDate>Thu, 19 Feb 2026 22:47:00 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-DPC-C-Compiler/Using-float16-and-bfloat16/m-p/1737835#M4687</guid>
      <dc:creator>yzh_intel</dc:creator>
      <dc:date>2026-02-19T22:47:00Z</dc:date>
    </item>
    <item>
      <title>Re: Using float16 and bfloat16</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-DPC-C-Compiler/Using-float16-and-bfloat16/m-p/1747956#M4751</link>
      <description>&lt;P&gt;You can use&lt;/P&gt;&lt;PRE&gt;__bf16&lt;/PRE&gt;&lt;P&gt;for bfloat16.&amp;nbsp; As noted in the post above, the data type requires hardware support; otherwise it will be emulated with single-precision float operations.&amp;nbsp; The following generates code that does the addition in single precision and then converts the result to bfloat16:&lt;/P&gt;&lt;PRE&gt;$ cat test.cpp &lt;BR /&gt;__bf16 add(__bf16 x, __bf16 y) {&lt;BR /&gt;    return x + y;&lt;BR /&gt;}&lt;BR /&gt;&lt;BR /&gt;$ icpx -c -mavxneconvert test.cpp&lt;BR /&gt;&lt;BR /&gt;$ nm -C test.o&lt;BR /&gt;0000000000000000 T add(std::bfloat16_t, std::bfloat16_t)&lt;/PRE&gt;&lt;P&gt;Future Intel CPUs will support AVX10.2, which includes bfloat16 arithmetic instructions (add, sub, mul, FMA, div, sqrt, ...).&lt;/P&gt;</description>
      <pubDate>Thu, 14 May 2026 20:36:08 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-DPC-C-Compiler/Using-float16-and-bfloat16/m-p/1747956#M4751</guid>
      <dc:creator>hpkfft</dc:creator>
      <dc:date>2026-05-14T20:36:08Z</dc:date>
    </item>
  </channel>
</rss>

