<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic PCIe NUMA Affinity on Haswell &amp; Ivy Bridge in Software Tuning, Performance Optimization &amp; Platform Monitoring</title>
    <link>https://community.intel.com/t5/Software-Tuning-Performance/PCIe-NUMA-Affinity-on-Haswell-Ivy-Bridge/m-p/1037802#M4451</link>
    <description>&lt;P&gt;On Ivy Bridge and Haswell dual socket servers, does each NUMA node have access to its own PCIe slot?&amp;nbsp; Or does all PCIe traffic flow through one node?&lt;/P&gt;

&lt;P&gt;And what is the easiest way of telling which NUMA node is closest to the PCIe slot I'm using on Windows Server 2008 R2?&lt;/P&gt;</description>
    <pubDate>Thu, 23 Apr 2015 03:18:43 GMT</pubDate>
    <dc:creator>John_S_2</dc:creator>
    <dc:date>2015-04-23T03:18:43Z</dc:date>
    <item>
      <title>PCIe NUMA Affinity on Haswell &amp; Ivy Bridge</title>
      <link>https://community.intel.com/t5/Software-Tuning-Performance/PCIe-NUMA-Affinity-on-Haswell-Ivy-Bridge/m-p/1037802#M4451</link>
      <description>&lt;P&gt;On Ivy Bridge and Haswell dual socket servers, does each NUMA node have access to its own PCIe slot?&amp;nbsp; Or does all PCIe traffic flow through one node?&lt;/P&gt;

&lt;P&gt;And what is the easiest way of telling which NUMA node is closest to the PCIe slot I'm using on Windows Server 2008 R2?&lt;/P&gt;</description>
      <pubDate>Thu, 23 Apr 2015 03:18:43 GMT</pubDate>
      <guid>https://community.intel.com/t5/Software-Tuning-Performance/PCIe-NUMA-Affinity-on-Haswell-Ivy-Bridge/m-p/1037802#M4451</guid>
      <dc:creator>John_S_2</dc:creator>
      <dc:date>2015-04-23T03:18:43Z</dc:date>
    </item>
    <item>
      <title> </title>
      <link>https://community.intel.com/t5/Software-Tuning-Performance/PCIe-NUMA-Affinity-on-Haswell-Ivy-Bridge/m-p/1037803#M4452</link>
      <description>&lt;P&gt;Check the following thread: &lt;A href="https://software.intel.com/en-us/forums/topic/379378"&gt;https://software.intel.com/en-us/forums/topic/379378&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 23 Apr 2015 06:16:35 GMT</pubDate>
      <guid>https://community.intel.com/t5/Software-Tuning-Performance/PCIe-NUMA-Affinity-on-Haswell-Ivy-Bridge/m-p/1037803#M4452</guid>
      <dc:creator>Bernard</dc:creator>
      <dc:date>2015-04-23T06:16:35Z</dc:date>
    </item>
    <item>
      <title>Xeon E5-2xxx processors</title>
      <link>https://community.intel.com/t5/Software-Tuning-Performance/PCIe-NUMA-Affinity-on-Haswell-Ivy-Bridge/m-p/1037804#M4453</link>
      <description>&lt;P&gt;Xeon E5-2xxx processors (Sandy Bridge/v1, Ivy Bridge/v2, and Haswell/v3) have PCIe interfaces on each chip, so in a dual-socket server each socket drives its own PCIe lanes.&amp;nbsp; Whether all of these lanes are exposed as slots on the motherboard depends on the vendor.&amp;nbsp; Physically small systems typically expose fewer slots than physically large ones, and the motherboard vendor's documentation is the best place to look for which slot is attached to which processor socket.&lt;/P&gt;

&lt;P&gt;On Linux systems, the hwloc ("hardware locality") infrastructure queries the hardware to determine the physical layout of the various cores and PCIe devices.&amp;nbsp; On a dual-socket Xeon E5-2680 (Sandy Bridge) node, the "lstopo" command from hwloc returns something like:&lt;/P&gt;

&lt;BLOCKQUOTE&gt;
	&lt;P&gt;Machine (64GB)&lt;BR /&gt;
		&amp;nbsp; NUMANode L#0 (P#0 32GB) + Socket L#0 + L3 L#0 (20MB)&lt;BR /&gt;
		&amp;nbsp;&amp;nbsp;&amp;nbsp; L2 L#0 (256KB) + L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0 + PU L#0 (P#0)&lt;BR /&gt;
		&amp;nbsp;&amp;nbsp;&amp;nbsp; L2 L#1 (256KB) + L1d L#1 (32KB) + L1i L#1 (32KB) + Core L#1 + PU L#1 (P#2)&lt;BR /&gt;
		&amp;nbsp;&amp;nbsp;&amp;nbsp; L2 L#2 (256KB) + L1d L#2 (32KB) + L1i L#2 (32KB) + Core L#2 + PU L#2 (P#4)&lt;BR /&gt;
		&amp;nbsp;&amp;nbsp;&amp;nbsp; L2 L#3 (256KB) + L1d L#3 (32KB) + L1i L#3 (32KB) + Core L#3 + PU L#3 (P#6)&lt;BR /&gt;
		&amp;nbsp;&amp;nbsp;&amp;nbsp; L2 L#4 (256KB) + L1d L#4 (32KB) + L1i L#4 (32KB) + Core L#4 + PU L#4 (P#8)&lt;BR /&gt;
		&amp;nbsp;&amp;nbsp;&amp;nbsp; L2 L#5 (256KB) + L1d L#5 (32KB) + L1i L#5 (32KB) + Core L#5 + PU L#5 (P#10)&lt;BR /&gt;
		&amp;nbsp;&amp;nbsp;&amp;nbsp; L2 L#6 (256KB) + L1d L#6 (32KB) + L1i L#6 (32KB) + Core L#6 + PU L#6 (P#12)&lt;BR /&gt;
		&amp;nbsp;&amp;nbsp;&amp;nbsp; L2 L#7 (256KB) + L1d L#7 (32KB) + L1i L#7 (32KB) + Core L#7 + PU L#7 (P#14)&lt;BR /&gt;
		&amp;nbsp; NUMANode L#1 (P#1 32GB) + Socket L#1 + L3 L#1 (20MB)&lt;BR /&gt;
		&amp;nbsp;&amp;nbsp;&amp;nbsp; L2 L#8 (256KB) + L1d L#8 (32KB) + L1i L#8 (32KB) + Core L#8 + PU L#8 (P#1)&lt;BR /&gt;
		&amp;nbsp;&amp;nbsp;&amp;nbsp; L2 L#9 (256KB) + L1d L#9 (32KB) + L1i L#9 (32KB) + Core L#9 + PU L#9 (P#3)&lt;BR /&gt;
		&amp;nbsp;&amp;nbsp;&amp;nbsp; L2 L#10 (256KB) + L1d L#10 (32KB) + L1i L#10 (32KB) + Core L#10 + PU L#10 (P#5)&lt;BR /&gt;
		&amp;nbsp;&amp;nbsp;&amp;nbsp; L2 L#11 (256KB) + L1d L#11 (32KB) + L1i L#11 (32KB) + Core L#11 + PU L#11 (P#7)&lt;BR /&gt;
		&amp;nbsp;&amp;nbsp;&amp;nbsp; L2 L#12 (256KB) + L1d L#12 (32KB) + L1i L#12 (32KB) + Core L#12 + PU L#12 (P#9)&lt;BR /&gt;
		&amp;nbsp;&amp;nbsp;&amp;nbsp; L2 L#13 (256KB) + L1d L#13 (32KB) + L1i L#13 (32KB) + Core L#13 + PU L#13 (P#11)&lt;BR /&gt;
		&amp;nbsp;&amp;nbsp;&amp;nbsp; L2 L#14 (256KB) + L1d L#14 (32KB) + L1i L#14 (32KB) + Core L#14 + PU L#14 (P#13)&lt;BR /&gt;
		&amp;nbsp;&amp;nbsp;&amp;nbsp; L2 L#15 (256KB) + L1d L#15 (32KB) + L1i L#15 (32KB) + Core L#15 + PU L#15 (P#15)&lt;BR /&gt;
		&amp;nbsp; HostBridge L#0&lt;BR /&gt;
		&amp;nbsp;&amp;nbsp;&amp;nbsp; PCIBridge&lt;BR /&gt;
		&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; PCI 14e4:165f&lt;BR /&gt;
		&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Net L#0 "eth2"&lt;BR /&gt;
		&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; PCI 14e4:165f&lt;BR /&gt;
		&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Net L#1 "eth3"&lt;BR /&gt;
		&amp;nbsp;&amp;nbsp;&amp;nbsp; PCIBridge&lt;BR /&gt;
		&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; PCI 14e4:165f&lt;BR /&gt;
		&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Net L#2 "eth0"&lt;BR /&gt;
		&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; PCI 14e4:165f&lt;BR /&gt;
		&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Net L#3 "eth1"&lt;BR /&gt;
		&amp;nbsp;&amp;nbsp;&amp;nbsp; PCIBridge&lt;BR /&gt;
		&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; PCI 1000:005b&lt;BR /&gt;
		&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Block L#4 "sda"&lt;BR /&gt;
		&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Block L#5 "sdb"&lt;BR /&gt;
		&amp;nbsp;&amp;nbsp;&amp;nbsp; PCIBridge&lt;BR /&gt;
		&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; PCIBridge&lt;BR /&gt;
		&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; PCIBridge&lt;BR /&gt;
		&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; PCIBridge&lt;BR /&gt;
		&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; PCI 102b:0534&lt;BR /&gt;
		&amp;nbsp;&amp;nbsp;&amp;nbsp; PCI 8086:1d02&lt;BR /&gt;
		&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Block L#6 "sr0"&lt;BR /&gt;
		&amp;nbsp; HostBridge L#8&lt;BR /&gt;
		&amp;nbsp;&amp;nbsp;&amp;nbsp; PCIBridge&lt;BR /&gt;
		&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; PCI 15b3:1003&lt;BR /&gt;
		&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Net L#7 "eth4"&lt;BR /&gt;
		&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Net L#8 "ib0"&lt;BR /&gt;
		&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; OpenFabrics L#9 "mlx4_0"&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;

&lt;P&gt;The first 19 lines show which cores belong to each of the two sockets.&amp;nbsp; The lines under "HostBridge L#0" list the PCIe devices attached to socket 0 (four Ethernet interfaces, two disks, etc.), while the lines under "HostBridge L#8" list the PCIe devices attached to socket 1 (a combined InfiniBand/Ethernet adapter).&lt;/P&gt;
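
&lt;P&gt;If you only need the affinity of a single device, the Linux PCI sysfs tree exposes the same information directly.&amp;nbsp; A minimal sketch is shown below; the address 0000:02:00.0 is only a placeholder, so substitute the address that "lspci -D" reports for your card, and note that "numa_node" reads -1 when the BIOS does not supply proximity information.&amp;nbsp; The output is what one would expect for a device attached to socket 0 of the system above:&lt;/P&gt;

&lt;BLOCKQUOTE&gt;
	&lt;P&gt;$ cat /sys/bus/pci/devices/0000:02:00.0/numa_node&lt;BR /&gt;
		0&lt;BR /&gt;
		$ cat /sys/bus/pci/devices/0000:02:00.0/local_cpulist&lt;BR /&gt;
		0,2,4,6,8,10,12,14&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;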

&lt;P&gt;Windows Server 2008 might have similar information in the system management screens, but I don't have administrative access to any Windows systems to check.&lt;/P&gt;</description>
      <pubDate>Thu, 23 Apr 2015 19:17:34 GMT</pubDate>
      <guid>https://community.intel.com/t5/Software-Tuning-Performance/PCIe-NUMA-Affinity-on-Haswell-Ivy-Bridge/m-p/1037804#M4453</guid>
      <dc:creator>McCalpinJohn</dc:creator>
      <dc:date>2015-04-23T19:17:34Z</dc:date>
    </item>
  </channel>
</rss>

