Intel® MPI Library

Intel® Cluster Studio XE 2013 is now available for download!

Gergana_S_Intel
Employee

The Intel® Cluster Studio XE 2013 for Linux* and Windows* combines all Intel® Parallel Studio XE and Intel® Cluster Tools into a single package. This multi-component software toolkit contains the core libraries and tools to efficiently develop, optimize, run, and distribute parallel applications for clusters with Intel processors. This package is for cluster users who develop on and build for IA-32 and Intel® 64 architectures on Linux* and Windows*, as well as customers running on the Intel® Xeon Phi™ coprocessor on Linux*. It contains:

  • Intel® C++ Composer XE 2013 Update 1 - includes Intel® Integrated Performance Primitives (Intel® IPP), Intel® Threading Building Blocks (Intel® TBB), and Intel® Math Kernel Library (Intel® MKL)
  • Intel® Fortran Composer XE 2013 Update 1 - includes Intel® Math Kernel Library (Intel® MKL)
  • Intel® MPI Library 4.1
  • Intel® Trace Analyzer and Collector 8.1
  • Intel® MPI Benchmarks 3.2 Update 4
  • Intel® Advisor XE 2013 Update 1
  • Intel® Inspector XE 2013 Update 2
  • Intel® VTune™ Amplifier XE 2013 Update 2
  • Documentation

New in this release:

  • All components updated to current versions
  • Intel® Advisor XE is a new component included in this product
  • Development of applications running on an Intel® Many Integrated Core architecture (Intel® MIC architecture) coprocessor (Intel® Xeon Phi™ product family) is now supported
  • Brand-new HTML documentation format

Refer to the Intel® Cluster Studio XE 2013 Release Notes for more details.

If you have an existing license for the Intel® Cluster Studio XE product, you can download this latest version at the Intel® Registration Center.

If you'd like to evaluate the product, you can register for a free 30-day evaluation license at the Intel® Cluster Studio XE webpage.
