Moderator

Time to test this year's Intel Fortran 2020 Beta

Intel® Fortran Composer XE 2020 Beta

We are pleased to announce that Intel® Parallel Studio XE 2020 Technical Preview Composer Edition for Fortran is ready for testing. Packages are available for Linux*, Windows*, and macOS*.

These packages contain a Pre-Release version of Intel® Fortran version 19.1.  That's right, we have a "dot-1" minor release this year.  But for Fortran it's anything but minor!  This is one of our most ambitious releases to date.  We've added many Fortran 2018 features.  Get the DETAILS HERE.

When you're ready to join our test program, GO HERE to learn more and sign up!

39 Replies

Honored Contributor I

Ronald W Green (Blackbelt) wrote:

.. We've added many Fortran 2018 features ..

Mega kudos to the Intel team on an outstanding announcement: great news on an exciting product launch, and wonderful "What's New" notes as well.

Keep up the great work, 

Valued Contributor I

Yep, interested in the progress. And hoping for fewer regressions this time ;)

 

New Contributor III

The support for the Fortran coarray collectives is great. Please also consider making coarray Fortran interoperable with non-Fortran MPI code. I don't know how much work that would be for Intel, but it is a much-desired feature, since much software is mixed-language these days. Thanks for your great work @Intel and the excellent Fortran compiler.
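For readers unfamiliar with them, the Fortran 2018 collectives mentioned above can be sketched with a minimal (hypothetical) example; the program and variable names are illustrative, and it assumes compilation with the -coarray (Linux) or /Qcoarray (Windows) option:

```fortran
program collectives_demo
  implicit none
  integer :: n

  n = this_image()   ! each image contributes its own image number

  ! Fortran 2018 collective: sums n across all images;
  ! with no RESULT_IMAGE argument, every image receives the result
  call co_sum(n)

  ! every image now holds 1 + 2 + ... + num_images()
  if (this_image() == 1) print *, 'sum of image numbers:', n
end program collectives_demo
```

Unlike hand-rolled coarray reductions, the collectives (co_sum, co_min, co_max, co_broadcast, co_reduce) need no explicit SYNC ALL around the operation.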

Beginner

My screen says something different: the current coarray runtime of the ifort 2020 Beta appears faulty. Any coarray program, even a simple one, gives a 'BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES'. Coarray program execution itself works, but the start and/or end of execution is delayed.
Another problem: using OOP (inheritance) with my parallel codes did not work (the same codes run successfully with gfortran/OpenCoarrays). After some changes, the codes now run with the ifort 2020 Beta. I am still investigating, but there seems to be an issue, under certain circumstances, with names (of modules and procedures) longer than 31 characters (up to 63 characters have been allowed since Fortran 2003). I will try to reproduce the issue with a simpler program and will then file a ticket via the Intel Online Service Center.

Regards

Moderator

Michael, are you using the x64 configuration? You do know that 32-bit coarrays are not working, don't you? We're looking to remove the 32-bit CAF capability altogether, since our MPI does not support 32-bit executables.

Moderator

Mixing CAF and MPI: composability - yes, this has been discussed. One important issue is the MPI_INIT() call: should C or Fortran make the call, given that you can only call it once? Currently, Fortran has to be the main() and it calls MPI_INIT as part of the startup of a CAF program.

Since Intel MPI is based on MPICH, and Intel MPI sits underneath CAF, you can actually use MPI trace tools like Vampir or Intel Trace to look at communication patterns! The MPI debuggers should also work. We don't officially support this, BUT ... it should work.
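The composability point above can be illustrated with a hedged sketch of the (unsupported) pattern: because the CAF startup code has already called MPI_INIT, a CAF program can call other MPI routines directly, but must not initialize or finalize MPI itself. The program name is hypothetical, and it assumes an Intel MPI `mpi` module is visible at compile time:

```fortran
program caf_plus_mpi
  use mpi            ! assumes the Intel MPI Fortran module is available
  implicit none
  integer :: rank, ierr

  ! NOTE: no MPI_INIT here -- as described above, the CAF runtime has
  ! already initialized MPI during startup, and MPI_INIT may only be
  ! called once per process.
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

  print *, 'image', this_image(), 'is MPI rank', rank

  ! Likewise no MPI_FINALIZE: the CAF runtime shuts MPI down itself.
end program caf_plus_mpi
```

This is a sketch of an unsupported configuration, not a documented Intel API; behavior may change between releases.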

 

Beginner

Hi Ronald,
yes, I'm using the x64 configuration with Ubuntu 16.04 on a laptop. Coarray program execution does run successfully, but the (faulty) coarray runtime always ends like this:

===================================================================================
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   RANK 4 PID 2988 RUNNING AT ms-P6622
=   KILLED BY SIGNAL: 9 (Killed)
===================================================================================

===================================================================================
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   RANK 5 PID 2989 RUNNING AT ms-P6622
=   KILLED BY SIGNAL: 9 (Killed)
===================================================================================

More severe is a heavy delay after this output before execution terminates. (I am using SYNC ALL and ERROR STOP as the last statements in my coarray program to work around this, so that execution terminates immediately.) Isn't there an issue with the Intel MPI that could be the reason for this?
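The workaround described above can be sketched as follows (a hypothetical minimal program, not the poster's actual code): force all images to a common point, then abort with ERROR STOP instead of waiting on the delayed normal coarray shutdown.

```fortran
program caf_shutdown_workaround
  implicit none

  ! ... actual coarray work would go here ...

  if (this_image() == 1) print *, 'all work finished'

  sync all    ! make sure every image has completed its work

  ! ERROR STOP initiates error termination on all images immediately,
  ! skipping the (delayed) normal runtime shutdown described above.
  error stop 'forced immediate termination'
end program caf_shutdown_workaround
```

Note the trade-off: ERROR STOP signals error termination (nonzero exit status), so this is a diagnostic workaround rather than a clean exit.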

 

The other problem is not related or limited to coarrays, but could be caused by long names for modules and procedures. I am trying to reproduce the failure with a simple example, but the example below does not currently fail to execute. Nevertheless, this is what I currently have:

program main
!  use Module_Name_Does_Consist_Of_More_Than_31_Characters
  use Wants_To_Inherit_From_The_Module_With_More_Than_31_Characters
  implicit none
!  type(Type_Name_Does_Consist_Of_More_Than_31_Characters) :: TestObject
  type(Wants_To_Inherit_From_The_Type_With_More_Than_31_Characters) :: TestObject
!
  call TestObject % Procedure_Name_Does_Consist_Of_More_Than_31_Characters_external
!
end program main


module Module_Name_Does_Consist_Of_More_Than_31_Characters
!
implicit none
private
!
type, public :: Type_Name_Does_Consist_Of_More_Than_31_Characters
  private
contains
  private
  procedure, public :: Procedure_Name_Does_Consist_Of_More_Than_31_Characters_external => Procedure_Name_Does_Consist_Of_More_Than_31_Characters
end type Type_Name_Does_Consist_Of_More_Than_31_Characters
!
contains
!
subroutine Procedure_Name_Does_Consist_Of_More_Than_31_Characters (Object)
  class (Type_Name_Does_Consist_Of_More_Than_31_Characters) :: Object
  !
  write(*,*) 'Output from Procedure_Name_Does_Consist_Of_More_Than_31_Characters'
end subroutine Procedure_Name_Does_Consist_Of_More_Than_31_Characters
!
end module Module_Name_Does_Consist_Of_More_Than_31_Characters


module Wants_To_Inherit_From_The_Module_With_More_Than_31_Characters
use Module_Name_Does_Consist_Of_More_Than_31_Characters
!
implicit none
private
!
type, extends(Type_Name_Does_Consist_Of_More_Than_31_Characters), public :: Wants_To_Inherit_From_The_Type_With_More_Than_31_Characters
  private
contains
  private
end type Wants_To_Inherit_From_The_Type_With_More_Than_31_Characters
!
contains
!
end module Wants_To_Inherit_From_The_Module_With_More_Than_31_Characters

 

Compiling this with ifort Module_Name_Does_Consist_Of_More_Than_31_Characters.f90 Wants_To_Inherit_From_The_Module_With_More_Than_31_Characters.f90 Main.f90 -o a.out gives the warning message: warning #5462: Global name too long, shortened from: module_name_does_consist_of_more_than_31_characters_mp_PROCEDURE_NAME_DOES_CONSIST_OF_MORE_THAN_31_CHARACTERS to: onsist_of_more_than_31_characters_mp_PROCEDURE_NAME_DOES_CONSIST_OF_MORE_THAN_31_CHARACTERS

Therefore my question: is the above code consistent with the Fortran standard's naming rules, and if so, why does ifort shorten the global name?

My original program (coming from gfortran/OpenCoarrays) hung with ifort. After shortening the names of the modules (and file names), and nothing else, the program executes successfully with ifort. That is why I suspect the above warning, and ifort's shortening of the global names, to be the reason for the failure. The warning message can be disabled with the option -diag-disable 5462, but that does not solve the underlying problem at all.

I will do further testing in the next few days.

Cheers

Beginner

My original program (coming from gfortran/OpenCoarrays) hung with ifort. After shortening the names of the modules (and file names), and nothing else, the program executes successfully with ifort.

Additionally, I also shortened the names of the derived TYPEs.

Honored Contributor I

Ronald W Green (Blackbelt) wrote:

Michael, are you using x64 Configuration?  You do know that 32bit Coarrays are not working, don't you?  We're looking to remove 32bit CAF capability all together since our MPI does not support 32bit executables.

Ron,

Please see this thread over at the Windows forum and Quote #12 therein where Steve Lionel provides an example use of coarrays in Fortran which is far better than the Intel CAF "Hello World!" example.

Is it possible for the Intel Fortran team to make Steve's example the first test case used to certify an Intel Fortran compiler version for a BETA or official release?!! :-))

I ask for two reasons:

  1. the issue mentioned by Michael S. is easily reproducible with Steve's example (please see below), and
  2. each release of the Intel Fortran compiler appears to present some basic issue with MPI and coarrays, which makes many wonder about the QA/validation process used by the Intel Fortran team.

The Intel Fortran team could do much better by catching these issues with MPI and coarrays well before end users ever notice them.

C:\Temp>type mcpi_coarray_final.f90
!==============================================================
!
! SAMPLE SOURCE CODE - SUBJECT TO THE TERMS OF SAMPLE CODE LICENSE AGREEMENT,
! http://software.intel.com/en-us/articles/intel-sample-source-code-license-agreement/
!
! Copyright 2016 Intel Corporation
!
! THIS FILE IS PROVIDED "AS IS" WITH NO WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT
! NOT LIMITED TO ANY IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
! PURPOSE, NON-INFRINGEMENT OF INTELLECTUAL PROPERTY RIGHTS.
!
! =============================================================
!
! Part of the Coarray Tutorial. For information, please read
! Tutorial: Using Fortran Coarrays
! Getting Started Tutorials document

program mcpi

! This program demonstrates using Fortran coarrays to implement the classic
! method of computing the mathematical value pi using a Monte Carlo technique.
! A good explanation of this method can be found at:
! http://www.mathcs.emory.edu/~cheung/Courses/170/Syllabus/07/compute-pi.html
!
! Compiler options: /Qcoarray
!                   -coarray
!
! Note for Visual Studio users - this source is excluded from building in the
! tutorial project. If you wish to change that, right click on the file in
! Solution Explorer and select Properties. Then go to Configuration Properties >
! General. Make sure that All Configurations and All Platforms are selected, then
! change Exclude File From Build to No. Be sure to remove or exclude the sequential
! source file from the build.

implicit none

! Declare kind values for large integers, single and double precision
integer, parameter :: K_BIGINT = selected_int_kind(15)
integer, parameter :: K_DOUBLE = selected_real_kind(15,300)

! Number of trials per image. The bigger this is, the better the result
! This value must be evenly divisible by the number of images.
integer(K_BIGINT), parameter :: num_trials = 1800000000_K_BIGINT

! Actual value of PI to 18 digits for comparison
real(K_DOUBLE), parameter :: actual_pi = 3.141592653589793238_K_DOUBLE

! Declare scalar coarray that will exist on each image
integer(K_BIGINT) :: total[*]    ! Per-image subtotal

! Local variables
real(K_DOUBLE) :: x,y
real(K_DOUBLE) :: computed_pi
integer :: i
integer(K_BIGINT) :: bigi
integer(K_BIGINT) :: clock_start,clock_end,clock_rate
integer, allocatable :: seed_array(:)
integer :: seed_size

! Image 1 initialization
if (THIS_IMAGE() == 1) then
    ! Make sure that num_trials is divisible by the number of images
    if (MOD(num_trials,INT(NUM_IMAGES(),K_BIGINT)) /= 0_K_BIGINT) &
        error stop "Number of trials not evenly divisible by number of images!"
    print '(A,I0,A,I0,A)', "Computing pi using ",num_trials," trials across ",NUM_IMAGES()," images"
    call SYSTEM_CLOCK(clock_start)
end if

! Set the initial random number seed to an unpredictable value, with a different
! sequence on each image. The Fortran 2015 standard specifies a new RANDOM_INIT
! intrinsic subroutine that does this, but Intel Fortran doesn't yet support it.
!
! What we do here is first call RANDOM_SEED with no arguments. The standard doesn't
! specify that behavior, but Intel Fortran sets the seed to a value based on the
! system clock. Then, because it's likely that multiple threads get the same seed,
! we modify the seed based on the image number.
call RANDOM_SEED() ! Initialize based on time

! Alter the seed values per-image
call RANDOM_SEED(seed_size) ! Get size of seed array
allocate (seed_array(seed_size))
call RANDOM_SEED(GET=seed_array) ! Get the current seed
seed_array(1) = seed_array(1) + (37*THIS_IMAGE()) ! Ignore potential overflow
call RANDOM_SEED(PUT=seed_array) ! Set the new seed

! Initialize our subtotal
total = 0_K_BIGINT

! Run the trials, with each image doing its share of the trials.
!
! Get a random X and Y and see if the position
! is within a circle of radius 1. If it is, add one to the subtotal
do bigi=1_K_BIGINT,num_trials/int(NUM_IMAGES(),K_BIGINT)
    call RANDOM_NUMBER(x)
    call RANDOM_NUMBER(y)
    if ((x*x)+(y*y) <= 1.0_K_DOUBLE) total = total + 1_K_BIGINT
end do

! Wait for everyone
sync all

! Image 1 end processing
if (this_image() == 1) then
    ! Sum all of the images' subtotals
    do i=2,num_images()
        total = total + total[i]
    end do

    ! total/num_trials is an approximation of pi/4
    computed_pi = 4.0_K_DOUBLE*(REAL(total,K_DOUBLE)/REAL(num_trials,K_DOUBLE))
    print '(A,G0.8,A,G0.3)', "Computed value of pi is ", computed_pi, &
        ", Relative Error: ",ABS((computed_pi-actual_pi)/actual_pi)

    ! Show elapsed time
    call SYSTEM_CLOCK(clock_end,clock_rate)
    print '(A,G0.3,A)', "Elapsed time is ", &
        REAL(clock_end-clock_start)/REAL(clock_rate)," seconds"
end if

end program mcpi

C:\Temp>ifort /standard-semantics /Qcoarray /Qcoarray-num-images=16 mcpi_coarray_final.f90
Intel(R) Visual Fortran Intel(R) 64 Compiler for applications running on Intel(R) 64, Version 19.1.0.056 Pre-Release Beta Build 20190321
Copyright (C) 1985-2019 Intel Corporation.  All rights reserved.

ifort: NOTE: The Beta evaluation period for this product ends on 9-oct-2019 UTC.
Microsoft (R) Incremental Linker Version 14.16.27031.1
Copyright (C) Microsoft Corporation.  All rights reserved.

-out:mcpi_coarray_final.exe
-subsystem:console
mcpi_coarray_final.obj

C:\Temp>mcpi_coarray_final.exe
Computing pi using 1800000000 trials across 16 images
Computed value of pi is 3.1415978, Relative Error: .163E-05
Elapsed time is 11.2 seconds

===================================================================================
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   RANK 4 PID 13976 RUNNING AT xxx
=   KILLED BY SIGNAL: -1 ()
===================================================================================

C:\Temp>mcpi_coarray_final.exe
Computing pi using 1800000000 trials across 16 images
Computed value of pi is 3.1415973, Relative Error: .149E-05
Elapsed time is 11.8 seconds

===================================================================================
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   RANK 4 PID 15324 RUNNING AT xxx
=   KILLED BY SIGNAL: -1 ()
===================================================================================

===================================================================================
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   RANK 7 PID 12612 RUNNING AT xxx
=   KILLED BY SIGNAL: -1 ()
===================================================================================

===================================================================================
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   RANK 11 PID 15116 RUNNING AT xxx
=   KILLED BY SIGNAL: -1 ()
===================================================================================

C:\Temp>

    Moderator

    FortranFan wrote:

     

    Ron,

    Please see this thread over at the Windows forum and Quote #12 therein where Steve Lionel provides an example use of coarrays in Fortran which is far better than the Intel CAF "Hello World!" example.

    Is it possible for Intel Fortran team to make Steve's example as the first test case that is used to certify the Intel Fortran compiler version for BETA or official release?!! :-))

    I ask because of 2 reasons:

    1. the issue mentioned by Michael S. is easily reproducible with Steve's example (please see below) and
    2. each release of Intel Fortran compiler appears to present some basic issue or other with MPI and Intel Fortran which surfaces when coarrays are employed in Fortran code and this makes many wonder about the QA/validation process used by Intel Fortran team.

    Intel Fortran team can do much better by catching the issues with MPI and coarrays well before end users ever notice them.

      

     

    First of all, that example IS used for testing. 

    Second, I just tested it (again) with the 19.1 Beta and it passed.

    Computing pi using 1800000000 trials across 16 images
    Computed value of pi is 3.1415939, Relative Error: .407E-06
    Elapsed time is 3.39 seconds
    Press any key to continue . . .

    How many cores are on your system?

    I have tested  on Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz, 2295 Mhz, 18 Core(s), 36 Logical Processor(s)

     

    Honored Contributor I

    Devorah H. (Intel) wrote:

    .. How many cores are on your system?

    I have tested  on Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz, 2295 Mhz, 18 Core(s), 36 Logical Processor(s)

     

    @Devorah,

    The system I tested has 4 cores. It appears the "BAD TERMINATION .." issue occurs at a rather high frequency when the /Qcoarray-num-images=n option specifies a value of n greater than or equal to the number of cores.

    I suggest you retry the test on somewhat poorer hardware, e.g. 4 cores, like many users have on their enterprise-issued, low-cost, rather outdated workstations!

    Moderator

    1>------ Rebuild All started: Project: coarray_samples, Configuration: Debug x64 ------
    1>Deleting intermediate files and output files for project 'coarray_samples', configuration 'Debug|x64'.
    1>Compiling with Intel(R) Visual Fortran Compiler 19.1.0.049 [Intel(R) 64]...
    1>mcpi_coarray_final.f90
    1>Linking...
    1>Embedding manifest...
    1>
    1>Build log written to  "file://C:\Users\ ...coarray_samples\msvs\x64\Debug\BuildLog.htm"
    1>coarray_samples - 0 error(s), 0 warning(s)
    ========== Rebuild All: 1 succeeded, 0 failed, 0 skipped ==========
    
    Computing pi using 1800000000 trials across 16 images
    Computed value of pi is 3.1415991, Relative Error: .206E-05
    Elapsed time is 34.5 seconds
    Press any key to continue . . .
    
    Intel(R) Core(TM) i5-6300U CPU @ 2.40GHz, 2496 Mhz, 2 Core(s), 4 Logical Processor(s)
    

     

    I still can't reproduce this issue. What is the version of Intel MPI installed on the system? Try the latest 2019 Update 4 MPI.


    Nice.

    It still seems the first tech preview comes with the MKL, IPP, and TBB 2019 versions.

    Are we getting the new MKL, TBB, and IPP 2020 betas integrated in the next Composer 2020 betas?

    I'm expecting the MKL & IPP 2020 betas to bring better performance, and even more optimized libraries for Ice Lake.

    Thanks.

    Honored Contributor I

    Devorah H. (Intel) wrote:

    .. I still can't reproduce this issue. What is the version of Intel MPI installed on the system? Try the latest 2019 Update 4 MPI

    Devorah,

    See below re: the MPI version. I don't know what you mean by "Try the latest 2019 Update 4 MPI". This workstation has the Intel Fortran 19.0 compiler installed, from the official release up through Update 4. The admin then installed the 19.1 version, i.e., the 2020 BETA preview edition, following the installation of 19.0 Update 4 late last week. The test below is performed using the x64 script for the 2020 BETA preview version, and the Intel MPI is whatever comes with that script.

    C:\Temp>mpiexec -V
    Intel(R) MPI Library for Windows* OS, Version 2019.0.3 Build 20190214
    Copyright 2003-2019, Intel Corporation.
    
    C:\Temp>ifort /standard-semantics /Qcoarray /Qcoarray-num-images=16 mcpi_coarray_final.f90
    Intel(R) Visual Fortran Intel(R) 64 Compiler for applications running on Intel(R) 64, Version 19.1.0.056 Pre-Release Beta Build 20190321
    Copyright (C) 1985-2019 Intel Corporation.  All rights reserved.
    
    Microsoft (R) Incremental Linker Version 14.16.27031.1
    Copyright (C) Microsoft Corporation.  All rights reserved.
    
    -out:mcpi_coarray_final.exe
    -subsystem:console
    mcpi_coarray_final.obj
    
    C:\Temp>mcpi_coarray_final.exe
    Computing pi using 1800000000 trials across 16 images
    Computed value of pi is 3.1415986, Relative Error: .189E-05
    Elapsed time is 12.6 seconds
    
    ===================================================================================
    =   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
    =   RANK 11 PID 11840 RUNNING AT xxx
    =   KILLED BY SIGNAL: -1 ()
    ===================================================================================
    
    C:\Temp>

    Note that the "BAD TERMINATION .." message pops up repeatedly in this environment, as shown above. I don't know what else I can do to illustrate the issue.

    Beginner

    Let me add:

    For testing, I am using a freshly installed Linux Ubuntu 16.04 on a laptop (2 physical cores, 4 logical cores through hyperthreading). The "BAD TERMINATION" messages appear only if I oversubscribe the logical cores (i.e. num_images() > 4).

    On the other hand, the heavy delay after coarray program execution has finished, but before execution actually terminates, always occurs, even without oversubscribing the physical or logical cores (num_images() <= 4).

    Moderator

    Michael S. wrote:

    Let me add:

    For testing, I am using a freshly installed Linux Ubuntu 16.04 on a laptop (2 physical cores, 4 logical cores through hyperthreading). The "BAD TERMINATION" messages appear only if I oversubscribe the logical cores (i.e. num_images() > 4).

    On the other hand, the heavy delay after coarray program execution has finished, but before execution actually terminates, always occurs, even without oversubscribing the physical or logical cores (num_images() <= 4).

    Thank you for the information. I will test your issue on Linux and let you know the results.

    Moderator

    FortranFan wrote:

    Note the "BAD TERMINATION .." message pops repeatedly in this environment, as shown above.  I don't know what else I can do to illustrate the issue.

    Thank you for testing it again. I will talk to the MPI team and ask them why the "BAD TERMINATION" message comes up on some systems.

    No luck reproducing this in a similar environment:

    >mpiexec -V
    Intel(R) MPI Library for Windows* OS, Version 2019.0.3 Build 20190214
    Copyright 2003-2019, Intel Corporation.
    
    >ifort /standard-semantics /Qcoarray /Qcoarray-num-images=16 mcpi_coarray_final.f90
    Intel(R) Visual Fortran Intel(R) 64 Compiler for applications running on Intel(R) 64, Version 19.1.0.049 Pre-Release Beta Build 20190321
    Copyright (C) 1985-2019 Intel Corporation.  All rights reserved.
    
    ifort: NOTE: The Beta evaluation period for this product ends on 9-oct-2019 UTC.
    Microsoft (R) Incremental Linker Version 14.16.27030.1
    Copyright (C) Microsoft Corporation.  All rights reserved.
    
    -out:mcpi_coarray_final.exe
    -subsystem:console
    mcpi_coarray_final.obj
    
    >mcpi_coarray_final.exe
    Computing pi using 1800000000 trials across 16 images
    Computed value of pi is 3.1415665, Relative Error: .833E-05
    Elapsed time is 28.9 seconds
    
    >

     

    Beginner

    Answer to your message:

    Hi Devorah,
    these are two distinct issues:

    1. The BAD TERMINATION error does not require any coarray syntax: just take a simple Fortran 'Hello World', compile it as a coarray program, and oversubscribe the logical cores. The MPI version was installed with the 2020 Beta installer on Ubuntu. The BAD TERMINATION error can easily be reproduced by others, so I am sure others will report it as well.

    2. The issue with the SHORTENED NAMES is trickier, and I still have to investigate it. At this time it is just a guess, but it could be that ifort's automatic shortening of global names corrupts the data transfer channels through (atomic) coarrays. That is what I will test next. I will need a few days to develop a simple test case and will then inform you, or directly file a ticket (with the source code of the test case).
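    For readers unfamiliar with the "(atomic) coarrays" mentioned above, the style of data transfer in question can be sketched with a minimal hypothetical example (illustrative names, not the poster's code): one image atomically publishes a value into another image's coarray, and the receiver spin-waits on it.

    ```fortran
    program atomic_flag_demo
      use iso_fortran_env, only: atomic_int_kind
      implicit none
      integer(atomic_int_kind) :: flag[*]   ! coarray used as a cross-image flag
      integer(atomic_int_kind) :: val

      flag = 0
      sync all   ! ensure every image has initialized its flag

      if (this_image() == 1) then
        ! image 1 atomically writes into image 2's copy of the coarray
        if (num_images() > 1) call atomic_define(flag[2], 42_atomic_int_kind)
      else if (this_image() == 2) then
        ! image 2 spins, atomically reading its own flag until the value arrives
        do
          call atomic_ref(val, flag)
          if (val /= 0) exit
        end do
        print *, 'image 2 received', val
      end if
    end program atomic_flag_demo
    ```

    The atomic subroutines bypass normal image-synchronization ordering, which is why a runtime-level corruption of the underlying transfer channel would show up in exactly this kind of code.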

    Regards
