Intel® oneAPI HPC Toolkit

maximum file size issues (bug??).


Hi, all.
Just after updating to Intel Parallel Studio XE 2017, I've run into something weird.
When I copy a file from A to B and A is approximately 6 GB, B comes out as only 2 GB (a 4-byte integer limit?). Another test gives the same result: MPI-IO cannot write a file larger than 2 GB. The same code works fine with Intel Parallel Studio XE 2016, and there is no problem if I switch mpif90/mpirun to OpenMPI's. So I suspect a bug in the Intel MPI Library shipped with Intel Parallel Studio XE 2017. The relevant part of my code:

   ! disp00 must be integer(kind=mpi_offset_kind); a default 4-byte integer
   ! overflows for byte offsets beyond 2 GB (2**31 bytes).
   integer (kind=mpi_offset_kind) :: disp00
   integer :: stat00(mpi_status_size), stat01(mpi_status_size)

   call su00%init (ns=sufile%ns)
   call para_range  (jsta, jskp, jend, 1, 1, sufile%ntrcr, nprocs, myrank)
   call mpi_file_open (mpi_comm_world, trim(sufile%file)//".su", mpi_mode_rdonly, mpi_info_null, file00, ierr00)
   call mpi_file_open (mpi_comm_world, trim(sufile%file), mpi_mode_create+mpi_mode_wronly, mpi_info_null, file01, ierr01)
   call mpi_barrier   (mpi_comm_world, ierr)
   do itrcr = jsta, jend, jskp
      su00%dum4 = 0.0E+0
      ! 4-byte words per trace: 60-word SU trace header plus ns samples
      disp00 = 4_mpi_offset_kind*(60+sufile%ns)*(itrcr-1)
      call mpi_file_read_at  (file00, disp00, su00%dum4, 60+sufile%ns, mpi_integer4, stat00, ierr00)
      call mpi_file_write_at (file01, disp00, su00%dum4, 60+sufile%ns, mpi_integer4, stat01, ierr01)
   end do
   call mpi_barrier    (mpi_comm_world, ierr)
   call mpi_file_close (file00, ierr00)
   call mpi_file_close (file01, ierr01)
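
For reference, here is a stripped-down test along the same lines (a minimal sketch with a hypothetical file name, the SU-specific pieces removed) that should isolate whether mpi_file_write_at itself truncates past 2 GB: each rank writes one 1 GiB block of 4-byte integers at a 64-bit offset, so four ranks should produce a 4 GiB file.

   program big_write
      use mpi
      implicit none
      integer, parameter :: nwords = 268435456   ! 2**28 4-byte words = 1 GiB per rank
      integer :: fh, ierr, myrank, stat(mpi_status_size)
      integer (kind=mpi_offset_kind) :: disp
      integer (kind=4), allocatable :: buf(:)

      call mpi_init (ierr)
      call mpi_comm_rank (mpi_comm_world, myrank, ierr)
      allocate (buf(nwords))
      buf = myrank
      ! compute the byte offset in mpi_offset_kind, never in a default integer
      disp = 4_mpi_offset_kind * nwords * myrank
      call mpi_file_open (mpi_comm_world, "bigfile.dat", mpi_mode_create+mpi_mode_wronly, mpi_info_null, fh, ierr)
      call mpi_file_write_at (fh, disp, buf, nwords, mpi_integer4, stat, ierr)
      call mpi_file_close (fh, ierr)
      call mpi_finalize (ierr)
   end program big_write

Running this with, e.g., mpirun -np 4 should leave a 4 GiB bigfile.dat; if the file stops at 2 GiB, the truncation happens inside the library rather than in my SU code.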



U Geun



Hi U Geun,

Would it be possible to get a complete reproducer for the issue? And if you have an Intel Premier Support account, I suggest reporting the issue there.

Best regards