Beginner

forrtl: severe (67): input statement requires too much data, unit 20 error

Hi Support,

While executing my Fortran program, which reads a large amount of unformatted data, I get the following error:

iitmlogin3:/hpcs/bipink/LES/cloud_phy $ ./a.out 
forrtl: severe (67): input statement requires too much data, unit 20, file /hpcs/bipink/LES/cloud_phy/mixing_data
Image              PC                Routine            Line        Source             
a.out              00000000004060D9  Unknown               Unknown  Unknown
a.out              000000000041CB46  Unknown               Unknown  Unknown
a.out              00000000004034BE  Unknown               Unknown  Unknown
a.out              000000000040329E  Unknown               Unknown  Unknown
libc.so.6          00000038AE61ED5D  Unknown               Unknown  Unknown
a.out              00000000004031A9  Unknown               Unknown  Unknown
iitmlogin3:/hpcs/bipink/LES/cloud_phy $

 

Following is part of my program:

IMPLICIT NONE

   REAL(KIND=8)    :: e, diss, ens, mse, susa, q, pt, simulated_time, id, x, y, z, radius
   INTEGER(KIND=4) :: number_of_particles, n, unit

   unit = 20
   OPEN(unit,file='mixing_data',status='old',action='read',form='unformatted')
   DO
      READ(unit,END=5) e, diss, ens, mse, susa, q, pt, simulated_time, number_of_particles
      DO  n = 1, number_of_particles
         READ(unit,END=5) id, x, y, z, radius
      END DO
   END DO
5   CLOSE(unit)

Thanks and regards,

Sachin


What produced the data file? Was it another Fortran program that wrote the data using unformatted output? 

The error message indicates that the length of the record is insufficient for the amount of data you are trying to read. Without seeing the data file, nothing more can be deduced.
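The record-length mechanics behind severe (67) can be sketched in Python. This is an assumption about the typical default layout (ifort and gfortran frame each unformatted SEQUENTIAL record with a 4-byte length marker before and after the payload; the exact layout is compiler- and option-dependent), not Intel's actual runtime code:

```python
import io
import struct

# Sketch of the common framing for unformatted SEQUENTIAL records:
# <4-byte length> <payload> <4-byte length>  (an assumption; layout varies
# by compiler and options).

def write_record(f, payload):
    marker = struct.pack("<i", len(payload))
    f.write(marker + payload + marker)

def read_record(f, nbytes):
    # Mimics a READ statement whose I/O list needs nbytes from one record.
    (reclen,) = struct.unpack("<i", f.read(4))
    if nbytes > reclen:
        # The condition the runtime reports as forrtl severe (67).
        raise ValueError("input statement requires too much data: "
                         "list needs %d bytes, record has %d" % (nbytes, reclen))
    data = f.read(reclen)
    f.read(4)  # skip the trailing length marker
    return data[:nbytes]

f = io.BytesIO()
write_record(f, struct.pack("<5f", 1.0, 2.0, 3.0, 4.0, 5.0))  # 5 x REAL(4) = 20 bytes

f.seek(0)
ok = read_record(f, 5 * 4)   # READ list of 5 x REAL(4): fits
print(len(ok))               # 20

f.seek(0)
try:
    read_record(f, 5 * 8)    # READ list of 5 x REAL(8): needs 40 > 20
except ValueError as err:
    print(err)
```

The point is that each READ consumes exactly one record, so an I/O list that needs more bytes than the matching WRITE produced fails immediately.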

Retired 12/31/2016
Beginner

Yes, one part of the program opens the file that is to be written, and another part writes the output to that file.

The following is the part of the program that writes the output data:

WRITE(200+myid) e(k,j,i), diss(k,j,i), ens, mse, susa(k,j,i), q(k,j,i), pt(k,j,i), simulated_time, number_of_particles
DO n = 1, number_of_particles
   WRITE(200+myid) particles(n)%e_m, particles(n)%x, particles(n)%y, particles(n)%z, particles(n)%radius
END DO

This code writes the data unformatted.

And I am reading it back with READ statements:

READ(unit,END=5) e, diss, ens, mse, susa, q, pt, simulated_time, number_of_particles
DO n = 1, number_of_particles
   READ(unit,END=5) id, x, y, z, radius
END DO

Are these read and write statements right?

Thanks,

Sachin

Black Belt

Are these reading and writing functions right?

Maybe. You have to check that the types, kinds, and sizes of the variables in the READ list match those of the WRITE list.
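A back-of-the-envelope check is to total the bytes each I/O list transfers. The writer-side kinds below are hypothetical (the thread never shows the declarations of the `particles(n)%` components); if, say, `e_m`, `x`, `y`, `z`, and `radius` were REAL(4) while the reader declares everything REAL(KIND=8), the READ list asks for twice the data the record contains:

```python
# Hypothetical byte-count check. KIND_BYTES maps a Fortran type/kind to its
# storage size; the writer-side kinds are assumptions for illustration only.
KIND_BYTES = {"real(4)": 4, "real(8)": 8, "integer(4)": 4}

def list_bytes(io_list):
    # io_list: sequence of (type_kind, count) pairs in an I/O list
    return sum(KIND_BYTES[kind] * n for kind, n in io_list)

# Writer's per-particle record, assuming e_m, x, y, z, radius are REAL(4):
write_list = [("real(4)", 5)]
# Reader's per-particle record: id, x, y, z, radius declared REAL(KIND=8):
read_list = [("real(8)", 5)]

record = list_bytes(write_list)
needed = list_bytes(read_list)
print(record, needed)  # 20 40
if needed > record:
    print("READ list exceeds record length -> forrtl severe (67)")
```

If the totals differ, the fix is to make the reader's declarations match the writer's, not to pad the READ list.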


Sachin,

READ(unit,END=5) id, x, y, z, radius

Is "id" the same type as that written? (" particles(n)%e_m")

Jim Dempsey

Beginner

Hi,

It seems that I have the same issue. I encountered it after a compiler update (from 2016 to 2017). Below is a minimal example showing the issue.

program HSlec
   implicit none
   real(8), allocatable :: prophull(:,:)
   real(8) :: refl, grav, rho

   allocate(prophull(1738,72))

   refl = 1.0
   grav = 9.81
   rho  = 1025.

   prophull = 2.

   open(1, file='test', status='unknown', form='unformatted', access='sequential', action='write')
   write(1) refl, grav, rho
   write(1) prophull
   close(1)

   write(*,*) refl, grav, rho
   write(*,*) size(prophull, dim=1)
   write(*,*) size(prophull, dim=2)

   open(1, file='test', status='old', form='unformatted', access='sequential', action='read')
   read(1) refl, grav, rho
   read(1) prophull
   close(1)

end program HSlec

 

After some digging, it seems that this is related to the "-assume buffered_io" flag that I use for performance reasons (https://software.intel.com/en-us/forums/intel-visual-fortran-compiler-for-windows/topic/392693).

16:04 e@drhpcmss % ifort -assume buffered_io test.f90
16:04 e@drhpcmss % ./a.out
   1.00000000000000        9.81000041961670        1025.00000000000
        1738
          72
forrtl: severe (39): error during read, unit 1, file /bigr/e/hydrostar/trunk/test
Image              PC                Routine            Line        Source
a.out              0000000000405CDC  Unknown               Unknown  Unknown
a.out              000000000041D919  Unknown               Unknown  Unknown
a.out              0000000000403761  Unknown               Unknown  Unknown
a.out              000000000040323E  Unknown               Unknown  Unknown
libc-2.13.so       00007F4D3BB5DEAD  __libc_start_main     Unknown  Unknown
a.out              0000000000403129  Unknown               Unknown  Unknown

 

   Regards,

Guillaume

 

 

 

Employee

@Guillaume - I reproduced your issue and will report to Development and provide the internal tracking id shortly. The error signature is different from the OP's so I don't know if they share the same root cause.

@Sachin - if you use the option Guillaume notes and find that your program runs successfully without it, then these may be the same underlying issue; however, to know for certain we would need a complete reproducer for your case, if you can provide one.

(Internal tracking id: DPD200416317 - forrtl severe (39) with -assume buffered_io with 17.0 Update 1 compiler)

(Resolution Update on 02/27/2017): This defect is fixed in the Intel® Parallel Studio XE 2017 Update 2 release (ifort Version 17.0.2.174 Build 20170213 - PSXE 2017.2.050 / CnL 2017.2.174 - Linux)

Employee

I escalated the issue to Development. I could not confirm that the issue was absent in any previous 16.0 compiler released to date; all 16.0 compilers produce the same run-time error with the buffered I/O option. I did find that the issue does not occur with our initial 17.0 compiler release, so if you have that available you might give it a try.

Beginner

Ok thanks, we are going to downgrade to the 2016 version (2016.0.109), which did not have this issue. By the way, what is the proper way to downgrade to a previously installed version?

 

Employee

You have choices. The products can be installed side by side, so it is not necessary to remove your current installation. You can install the older version and then source the appropriate compilervars (or psxevars) script to set up your environment to use that release. Or you can uninstall the newer release and then reinstall the previous release only. As I noted, I was unable to avoid the error with any earlier 16.0 release.

Development confirmed the defect: only a single record is written where two should have been. I will provide updates on further progress toward a fix as I learn of it.

Black Belt

Just a thought/question:

Since "-assume buffered_io" is an extension, does the implementation of this option require (for performance purposes) that the user add a FLUSH(<unit_no>) or CALL FLUSH statement before attempting to read a file that was just written?


I would imagine that if the file were OPENed with SHARE='DENYRW' (or 'DENYWR'), FLUSH would not be required before READ.

Opened otherwise, meaning the file can potentially change content at any time, I would assume it prudent to add FLUSH(unit_no) prior to reads. IMHO the flush should not be implicit; rather, it should be under programmer control.

Additional note: when a shared-file READ exceeds, or crosses, a buffer boundary, it is not stated that the operation is effectively ATOMIC with respect to other threads/processes writing to the same file within the range of the read. This may be an implementation issue. IOW, the pre-READ FLUSH may only be partially effective.

In an MPI multi-process application, you can use LOCK/UNLOCK to coordinate shared-file access.

For a non-MPI POSIX multi-process application, you can use PXFFCNTL for the same purpose; POSIX permits locking portions of a file.

Jim Dempsey
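Jim's point about explicit flushing can be illustrated with ordinary buffered file I/O. This is a Python analogue of the concept, not Fortran's buffered_io implementation: a second handle opened on the same file sees nothing until the writer's buffer is flushed, much as a FLUSH(unit) would be needed before another handle READs a freshly written shared file:

```python
import os
import tempfile

# Python analogue of buffered output: bytes sit in the writer's user-space
# buffer until flush(), invisible to any other handle on the same file.
path = os.path.join(tempfile.mkdtemp(), "shared.dat")
writer = open(path, "wb", buffering=65536)   # large buffer, like buffered I/O
writer.write(b"record-1")                    # 8 bytes, stays in the buffer

with open(path, "rb") as reader:
    print(len(reader.read()))   # 0 -- data still in the writer's buffer

writer.flush()                   # the analogue of Fortran's FLUSH(unit)
with open(path, "rb") as reader:
    print(len(reader.read()))   # 8 -- now visible on disk
writer.close()
os.remove(path)
```

Whether Intel's buffered_io behaves exactly this way for same-process re-reads is the open question in this thread; the sketch only shows why an explicit flush point matters once output is buffered.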

Employee

@Guillaume, @Sachin - I confirmed the fix for this issue is available in the latest PSXE 2017 Update 2 release now available for download from the Intel Registration and Download Center.

Beginner

Hi,

I have updated to v17.0.4 (after having worked with the downgraded v16.0 for a while), and the issue is still there. True, there is a slight improvement: the test case I provided above now works. But adding a few other variables to the writes brings the issue back (which is the case in my actual program). Below is the updated test that shows the issue:

Regards, 

   Guillaume

program HSlec
   implicit none
   real(8), allocatable :: prophull(:,:)
   real(8) :: refl, grav, rho
   integer :: i1, i2, i3

   allocate(prophull(1738,72))
   refl = 1.0
   grav = 9.81
   rho  = 1025.

   prophull = 2.

   i1 = 1
   i2 = 2
   i3 = 3

   open(1, file='test', status='unknown', form='unformatted', access='sequential', action='write')
   write(1) refl, grav, rho
   write(1) prophull
   write(1) i1, i2, i3
   write(1) prophull
   close(1)

   write(*,*) refl, grav, rho
   write(*,*) size(prophull, dim=1)
   write(*,*) size(prophull, dim=2)

   open(1, file='test', status='old', form='unformatted', access='sequential', action='read')
   read(1) refl, grav, rho
   read(1) prophull
   read(1) i1, i2, i3
   read(1) prophull
   close(1)

end program HSlec

 

 

17:29 gtruc@drhpcmss % ifort  hslec.f90
/bigr/gtruc/ifortIO
17:29 gtruc@drhpcmss % ./a.out
   1.00000000000000        9.81000041961670        1025.00000000000
        1738
          72
/bigr/gtruc/ifortIO
17:29 gtruc@drhpcmss % ifort  -assume buffered_io  hslec.f90
/bigr/gtruc/ifortIO
17:29 gtruc@drhpcmss % ./a.out
   1.00000000000000        9.81000041961670        1025.00000000000
        1738
          72
forrtl: severe (67): input statement requires too much data, unit 1, file /bigr/gtruc/ifortIO/test
Image              PC                Routine            Line        Source
a.out              0000000000405E93  Unknown               Unknown  Unknown
a.out              000000000042061E  Unknown               Unknown  Unknown
a.out              000000000041DD2E  Unknown               Unknown  Unknown
a.out              0000000000403977  Unknown               Unknown  Unknown
a.out              00000000004032BE  Unknown               Unknown  Unknown
libc-2.13.so       00007FDBE13EFEAD  __libc_start_main     Unknown  Unknown
a.out              00000000004031A9  Unknown               Unknown  Unknown
/bigr/gtruc/ifortIO
17:29 gtruc@drhpcmss % ifort -v
ifort version 17.0.4
/bigr/gtruc/ifortIO
17:29 gtruc@drhpcmss %

 

Employee

Thank you for the update. Our apologies. I reproduced this and escalated the new variant to the Developers.

(Internal tracking id: CMPLRS-43302)

Beginner

I'm running 2017.2.187 on WIN10 and have had similar issues with writing/reading unformatted sequential files containing some large records since the start of this year. I've not been able to isolate a simple set of code to reproduce the issues, but have had to develop workarounds. In one case the scratch file that was being corrupted was simply broken into smaller records, which avoided the issue. In the other case, with an established interchange file format, the problems were only alleviated after explicitly declaring RECL=35k words (approx.).

These issues seemed to emerge after I added BUFFERED='yes' to all my OPENs to try to overcome the significant loss in throughput that had occurred after previous compiler updates. However, playing around with that, as well as BLOCKSIZE and BUFFERCOUNT, did not overcome the most recent manifestation; only RECL= did.

Beginner

Hello,

What is the status of this bug? Is the problem solved in the latest version? (I have a new computer and wonder whether I should install the latest version or keep an old one that I know works.)

Regards,

  Guillaume


The fix for your example program above should appear in an update to 18.0.

It is not in the recent 17.0 Update 5.

                         --Lorri

 

Beginner

Hi,

Could you confirm that the issue has been fixed in the latest 18.0 release?

   Guillaume

Beginner

Any news on this matter?
