My application has recently stopped being able to read unformatted files that are greater than 2.1GB in size. Specifically, the read command of the following subroutine fails (iostat=-1) if jj = 2900 (file size 2.15GB), but works (iostat=0) if jj=2800 (file size 2.08GB).
ii = 95000
jj = 4000
allocate ( store_huge(ii, jj), STAT = istat )
store_huge = 0.0

open( newunit = file_id, file = outputfilename, form = 'unformatted', action='write', iostat = istat )
write(file_id, IOSTAT=istat) store_huge
close(file_id)

store_huge = 1.0

open( newunit = file_id, FILE = outputfilename, form = 'unformatted', action='read', iostat = istat )
read(file_id, IOSTAT=istat) store_huge
close(file_id)

deallocate( store_huge )
This problem only appeared recently, and may be due to a recent Windows update (which we can no longer control, and which has been driving me crazy). I have Windows 10 (including the most recent automated updates), am using VS2017 (which may be the problem, though I have been using it for a couple of months without issue), and Visual Fortran Compiler 17.0.4.210.
I have recompiled the application with (microsoft) VS2015 (rather than VS2017), and the same problem appears. This suggests that it is something to do with Visual Fortran Compiler 17.0.4.210 and/or Windows 10 (version 1703, OS Build 15063.413).
Using the access='stream' option seems to resolve this issue, which otherwise appears with the default sequential access. It looks like something has gone wrong with the compiler (probably triggered by a change in Windows), which I hope can be fixed soon...
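For illustration, a minimal sketch of that work-around, assuming the same declarations as the first snippet (the only change is adding access='stream' to both OPEN statements):

! Sketch of the work-around: stream access instead of the default sequential access.
open( newunit = file_id, file = outputfilename, form = 'unformatted', &
      access = 'stream', action = 'write', iostat = istat )
write( file_id, iostat = istat ) store_huge
close( file_id )

open( newunit = file_id, file = outputfilename, form = 'unformatted', &
      access = 'stream', action = 'read', iostat = istat )
read( file_id, iostat = istat ) store_huge
close( file_id )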
The -1 is end of file on read. But is the problem the read or is it the write that failed? Did you check the end of the file that was written to see if it is all there?
Also, is there a timing issue with buffering data, i.e. Windows has not fully finished the write when you do the read?
The error code appears with the read command, not the write command. The file sizes produced by the write also suggest it ran at least close to completion. Yes, -1 is end of file on read; it is also possible to generate code 64 (I think) if the read is broken up into multiple variables. I do not know how to check the end of the file, as it is unformatted (binary). I can confirm that it is definitely not a timing issue.
Playing around with this code snippet reveals no problems at all, but all sorts of failures appear when the access='stream' option is omitted and different combinations of variable reads/writes are used that imply a total file size greater than 2.1GB:
ii = 95000
jj = 2000
allocate ( store_huge(ii, jj*2), store_huge2(ii,jj), store_huge3(ii,jj), STAT = istat )
store_huge  = 1.0
store_huge2 = 2.0
store_huge3 = 3.0

open( newunit = file_id, file = outputfilename, form = 'unformatted', action='write', access='stream', iostat = istat )
write(file_id, IOSTAT=istat) store_huge
write(file_id, IOSTAT=istat) store_huge2
write(file_id, IOSTAT=istat) store_huge3
close(file_id)

store_huge2 = 2.0
store_huge3 = 2.0

open( newunit = file_id, FILE = outputfilename, form = 'unformatted', action='read', access='stream', iostat = istat )
read(file_id, IOSTAT=istat) store_huge
read(file_id, IOSTAT=istat) store_huge2
read(file_id, IOSTAT=istat) store_huge3
close(file_id)

deallocate( store_huge, store_huge2, store_huge3 )
It really looks like Microsoft has made some change that exposes a problem with the default (sequential-access) unformatted read/write combination.
These errors still suggest you are reading an incomplete file. Read it back one byte at a time until you get EOF. Check the number of bytes read, and the last few bytes read at EOF. This might tell you something interesting, or close this line of thought.
How do you know with certainty it is not a timing issue, BTW?
Post a complete program for #5 that gives the error and I will test it.
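For illustration, here is one way to do the byte-count check suggested above, as a minimal sketch; it assumes the file is reopened with stream access and stepped through one byte at a time (INQUIRE with SIZE= would give the on-disk size more quickly, but stepping through also shows where the data stops):

subroutine count_file_bytes( filename )
  ! Count how many bytes can actually be read back before EOF is hit.
  implicit none
  character(*), intent(in) :: filename
  integer    :: file_id, istat
  integer(8) :: nbytes
  integer(1) :: byte

  nbytes = 0
  open( newunit = file_id, file = filename, form = 'unformatted', &
        access = 'stream', action = 'read', iostat = istat )
  do
     read( file_id, iostat = istat ) byte
     if ( istat /= 0 ) exit          ! istat = -1 at end of file
     nbytes = nbytes + 1
  end do
  close( file_id )
  write(*,*) 'bytes read before EOF:', nbytes
end subroutine count_file_bytes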
It is not a timing issue, because the same problem originally appeared when I tried to read a file that had been written some days previously. I have also tested whether reading a (2.2GB) file an hour after it had been written (following my lunch break) made a difference - it didn't.
Find attached two zip folders with VS solutions that replicate the problem. It appears that compiler options make a difference: the code in the two solutions is identical, but the errors are different. VERSION 1 is an extract from the application I originally tested on, and reproduces the errors I found during my testing; these are slightly different in VERSION 2.
Please note that this problem only appeared after the most recent Windows 10 update (as discussed above); the errors I get on my system are included in the source code comments.
Does adding:
BUFFERED='yes'
have any effect?
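i.e. something like this on the OPEN statement - a sketch using the variables from the first snippet (BUFFERED= is an Intel Fortran extension):

! Sketch: same OPEN as before, with the BUFFERED= specifier added.
open( newunit = file_id, file = outputfilename, form = 'unformatted', &
      action = 'write', buffered = 'yes', iostat = istat )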
Jim Dempsey
I just ran the test routine, and adding buffered='yes' didn't alter results at all.
Recently I have found a bug with unformatted I/O. Can you look at my previous post and see if it is related to what you are seeing? Are you using /assume:buffered_io ?
https://software.intel.com/en-us/forums/intel-visual-fortran-compiler-for-windows/topic/721914
Roman
This is possibly the same problem as yours, Roman, as I use the buffered I/O option by default. But the problem I identify also arises when buffered I/O is suppressed, as indicated by the VERSION2.zip solution that is included in this post.
I tried your Version 1 and I agree that it looks like a bug in the Fortran runtime. I had a small play with test 1 (see below). There is nothing of interest in the Windows (GetLastError) calls; it seems that when the Fortran runtime sets the error status on the read, the Windows error gets reset to 0. I only added those calls because they sometimes show something of interest.
When I did the write-array-slices test, it throws error 67 (input statement requires too much data) when reading the third slice; subsequent reads don't return an error (but fail, as the data read is uninitialised).
In conclusion, this is something Intel needs to look at.
MODULE tester
!*************************************************************
!
! Module defined here to test specific functions
!
!*************************************************************
IMPLICIT none

CONTAINS

SUBROUTINE test11()
!*************************************************************
!
! TEST to check writing routine
!
!*************************************************************
use ifwin, only: getlasterror
implicit none
! local
integer(4) :: ii, jj, file_id, istat
integer :: l1, l2
real(8), allocatable :: store_huge(:,:), store_huge2(:,:), store_huge3(:,:)
character(2000) :: outputFileName
!*************************************************************
! begin code
!*************************************************************
outputFileName = 'C:\temp\check.dat'
ii = 95000
jj = 2000
allocate ( store_huge(ii, jj*2), store_huge2(ii,jj), store_huge3(ii,jj), STAT = istat )
store_huge  = 1.0
store_huge2 = 2.0
store_huge3 = 3.0

! test 1
open( newunit = file_id, file = outputfilename, form = 'unformatted', action='write', iostat = istat )
do l1 = 1, size(store_huge, dim=2)
    write(file_id, IOSTAT=istat) store_huge(:,l1)
    if( istat /= 0 ) write(*,*) 'write1 l1, istat', l1, istat
enddo
!write(file_id, IOSTAT=istat) store_huge
close(file_id)
store_huge = 0.0
write(*,*) 'err code before open 1', getlasterror()
open( newunit = file_id, FILE = outputfilename, form = 'unformatted', action='read', iostat = istat )
write(*,*) 'err code before read 1', getlasterror()
!read(file_id, IOSTAT=istat) store_huge   ! fails with istat = -1
do l1 = 1, size(store_huge, dim=2)
    read(file_id, IOSTAT=istat) store_huge(:,l1)   ! fails with istat = -1
    if( istat /= 0 ) write(*,*) 'read1 l1, istat', l1, istat, getlasterror()
enddo
write(*,*) 'err code after read 1', getlasterror()
close(file_id)
open( newunit = file_id, FILE = outputfilename, form = 'unformatted', action='read', iostat = istat )
read(file_id, IOSTAT=istat) store_huge2
read(file_id, IOSTAT=istat) store_huge3   ! fails with istat = 67
close(file_id)

! test 2
open( newunit = file_id, file = outputfilename, form = 'unformatted', action='write', iostat = istat )
write(file_id, IOSTAT=istat) store_huge2
write(file_id, IOSTAT=istat) store_huge3
close(file_id)
open( newunit = file_id, FILE = outputfilename, form = 'unformatted', action='read', iostat = istat )
read(file_id, IOSTAT=istat) store_huge2
read(file_id, IOSTAT=istat) store_huge3   ! fails with istat = -1
close(file_id)
open( newunit = file_id, FILE = outputfilename, form = 'unformatted', action='read', iostat = istat )
read(file_id, IOSTAT=istat) store_huge    ! fails with istat = -1
close(file_id)

! test 3
open( newunit = file_id, file = outputfilename, form = 'unformatted', action='write', iostat = istat )
write(file_id, IOSTAT=istat) store_huge
write(file_id, IOSTAT=istat) store_huge2
write(file_id, IOSTAT=istat) store_huge3
close(file_id)
open( newunit = file_id, FILE = outputfilename, form = 'unformatted', action='read', iostat = istat )
read(file_id, IOSTAT=istat) store_huge    ! fails with istat = -1
read(file_id, IOSTAT=istat) store_huge2
read(file_id, IOSTAT=istat) store_huge3
close(file_id)

! test 4 - no fail
open( newunit = file_id, file = outputfilename, form = 'unformatted', action='write', access='stream', iostat = istat )
write(file_id, IOSTAT=istat) store_huge
close(file_id)
open( newunit = file_id, FILE = outputfilename, form = 'unformatted', action='read', access='stream', iostat = istat )
read(file_id, IOSTAT=istat) store_huge
close(file_id)

! test 5 - no fail
open( newunit = file_id, file = outputfilename, form = 'unformatted', action='write', access='stream', iostat = istat )
write(file_id, IOSTAT=istat) store_huge2
write(file_id, IOSTAT=istat) store_huge3
close(file_id)
open( newunit = file_id, FILE = outputfilename, form = 'unformatted', action='read', access='stream', iostat = istat )
read(file_id, IOSTAT=istat) store_huge2
read(file_id, IOSTAT=istat) store_huge3
close(file_id)

! test 6 - no fail
open( newunit = file_id, file = outputfilename, form = 'unformatted', action='write', access='stream', iostat = istat )
write(file_id, IOSTAT=istat) store_huge
write(file_id, IOSTAT=istat) store_huge2
write(file_id, IOSTAT=istat) store_huge3
close(file_id)
open( newunit = file_id, FILE = outputfilename, form = 'unformatted', action='read', access='stream', iostat = istat )
read(file_id, IOSTAT=istat) store_huge
read(file_id, IOSTAT=istat) store_huge2
read(file_id, IOSTAT=istat) store_huge3
close(file_id)

deallocate( store_huge, store_huge2, store_huge3 )

END SUBROUTINE test11

END MODULE tester
Thanks for this Andrew - I am surprised that an error is appearing in the write statement on the grid slice test, as no error is reported when writing the full matrix. In any case, at least the 'stream' work-around is keeping me going for the moment, and hopefully the Intel people will resolve the issue before things get more complicated.
Thanks for the clarification (I got confused by the statement that the write-array-slices test throws error 67).
Hi Justin, Andrew,
Thank you for the report. I am checking with development on this and will get back to you shortly.
Best regards,
This was fixed as well in the 18.0 compiler, to be released soon.
-------------------------
The type of the "store_huge" array was not specified. I used real*8:
real*8, allocatable :: store_huge(:,:)
I removed the IOSTAT= specifier from the READ statement to catch a possible input error:
read(1) store_huge
18.0 works fine:
-bash-4.1$ ifort -V
Intel(R) Fortran Intel(R) 64 Compiler for applications running on Intel(R) 64, Version 18.0 Build 20170705
Copyright (C) 1985-2017 Intel Corporation. All rights reserved.
-bash-4.1$ ifort f2.f90 -assume buffered_io && ./a.out
-bash-4.1$
Apologies - is "real(8), allocatable" not equivalent to "real*8, allocatable"? I do not follow how removing the IOSTAT= specifier from the READ statement will catch the error - I thought this would do the reverse (or maybe you have in mind a run-time error message)?
A bit confused, but happy to hear that it will be fixed in 18.0 - can you take a guess as to which month 18.0 will come out?
Many thanks,
Justin.
REAL*8 is an extension - in Intel Fortran it is equivalent to REAL(8). Best practice is to not use explicit kind numbers such as 8, though. See Doctor Fortran in "It Takes All KINDs"
Removing the IOSTAT allows the error to be reported on the console rather than just having the variable set. The error report will have more information.
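For illustration, a minimal sketch of the kind-safe spelling of the declaration, using the real64 constant from the intrinsic ISO_FORTRAN_ENV module rather than a literal kind number:

! Kind-safe declaration: no hard-coded kind number 8.
use, intrinsic :: iso_fortran_env, only: real64
real(real64), allocatable :: store_huge(:,:)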
And now I am confused, as the examples in this thread use real(8) and not the defunct real*8. I don't get it: surely removing IOSTAT= means the program crashes on error rather than you being able to handle the error?
andrew_4619 wrote:
And now I am confused, as the examples in this thread use real(8) and not the defunct real*8. I don't get it: surely removing IOSTAT= means the program crashes on error rather than you being able to handle the error?
The comments make sense for the code snippets in the first few posts of the thread. The type of store_huge is not specified at all - support would have had to guess when they saw the original post. An IOSTAT specifier is provided for the I/O statements (and others), but the value that results is ignored - which is hiding the problem, not handling it.
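For illustration, a minimal sketch of acting on the status instead of discarding it (assuming an extra character(256) :: errmsg declaration; IOMSG= returns the runtime's text for the error):

read( file_id, iostat = istat, iomsg = errmsg ) store_huge
if ( istat /= 0 ) then
   ! Report which read failed and why, rather than silently continuing.
   write(*,*) 'read of store_huge failed, iostat =', istat, ' : ', trim(errmsg)
   error stop
end if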
