Intel® Fortran Compiler
Build applications that can scale for the future with optimized code designed for Intel® Xeon® and compatible processors.

Direct Access Unformatted File: Writing at the last record

Ilie__Daniel
Beginner

Hello!

I want to see what the maximum size for a direct access unformatted file is.

The file is created using the following command:

open(unit=100, file="c:\temp\test.rec", recl=32, access="direct", form="unformatted", status="unknown", iostat=ios)

I use the /assume:byterecl option, so recl=32 is in bytes. The variable ios is declared as an integer (default kind = 4).

I want to write at the very last record of the file. I assumed this was given by huge(ios)-1, which is (2**31-1) - 1 = 2**31 - 2. The write statement was skipped in the code and nothing was written to the file. What is the maximum record I could write at?
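For reference, a minimal sketch of the arithmetic behind that assumption (the program and variable names below are mine, not part of the original code): the largest record number a default integer can hold, and the file size it implies at 32 bytes per record.

program max_record_size
implicit none
integer            :: max_rec          ! default 4-byte integer, as in the post
integer, parameter :: recl_bytes = 32  ! record length in bytes (/assume:byterecl)
integer(8)         :: total_bytes      ! 64-bit, so the product does not overflow

max_rec     = huge(max_rec) - 1                    ! 2**31 - 2 = 2,147,483,646
total_bytes = int(max_rec, 8) * int(recl_bytes, 8) ! about 68.7e9 bytes, i.e. ~64 GiB

print *, "last record:  ", max_rec
print *, "implied size: ", total_bytes, " bytes"

end program max_record_size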

I run Windows XP SP2 with an NTFS file system. My computer has an Intel P4 3 GHz HT CPU and 1 GB of RAM.

I appreciate any ideas on this.

Daniel.

9 Replies
Steven_L_Intel1
What do you mean "the write statement was skipped"? Do you realize that you're trying to write a file 64GB in size?
Ilie__Daniel
Beginner

Steve,

I realise that the file is big, but it should be possible to do this.

I guess a better explanation would have been "the write statement had no effect", because the file size remained at zero. I do not claim that this is in any way a fault, but I would like to know why this is happening.

Daniel.

The source code:

program main

implicit none
integer :: ios, dat, j

j = -1000

open(unit=100, file="c:\temp\test.rec", recl=32, access="direct", &
     form="unformatted", status="unknown", iostat=ios)

dat = huge(ios) - 1

write(100, rec=dat) ios
read(100, rec=dat, iostat=ios) j
print *, ios, j

end program main

Steven_L_Intel1
It is a bug - you should get a run-time error of some sort. Please report this to Intel Premier Support.

If you'll tell me in more detail what you're trying to find out, I can help with other methods.
Ilie__Daniel
Beginner

We are dealing with large data files, 5-10 GB. Some of the data in these files got overwritten.

I have checked the source and did not find any mistakes. I wanted to know the maximum record number I could write at, given that my file is created with an "open" statement as shown in the previous post.

Steven_L_Intel1
The only suggestion I can offer is to find out how much disk space remains on the disk you're using and divide by 32. That would be a theoretical maximum; in practice it would be less.
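A minimal sketch of that calculation, assuming the free space in bytes has already been obtained from the operating system (the 120 GiB figure below is only a placeholder):

program max_records_from_free_space
implicit none
integer(8), parameter :: free_bytes = 120_8 * 1024_8**3  ! hypothetical: 120 GiB free
integer(8), parameter :: recl_bytes = 32_8
integer(8)            :: max_records

! Do the division in 64-bit arithmetic; the result already exceeds huge() of a
! default 4-byte integer once more than ~64 GiB of free space is available.
max_records = free_bytes / recl_bytes
print *, "theoretical maximum number of 32-byte records: ", max_records

end program max_records_from_free_space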
jimdempseyatthecove
Honored Contributor III

Danielilie,

On the OPEN, try setting BUFFERED='NO'.

If the data in the file is overwritten and you suspect a problem in the IVF write statement, then you could potentially do one or more of the following:

a) after every write verify the data
b) after every write verify the file size
c) insert the record number into the record being written
(insert it into something that does not alter the record size, e.g. a name field)

Method a) would spot file alteration but unfortunately would require a substantial amount of overhead (increasing after each write).

Method b) would detect a case where a write to the next record address performs an overwrite. Such an overwrite would have the side effect of not altering the file size (assuming the entire record overwrites prior data).

Method c) would help identify which data did the overwrite.

Note, for method c) the more places in the record you can replace with the record number marker the better. An I/O of 1 record may be performed in one or more pieces. Distributing the record number throughout the record might help identify additional characteristics of the problem.
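A minimal sketch of method c), assuming the same 32-byte records (eight default integers) and /assume:byterecl as in the thread; the file name, unit number, and slot layout are illustrative only:

program tag_records_with_number
implicit none
integer               :: ios, rec_no
integer, dimension(8) :: rec_out, rec_in

open(unit=100, file="tagged.rec", recl=32, access="direct", &
     form="unformatted", status="unknown", iostat=ios)

! Write phase: embed the record number at both ends of every record.
do rec_no = 1, 1000
   rec_out    = 0
   rec_out(1) = rec_no
   rec_out(8) = rec_no
   write(100, rec=rec_no, iostat=ios) rec_out
   if( ios /= 0 )stop "write failed"
end do

! Check phase: an overwritten record betrays itself because the embedded
! numbers no longer match the record number it was read from.
do rec_no = 1, 1000
   read(100, rec=rec_no, iostat=ios) rec_in
   if( ios /= 0 .or. rec_in(1) /= rec_no .or. rec_in(8) /= rec_no ) &
      print *, "suspect record: ", rec_no
end do

close(unit=100)

end program tag_records_with_number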

Jim Dempsey

Ilie__Daniel
Beginner

Steve,

Jim,

Thank you for your suggestions. I will try them.

Daniel.

Ilie__Daniel
Beginner

As a follow-up:

I managed to successfully create a 67 GB file. However, I think that something may be wrong with the way the write statement is handled. The explanation follows:

The first program:

program main

implicit none
integer :: ios, dat
integer, dimension(8) :: arr_out, arr_in
logical, dimension(8) :: subtract

dat = 0
arr_out = 0
arr_in = 0
subtract = .false.

open(unit=100, file="c:\analysis\test.rec", recl=32, access="direct", &
     form="unformatted", status="unknown", iostat=ios)
open(unit=200, file="c:\analysis\result.txt")

dat = 1500000000

do
   dat = dat + 1000000
   write(200,*) dat
   arr_in = -1
   call write_out(dat, arr_out)
   write(100, rec=dat) arr_out
   read(100, rec=dat, iostat=ios) arr_in
   if( ios /= 0 )then
      write(200,*) "I/O Error: ", ios, " at rec: ", dat
      exit
   else
      subtract = (arr_out /= arr_in)
      if( any(subtract) )then
         write(200,*) "Corruption at record: ", dat
         exit
      end if
   end if
end do

close(unit=200)
close(unit=100)

end program main

subroutine write_out(dat, arr_out)

implicit none
integer, intent(in) :: dat
integer, dimension(8), intent(out) :: arr_out
integer :: pos

pos = 0
arr_out = 0
pos = mod(dat,8)
if( pos == 0 )pos = 8
arr_out(pos) = dat

end subroutine write_out

This program is used to test whether one can write at very large record numbers. The writing starts at record 1,501,000,000. Each write operation is followed by a read operation. Finally, a comparison is made between what was written and what was read.

The results file does not show any error messages. The last number in the file is negative, which means the maximum range of integer(4) was exceeded and the counter wrapped around. This was to be expected.
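As a rough cross-check (the names below are mine), the loop above can only reach record 2,147,000,000 before the 4-byte counter would exceed huge(dat) and go negative; at 32 bytes per record that is about 68.7e9 bytes, roughly the file size reported here.

program loop_limit
implicit none
integer    :: dat        ! same kind as the loop counter above
integer(8) :: last_rec, file_bytes

! Largest record the loop can reach: start at 1,500,000,000, step by 1,000,000,
! and stay at or below huge(dat) = 2,147,483,647.
last_rec   = 1500000000_8 + ((int(huge(dat), 8) - 1500000000_8) / 1000000_8) * 1000000_8
file_bytes = last_rec * 32_8

print *, "last reachable record:            ", last_rec    ! 2,147,000,000
print *, "file size at that record (bytes): ", file_bytes  ! about 68.7e9

end program loop_limit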

In conclusion, one can write successfully at record 2,000,000,000. This is confirmed once again by running the following program:

program main

implicit none
integer :: ios, dat
integer, dimension(8) :: arr_in

arr_in = -1

open(unit=100, file="c:\analysis\test.rec", recl=32, access="direct", &
     form="unformatted", status="unknown", iostat=ios)

dat = 2000000000
read(100, rec=dat, iostat=ios) arr_in
print *, ios, arr_in

end program main

Indeed, record 2,000,000,000 contains the following information: zero in the first seven positions and 2,000,000,000 in the 8th position of the record.

These two tests prove that one can write at record 2,000,000,000 without any errors being generated.

However, if you delete the created file test.rec and run this program

program main

implicit none
integer :: ios, dat
integer, dimension(8) :: arr_in, arr_out

arr_in = -1
arr_out = (/1,2,3,4,5,6,7,8/)

open(unit=100, file="c:\analysis\test.rec", recl=32, access="direct", &
     form="unformatted", status="unknown", iostat=ios)

dat = 2000000000
write(100, rec=dat, iostat=ios) arr_out
print *, ios
read(100, rec=dat, iostat=ios) arr_in
print *, ios, arr_in

end program main

then you will see that you cannot write at record 2,000,000,000. This is clear because the file test.rec does not increase in size and you get iostat=36 in the following read statement.
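One further experiment that might narrow this down (a sketch only; I have not verified its outcome, and the file name and unit number are illustrative): write a low-numbered record first so the file exists, then attempt the far record, checking iostat on every statement.

program far_record_probe
implicit none
integer               :: ios
integer, dimension(8) :: arr_out, arr_in

arr_out = (/1,2,3,4,5,6,7,8/)
arr_in  = -1

open(unit=100, file="probe.rec", recl=32, access="direct", &
     form="unformatted", status="unknown", iostat=ios)
print *, "open:                 ", ios

write(100, rec=1, iostat=ios) arr_out           ! create the file with a near record
print *, "write rec 1:          ", ios

write(100, rec=2000000000, iostat=ios) arr_out  ! then jump to the far record
print *, "write rec 2000000000: ", ios

read(100, rec=2000000000, iostat=ios) arr_in
print *, "read rec 2000000000:  ", ios, arr_in

close(unit=100)

end program far_record_probe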

I would like to know your opinion on this. This is clearly some kind of fault. Please note that the file test.rec can reach a size of 67 GB when running the first test case.

Kind regards,

Daniel.

Steven_L_Intel1
You reported this to Intel Premier Support. I agree that it's a bug and have asked that the developers fix it in the future.