Intel® Fortran Compiler

Unable to read binary file; error forrtl: severe (67): input statement requires too much data

dhilonpatel
Beginner

Hello,

I am a beginner in using clusters. We have a 24-node cluster with Intel Xeon x86 processors running Linux RHEL 5.2, which uses InfiniBand for applications and an Ethernet port for management. It is installed with mvapich-1.1_intel and the full Intel compiler package.

I have CPMD, a molecular dynamics package, installed on the 24-node cluster. When a job is restarted for the second step, it reads the binary file RESTART.1. When I submit my job from /home/username/cwd (current working directory), it successfully reads the binary file and restarts the job for the next step. But when I submit my job from /home/username/data/subdirectory, the first step finishes successfully without using the binary file, and then the second step, while reading the binary file to restart the job, fails with the error:

forrtl: severe (67): input statement requires too much data, unit 1, file /student/username/cpmd_amit_test/linear-BG/20/opt/./RESTART.1

I would like to mention that data is a separate link for each user, pointing to /student/username, which is mounted on all nodes from common storage.
I don't understand why my job gives this error when using the restart file generated during the run in /home/username/data/../cwd, but not when I generate the same restart file and run my job from /home/username. One more thing I would like to mention: the same binary file generated in /home/username/cwd and in /home/username/data/../cwd shows differences when I compare the two with the diff command. What can be the reason for this?
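Since the two RESTART.1 copies differ, a useful first step is to see whether they differ in size (which would suggest a truncated write) or only in content. A minimal sketch of the comparison, using stand-in files because the actual paths are site-specific:

```shell
# Minimal sketch: in practice the two files would be the actual RESTART.1
# copies; the stand-ins below only illustrate the commands.
mkdir -p /tmp/restart_demo && cd /tmp/restart_demo
printf 'AAAA' > RESTART.home    # stand-in for the copy written under /home
printf 'AAAB' > RESTART.data    # stand-in for the copy written under /data
ls -l RESTART.home RESTART.data # compare sizes first: a size mismatch
                                # usually indicates a truncated record
cmp RESTART.home RESTART.data || echo "files differ"  # byte-level comparison
```

A size mismatch points at an incomplete write (e.g. an NFS/storage issue on the linked /student mount); identical sizes with differing bytes point more toward a data-representation difference between the two runs.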

I have compiled my application with the following libraries and compiler settings for parallel processing.

---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SRC = .
DEST = .
BIN = .
FFLAGS = -c -openmp -w90 -w95 -O2 -unroll -ip -cm -xT -convert big_endian
LFLAGS = -L/opt/intel/mkl/10.1.0.015/lib/em64t -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -lguide
CFLAGS = -c -openmp -O2 -Wall
CPP = /lib/cpp -P -C -traditional
CPPFLAGS = -D__Linux -D__PGI -DFFT_DEFAULT -DPOINTER8 -DINTEL_MKL \
-DPARALLEL -DMYRINET -DLINUX_IFC
NOOPT_FLAG =
CC = /opt/intel/impi/3.2.0.011/bin64/mpicc -cc=icc
FC = /opt/intel/impi/3.2.0.011/bin64/mpiifort -fc=ifort
LD = /opt/intel/impi/3.2.0.011/bin64/mpiifort -fc=ifort -openmp
AR = ar
-------------------------------------------------------------------------------------------------------------------------------------------------------------------
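One flag worth noting in the Makefile above is -convert big_endian, which changes how unformatted records are written and read. If RESTART.1 is written by a binary built with this conversion but read by one built without it (or the reverse, for instance if different nodes pick up different builds), the record-length markers are misinterpreted and a READ can demand more data than the record actually holds, which is exactly error 67. As a hedged aside, Intel Fortran also exposes this conversion at run time through the F_UFMTENDIAN environment variable, which can help test whether endianness is the culprit without recompiling:

```shell
# Run-time counterpart of -convert big_endian for Intel Fortran programs:
# force big-endian conversion for all unformatted units in this shell.
export F_UFMTENDIAN=big
echo "F_UFMTENDIAN=$F_UFMTENDIAN"
```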

I would appreciate any help in solving this issue and will gladly provide any additional information needed.
Thanks in advance.
Dhilon
TimP
Honored Contributor III
This question might be more appropriate for the HPC forum.
One question: why do you start out specifying mvapich1, when you end up saying you used Intel MPI for compilation? These MPI implementations are unlikely to work interchangeably. You would need to take care that the same MPI environment is active on each node as the one you used for compilation, most likely by setting the environment variables in accordance with ifortvars (and mpivars, if using Intel MPI) scripts for each node.
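As a concrete sketch of the per-node setup described above: the mpivars.sh path below is inferred from the Makefile's /opt/intel/impi/3.2.0.011/bin64 directory, while the ifortvars.sh path is hypothetical and must be adjusted to the actual compiler install location. One would add something like this to each node's shell startup file (a configuration fragment, not runnable as-is outside the cluster):

```shell
# Hedged sketch of per-node environment setup (paths are assumptions):
# mpivars.sh location inferred from the Makefile's MPI install directory;
# the ifortvars.sh location is hypothetical -- adjust to your install.
source /opt/intel/impi/3.2.0.011/bin64/mpivars.sh
source /opt/intel/Compiler/11.0/bin/ifortvars.sh intel64
```

This ensures every node resolves the same mpirun, compiler runtime, and MKL libraries as the ones used at compile time, rather than an mvapich1 environment left over from the system installation.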
We didn't have mvapich1 working on our own cluster, last time I tried.
I can't guess the implications of the Myrinet and PGI flags you specified, but they seem potentially in conflict with the installation you say you are using. Of course, they don't seem directly related to the problem you mention.