In an iterative procedure, my program stores data in a .dat file and at each step searches through it to avoid repeating computations. This is quite time-consuming, and I think writing the data file to a RAM disk should speed up the procedure. Could someone let me know whether this is a good solution? If so, how can I create a RAM disk, write data to it, and read it back?
Hamid
Allocate a big array in memory and store your data there.

Another possibility is to use a memory-mapped file:
http://msdn.microsoft.com/en-us/library/ms810613.aspx
or google "CreateFileMapping".

This approach uses the Win32 API, so it will take extra effort to call from Fortran (all the examples are in C). On the other hand, Windows manages the file for you, and if it is huge and cannot fit in memory, it can be flushed to disk.
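The MSDN link above covers the Win32 `CreateFileMapping` route; since those examples are Windows-only, here is a sketch of the same memory-mapped-file idea using the POSIX `mmap` call instead (the file name and one-record layout are just placeholders for illustration, not anything from the thread).

```c
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

/* Map a one-record file into memory, write a value through the mapping,
 * and read it back the same way. Returns the value read, or -1.0 on error.
 * The OS pages the mapped region in and out of RAM, so access is as fast
 * as ordinary memory for data that fits, yet the file can exceed RAM. */
static double mmap_roundtrip(const char *path, double value) {
    const size_t nbytes = sizeof(double);
    double result = -1.0;
    int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return result;
    if (ftruncate(fd, (off_t)nbytes) == 0) {
        double *data = mmap(NULL, nbytes, PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, 0);
        if (data != MAP_FAILED) {
            data[0] = value;   /* "writing the file" is a memory store */
            result = data[0];  /* "reading the file" is a memory load  */
            munmap(data, nbytes);
        }
    }
    close(fd);
    unlink(path);  /* clean up the scratch file for this demo */
    return result;
}
```

On Windows, the equivalent steps are `CreateFileMapping` followed by `MapViewOfFile`, as described in the linked MSDN article.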
Jakub
If you are on a 64-bit platform, your virtual memory address space will exceed the capacity of any RAM disk. In that case, change your code to place the tentative data into an internal array, eliminating duplicates in the process, then write it to file as the last step.

If you are on a 32-bit platform, a RAM disk will not help if your data exceeds 2GB, since you are unlikely to have more than 4GB on the system to begin with.

On a 32-bit system you might find it useful to produce a hash key from each results record and place it into an internal array as you write the record out to disk. Then, as you produce the next results record, compute the corresponding hash code and search the internal hash-code table. If the code is NOT found, write the record to disk; a duplicate cannot possibly be in the disk file. If the code IS found, then depending on your hash-code design: a) if no collisions are possible, the record is already on disk and you therefore have a duplicate result; b) if collisions are possible, search the disk file for duplicates. Note that, depending on your hash-code design, it may be possible to reduce the frequency of disk searches to 1/100 or less of what you are doing now.

Hash codes can be unique or can potentially collide. Try to design your hash-code generator to provide at least 115% as many codes as possible records; in other words, the number of records should not exceed 85% of the available hash-code values.
See: http://en.wikipedia.org/wiki/Hash_function
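The scheme above can be sketched in C as follows (the table size and the choice of FNV-1a as the hash function are my own illustrative assumptions, not anything prescribed in the thread): keep one flag per hash code, and only search the disk file when a new record's code is already flagged.

```c
#include <stdint.h>
#include <stddef.h>

#define TABLE_SIZE 4096u  /* choose so records stay under ~85% of codes */

/* FNV-1a, a simple well-known hash, applied to the raw bytes of a record. */
static uint32_t hash_record(const void *rec, size_t len) {
    const unsigned char *p = rec;
    uint32_t h = 2166136261u;
    for (size_t i = 0; i < len; i++) {
        h ^= p[i];
        h *= 16777619u;
    }
    return h;
}

/* seen[] has one slot per reduced hash code (0 = not yet written).
 * Returns 0 if the code is new -- the record can be written to disk
 * with no search -- or 1 if the code was already present, meaning the
 * record is a possible duplicate and the disk file must be searched. */
static int check_and_insert(uint8_t seen[TABLE_SIZE], uint32_t code) {
    uint32_t slot = code % TABLE_SIZE;
    if (seen[slot])
        return 1;
    seen[slot] = 1;
    return 0;
}
```

Because the code is reduced modulo `TABLE_SIZE`, collisions are possible, so a hit in the table corresponds to case (b) above: it only tells you a disk search is needed, not that a duplicate definitely exists.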
Jim Dempsey
Many thanks for both useful comments. To design a hash function and also to store the data in RAM, I've got two questions related to I/O formatting and allocating arrays; I will post them as new threads.
Hamid
