Hi forum,
I'm trying to load a txt file containing 1350000 32 bits words to the SDRAM of a DE0 nano dev board but system console is freezing or crashing. I've tried to read the whole file and loading it to a variable (set data [split $file_data "\n"]) and then write it to the SDRAM in a single command (master_write_32 $m $sdram $data) but that crashes system console instantly. Then i tried to read the file line by line and, on each line, i would write it to the SDRAM, using the following code: --- Quote Start --- while { [gets $fp line] >= 0 } { foreach word [split $line] { master_write_32 $m [expr $SDRAM+[expr ($i*4)]] $line incr i } } --- Quote End --- I left the code running for a whole day just to find that system console froze :( the code worked perfectly for a file with only 10k lines. So i came to ask for help, is there any other way to do this? Is there a way to break the big file into small files using tcl? Thanks for the help.링크가 복사됨
4 Replies
I analyzed the JTAG-to-Avalon-ST/MM protocol (http://www.ovro.caltech.edu/~dwh/correlator/pdf/altera_jtag_to_avalon_analysis.pdf) and found that master_read/write_memory work significantly faster than master_read/write_32. If you are using an older version of Quartus, you can use master_read/write_memory to improve your performance. If you are using the latest version of Quartus, look in the latest version of the handbook (http://www.altera.com/literature/hb/qts/qts_qii53028.pdf) for master_write_from_file and master_read_to_file (see p. 38 of that PDF). I suspect they are optimized for file transfers too.

Cheers, Dave
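For reference, a minimal System Console session using the file-transfer commands mentioned above might look like the following. This is a sketch, not a verified script: the file names and the 0x0 base address are placeholders, and note that these commands transfer raw binary data, so a text file of hex words would need to be converted to binary first. The read-back size assumes the original 1,350,000 words (5,400,000 bytes):

# Sketch: open a master service and transfer a binary file to/from SDRAM.
# data.bin / readback.bin and the 0x0 base address are placeholders.
set m [lindex [get_service_paths master] 0]
open_service master $m

;# write the whole binary file to SDRAM in one command
master_write_from_file $m "data.bin" 0x0

;# read 5,400,000 bytes back for verification
master_read_to_file $m "readback.bin" 0x0 5400000

close_service master $m

Because the command streams the file in one operation, it avoids both the per-word transaction overhead and the cost of holding the entire data set in a Tcl variable.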
I was using version 13.0 and was not aware of those new commands, master_write_from_file and master_read_to_file, in 13.1.
I will upgrade to the latest version and try them. Thanks!
It worked wonders: it took 55 seconds to read 6M lines with 4 bytes per line.
Thanks dwh!
--- Quote Start ---
It worked wonders: it took 55 seconds to read 6M lines with 4 bytes per line.
--- Quote End ---

Yeah, I noticed that memory_read/write_bytes was significantly faster too :)

Cheers, Dave
