Hello Altera Forums! I have been a long-time lurker, and am making my first post. I have been searching for my particular issue, but that has not yielded any specifically useful results.

Let me get you acquainted with my general setup and desired overall outcome:

software: Quartus 13.0 sp1 Subscription Edition (currently running NIOS and TSE as OpenCore Plus, but I do have licenses for both)
board: Custom Stratix IV EP4SE530 (27 Mbit RAM, 530k LE) with 2x 256MB DDR3 components
mac: Altera Triple-Speed Ethernet (RGMII @ 100Mbps), so I'm using a 25 MHz TX clock on the RGMII interface
phy: Micrel KSZ9021RN, RGMII only, 10/100/1000
qsys: NIOS II /f with no flash and a connected 4 Mbit onchip_mem (using this for instruction/data storage; will update code via FPGA recompile)

I started with a Stratix IV Standard Ethernet example for a development kit, imported it into 13.0 sp1, removed the CFI Flash Controller, and added the DDR3 controllers. So far I've run the HAL/ucosii hello world example, count_binary (LED PIO), and successful memory tests using the "Memory Test Small" example, so most of the NIOS here is working. The Ethernet subsystem is exactly the same as the ethernet standard example (https://www.altera.com/support/support-resources/design-examples/intellectual-property/embedded/nios...), except in the top level my DDR3 is not connected to instruction memory.

At this point I pretty much generated an SSS example in NIOS SBT, hard-coded the MAC/IP/subnet/default gateway, and added detection support for the Micrel PHY (it auto-negotiates and detects the correct link speed). In SignalTap I am watching packets from the RGMII interface arrive, go through the MAC, and into the Avalon-ST RX SGDMA. The SGDMA then sends an IRQ to the NIOS, which gets handled by 'tse_sgdmaRx_isr'. This happens over and over, but no ping packets I have sent have ever been responded to. Here's where things get strange. The descriptors are not pointing to the packet...
They are pointing into the onchip_mem! To give an outline of what is happening, here is the relevant memory map:

sdram: 0x00000000 - 0x0fffffff (256 MByte)
sdram2: 0x10000000 - 0x1fffffff (256 MByte)
onchip_mem: 0x20080000 - 0x200fffff (4 Mbit)
descriptor_memory: 0x20100000 - 0x20100fff (4 KByte)

As a sanity check, here is a random descriptor I ripped using the System Console. I did this to make sure the debugger wasn't lying :rolleyes:

% master_read_memory $mypath 0x20100040 40
source: 0x00 0x00 0x00 0x00 (indicates Avalon-ST)
reserved: 0x00 0x00 0x00 0x00
destination: 0xdc 0x97 0x0d 0x20 (memory written to)
reserved: 0x00 0x00 0x00 0x00
next_desc_ptr: 0x60 0x00 0x10 0x20 (desc + 0x20)
This means the destination is 0x200d97dc, which, by the way, is WRONG! But why? What am I missing? Let's do another sanity check and read that address.
% master_read_memory $mypath 0x200d97dc 40 (onchip_mem access)
0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00
Yup, pretty much what I expected. No packet! Well, obviously there isn't going to be a packet there... the SGDMA isn't even connected to the onchip_mem; it's connected to the sdram. So if we look at the sdram at that address (which sort of makes sense?), take a look at what we find:

% master_read_memory $mypath 0x000d97dc 80 (sdram access)
0x00 0x00 0xff 0xff 0xff 0xff 0xff 0xff 0x00 0x1e 0x68 0x7a 0xf5 0x94 0x08 0x06 0x00 0x01 0x08 0x00 0x06 0x04 0x00 0x01 0x00 0x1e 0x68 0x7a 0xf5 0x94 0xc0 0xa8 0x01 0x01 0x00 0x00 0x00 0x00 0x00 0x00 0xc0 0xa8 0x01 0x0a 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0xf1 0x7e 0x6c 0xdb 0x00 0x00 0x00 0x01 0x00 0x00 0x00 0x00 0x00 0x00 0x09 0x6e 0x69 0x67

Well, that definitely looks like an Ethernet packet. In fact, it's the packet sent from my laptop, with the MAC address 00:1e:68:7a:f5:94!!! I'm not entirely sure why all the ST packets start with 0x00 0x00; I'll need to read into that a bit more.

Can anybody please explain to me why the SGDMA descriptor is pointing to 0x200d97dc instead of 0x000d97dc? Is it because my sdram isn't on the instruction memory, and thus no part of the data memory is linked there? I am really just confused at this point, and was not able to find any sort of sgdma_base_address #define in the sgdma_descriptor.h files or anything. I just don't know why the descriptor is claiming to have written to the onchip_mem when a) the data isn't there, and b) it's not even connected to the onchip_mem.

I have attached my Qsys files in case anybody would like to investigate the Qsys setups. The Simple Socket Server program code is largely the same; it just has all the flash elements removed, with everything hardcoded + KSZ9021RN support + a bunch of random printf()s.
In case you don't want to open my Qsys files, here are screen caps:

top level qsys (http://i.imgur.com/xmytg9z.png)
ethernet subsystem (http://i.imgur.com/lum7r32.png)
peripheral subsystem (http://i.imgur.com/nmw8tm7.png)

If there are any other files that I can upload to help you help me, please let me know! Thank you!
It's not a hardware problem, it's a software problem. If you are using the InterNiche TCP/IP stack, there is a special include file (I think it's called ipport.h) where you can redefine macros to allocate/deallocate memory for packets. Some example designs show how to use on-chip memory to speed up the TCP/IP stack, so if you started with one of those examples it will try to write the packet contents to that memory. If you remove those macros it should use the main heap memory instead. You'll find information about that in [url=https://altera.com/content/dam/altera-www/global/en_US/pdfs/literature/an/an440.pdf]application note 440[/url], page 9 onward ("Using Faster Packet Memory").

Just to be sure, check also that your linker script uses the DDR3 memory for heap.
Daixiwen, thank you for the reply! My DDR3 memory is not being used for heap in the linker script, so the issue is probably there. For some reason the linker script NEVER adds either of my 'sdram' or 'sdram2' devices upon BSP generation. My DDR3 does not show up at all, but clearly the tools know it's there, as I can see it when I click on the "Memory Map" button.

linker script tab (http://i.imgur.com/aetvh4o.png)
memory map window (http://i.imgur.com/3pwdhvn.png)

I tried manually adding an 'sdram' memory device to the linker, but it said it could not be found in the SOPC design. At that point the 'sdram' device I created was overlapping addresses with the real one, but I tried pointing the heap at it anyway and compiling. The program immediately gave an error in the Nios terminal, something to do with panic/allocation I believe. I tried generating a BSP from one of the DE2-115 sopcinfo files as a test, and it shows the sdram in the linker script just fine.

Is it because the SDRAM component I am using is 2048 Mbit / 256 MByte? Is the address span too large (28 bits)? What if I attach the sdram to the instruction memory port, would that force generation in the linker script? If you have any suggestions, let me know. For now I am going to directly attach the SGDMAs to the onchip_mem and see if I can get it working that way. Thanks
My sdram devices don't show up in my BSP linker for some odd reason, even if I regenerate a new project. Is it because they require 28 bits to address? They are NOT hooked up to the instruction master, but I don't think that should matter at all. When generating the BSP, the sdram memory devices show up in the Memory Map, but aren't available for me to map to any part of the system. If I try to "Add Memory Device", it says that the device 'sdram' is not recognized in the SOPC info, although it is clearly in the Memory Map. I tried manually mapping the sdram device in the BSP Editor, but this caused the memory addresses to overlap with the real sdram device. I tried pointing my sdram device to the heap and compiling, and it obviously didn't work: I got a dtrap error and no results.

memory map window (http://i.imgur.com/3pwdhvn.png)
generated linker script tab (http://i.imgur.com/aetvh4o.png)

Does anyone know how I can make the sdram show up in the BSP Editor? I thought it should do that automatically. I made a test project with one of the DE2-115 demonstration .sopcinfo files, and that sdram showed up fine without a problem.

After switching the SGDMA memory read/write sources to the onchip_mem (now expanded to 8 Mbits, which is absurd), I can receive the packets, and actually see the wheels turning! Things like the eth struct actually capture the source/destination MAC addresses, and the payload appears to be there. Another sample I tried was the DE2-115 VERY BASIC Tx/Rx_Frame sender. It sends and receives packets! I spied on them in SignalTap and even captured one of the ping packets from a laptop, which had various 192.168.1.1/192.168.1.234 data in it.

But now I am experiencing a completely different issue. It seems that once the NicheStack finds a relevant packet, such as a ping/telnet packet, the whole software goes berserk and just restarts. The debugger states "Stopped due to shared library event".
Even running without the debugger nets the same results; I just don't get to see the "Stopped due to shared library event" error. When the system receives packets that are not relevant (random broadcasts, or pings/telnets to the wrong IP), it doesn't restart. It only happens when the device is targeted. I tried recompiling/cleaning/rebuilding the BSP and Quartus projects, but got the same results. Usually when the system gets the ping packet, I am able to sniff one packet escaping through the TX SGDMA in SignalTap, but after that it seems to just reboot. Interestingly enough, it looks like the ping response packet (all the information in the packet is correct). However, it seems like the TSE is ignoring the packet, since I never see any activity on the network switch. I will look into this and see if I can figure out what's going on. Let me know if you have any suggestions!
The Simple Socket Server is now running flawlessly... well, almost! Separating the .heap into its own memory fixed the "shared library event" issue, probably because the RX DMA was overwriting previously allocated memory. Right now the Simple Socket Server is responding to the ping requests, and even sending the packet to the TSE MAC, which is showing positive "Frames Transmitted OK". I can see the light blinking on the 100Mbps switch I'm hooked up to, but the packet never seems to make it to the computer that is pinging... Looking at the packet in SignalTap shows the following:

signal tap screenshot (http://i.imgur.com/ic1onje.png)
packet decoding (looks okay to me!) (http://i.imgur.com/qvr8rhf.png)

I am going to try to phase-shift the clock 90 degrees, even though I'm only running the RGMII interface at 25MHz (100Mbps). I'm so close!!! Let me know if you have any suggestions!
The packet looks okay, but this is an ARP reply. Do you also see some ping requests from the PC after that ARP reply? Did the PC add the Altera board's MAC address to its ARP cache (type "arp -a" from the command line)?
Nope, not getting the board's MAC in the laptop's ARP table. I probed the RGMII lines that go to the Micrel KSZ9021RN PHY with an oscope, but did not have any luck finding the problem there: everything looked good, and the four RGMII lines definitely had some sort of 0x55 preamble when I was looking at it.

Really, all I can do at this point is verify registers in the PHY, perhaps tweak some of the manufacturer-specific registers, and check pinouts. To reiterate, I do have increasing TransmittedFrames counts in the TSE, so I believe this issue might be outside of the FPGA. Is it possible that there are reflections or something of the sort on the RGMII lines? Perhaps changing drive strength might help? Any suggestions are welcomed :)
You say that you are using the RGMII lines at 25MHz. Are both the TSE MAC and the PHY set in 100Mbps mode? Is the PC also set in 100Mbps mode?

I think I misunderstood your previous post. I thought the "packet decoding" picture was a screenshot of Wireshark on a packet that was actually received by the PC. But if it didn't get through, then yes, it could be a problem on the RGMII lines or the PHY configuration. What you should see in Wireshark when pinging the board:

- ARP request from the PC
- ARP reply from the Altera card
- ICMP echo request from the PC
- ICMP echo reply from the Altera card
Daixiwen, correct, that packet capture was from SignalTap, not the actual NIC. And yes, both the board and PC are in 100Mbps mode.

I have good and bad news, however. Yesterday I was able to get the SSS running! Ping/telnet both worked flawlessly. I powered it down, powered it back up, reprogrammed, and it still worked. Then, after I disconnected the Ethernet cable and tried to reconnect to it, it stopped working. I haven't been able to get it working again since then. How is it possible that I could fully power off, power on, reprogram, then run the SSS successfully twice, and then have it not work on subsequent tries? I'm a bit confused.
Adding a DDIO on the GTX clock seems to have solved my issue! 1400-byte pings and telnet are operating without a hitch. Routing packets through a switch and direct to the PC is working :)

Will update you if anything new happens. Thank you Daixiwen!
Glad to know it works, even if I didn't have anything to do with that! Did you set up timing constraints for the RGMII I/O pins? Needing a DDIO for the clock line could indicate a timing issue. I didn't think it would matter for a 25MHz clock, but maybe it does after all.
--- Quote Start --- Glad to know it works, even if I didn't have anything to do with that! Did you set up timing constraints for the RGMII I/O pins? Needing a DDIO for the clock line could indicate a timing issue. I didn't think it would matter for a 25MHz clock, but maybe it does after all. --- Quote End ---

I did not set up any constraints on the RGMII pins specifically; I assumed that the TSE files had something in there for them. Perhaps that was a bad assumption. I got the idea of the GTX clock DDIO from the DE2-115 dev kit web server example. They do the exact same thing there, shifting the output clock by ~1.5ns.
The TSE sets some timing rules used internally, but doesn't provide timing constraints for the I/O pins; you need to provide them yourself. It looks like Altera uses different solutions for different kits ;) but in that case, if it works, just use what they did on the kit. You'll probably find the correct timing constraints in the design example too.
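For reference, RGMII output constraints in the .sdc usually look something like the sketch below. This is only a sketch, not drop-in constraints: the port and clock names are placeholders for whatever your top level uses, and the delay values have to come from the KSZ9021 datasheet (its setup/hold requirements at TXC) plus your board's trace delays.

```tcl
# Placeholder names/values -- adapt to your design and PHY datasheet.
# 40 ns period = 25 MHz RGMII clock in 100Mbps mode; at gigabit it would be 8 ns.
create_clock -name rgmii_rx_clk -period 40.0 [get_ports phy_rx_clk]

# Constrain TX data/control relative to the forwarded TX clock
set_output_delay -clock rgmii_tx_clk -max 1.0 \
    [get_ports {rgmii_txd[*] rgmii_tx_ctl}]
set_output_delay -clock rgmii_tx_clk -min -0.8 \
    [get_ports {rgmii_txd[*] rgmii_tx_ctl}]
```

With constraints like these in place, TimeQuest will actually report whether the DDIO/phase-shift trick meets the PHY's input timing, instead of leaving the I/O unconstrained and working by luck.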