Nios® V/II Embedded Design Suite (EDS)

Resource Collisions when installing PCIe DD

Altera_Forum

Does anyone know what this could mean ? 

 

Jan 1 00:30:22 linaro-developer kernel: [ 1822.434579] PCI: Device 0000:01:00.0 not available because of resource collisions  

 

I have the sample SG DMA DD and an Arria V Starter Kit with a PCIe design ...

I altered the DD to match the DEVICE_ID ... I am not sure where the resource collision can be resolved. I assume a memory map overlap at the BARs.

 

Thanks, Bob.
Altera_Forum

I am now adding printk's to the kernel and rebuilding it ... I do know where it fails, but I need to track back to why the dev->resource values are start = 0, end = 0xFFFFF for the first resource ... I believe each resource is a block of the memory map allocated to a BAR register.

To make it simple, I first reduced the BAR size to 20 bits ... from 28 bits, since that amount of memory was not available in the map.

I then tried making BAR 0 32-bit and not pre-fetchable, BAR 1 disabled, and BAR 2 32-bit.

I have now reduced to just BAR 0, non-prefetch ...  

I have to believe the system supports non-prefetch memory which is necessary for any control registers. 

 

The particular test that is failing checks dev->resource.start != 0; for the first resource, start is 0 and end is 0xFFFFF, so the test fails and the printk reports a

"resource collision". I need to work back to see why the start value isn't a meaningful number ...

The dmesg output doesn't seem to have any particular errors.
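For reference, here is a minimal sketch of the kind of resource dump I am adding (a hypothetical debug helper, not the actual mach-msm code; it just prints what the kernel currently holds for each BAR):

#include <linux/pci.h>

/* Debug-only sketch: print the resource range the kernel holds for each
 * of the endpoint's six standard BARs. */
static void dump_bar_resources(struct pci_dev *dev)
{
        int i;

        for (i = 0; i < 6; i++) {
                struct resource *res = &dev->resource[i];

                printk(KERN_INFO "BAR%d: start=0x%llx end=0x%llx flags=0x%lx\n",
                       i, (unsigned long long)res->start,
                       (unsigned long long)res->end, res->flags);
        }
}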

 

Any ideas on this ? 

 

Thanks BCD.
Altera_Forum

BAR registers are typically used to access small regions, e.g., no more than 1MB (20 bits). Any larger than this, and a lot of PCs will not even boot; e.g., your 32-bit BAR is asking for a 4GB region in the host memory map. If that host is 32-bit ... well ... that ain't going to happen :)

 

Try smaller BAR regions and see if that helps.  
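As a quick sanity check of what the host actually assigned, you can read the device's sysfs resource file; a minimal user-space sketch (the device address is taken from the kernel message in your first post, everything else is illustrative):

#include <stdio.h>

/* Print start/end/size for every region the kernel assigned to the device.
 * Each line of the sysfs "resource" file is "<start> <end> <flags>" in hex. */
int main(void)
{
        FILE *f = fopen("/sys/bus/pci/devices/0000:01:00.0/resource", "r");
        unsigned long long start, end, flags;
        int bar = 0;

        if (!f) {
                perror("fopen");
                return 1;
        }
        while (fscanf(f, "%llx %llx %llx", &start, &end, &flags) == 3) {
                if (end)
                        printf("BAR%d: 0x%llx-0x%llx (%llu bytes)\n",
                               bar, start, end, end - start + 1);
                bar++;
        }
        fclose(f);
        return 0;
}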

 

Ignore any arguments in favor of large BARs, that discussion can come after you resolve this issue :) 

 

Cheers, 

Dave
Altera_Forum

Thanks Dave, 

 

I did reduce the BAR sizes to 1MB without changing anything else, and there was a dmesg line that indicated the same, but at the end the resource collision message was still the result. The system is an ARM 32-bit system ... I traced the kernel source back to a resource allocation section which I believe must be responsible for setting the resource->start and resource->end values. ... later on in mach-msm, the pcie.c code runs through a 0 .. 6 for loop examining the start and end values for each resource. The failure is where the start value is 0, which I am able to see via a printk. The index value in the for loop was 0 ... and I interpreted that as the BAR0 resource, but may be mistaken.

 

Currently I have reduced to a single BAR0 (32-bit address), and I did notice there are some requirements checked before the resource allocation code does its job ... like checks for whether pre-fetching is allowed or not ... I need to check where on ARM this decision is made. In /proc/iomem I do see space allocated for the memory beyond the PCIe bridge ( Synopsys IP ). The original reference design has BAR0/1 as pre-fetchable and BAR2 not ... I guess this is because BAR2 handles NIOSII IP where pre-fetching could have side-effects that we can't handle. So I will run with and without pre-fetching on BAR0 and see what happens.

 

BTW, 1MB isn't much memory per BAR. The system memory map allocates 128MB of space for PCI ... that seems a fair bit for the BARs and possibly ROM ... but I can see that 4GB is too large, and the dmesg output indicates that.

 

 

 

Best Regards, Bob
Altera_Forum

 

--- Quote Start ---  

 

I did reduce the BAR sizes to 1MB without changing anything and there was a dmesg line that indicated the same but at the end the resource collision message was still the result. The system is an ARM 32 bit system ... 

 

--- Quote End ---  

 

As a debug "tool", I'd try booting the board in an x86 host and see what the Linux messages are on that machine. The "problem" may not be due to your PCIe setup, but due to the PCIe configuration logic in the ARM bootloader. 

 

 

--- Quote Start ---  

 

I traced the kernel source back to a resource allocation section which I believe must be responsible for setting the resource->start and resource->end values. ... later on in mach-msm, the pcie.c code runs through a 0 .. 6 for loop examining the start and end values for each resource. The failure is where the start value is 0, which I am able to see via a printk. The index value in the for loop was 0 ... and I interpreted that as the BAR0 resource, but may be mistaken.

 

--- Quote End ---  

 

Sorry, I haven't dug around in that code. The place to look might actually be in the bootloader. If you boot to U-Boot, then you should be able to browse the PCIe devices. 

 

 

--- Quote Start ---  

 

I will run with and without pre-fetching on BAR0 and see what happens .  

 

--- Quote End ---  

 

I have not seen any hosts have problems with PCI/PCIe BARs marked as pre-fetchable versus not. I typically make the regions pre-fetchable and then make sure there are no read "side effects". 

 

 

--- Quote Start ---  

 

BTW, 1MB isn't much memory per BAR. The system memory map allocates 128MB of space for PCI ... that seems a fair bit for the BARs and possibly ROM ... but I can see that 4GB is too large, and the dmesg output indicates that.

 

--- Quote End ---  

 

Type lspci on your host PC and look at the BAR sizes. I doubt you will find many with large BAR sizes. 

 

The BAR regions allow host CPU access, and that access is typically terribly low performance. All of your "real work" should be done using DMA, and the DMA controller usually exists on the peripheral board, and the peripheral board needs to have bus master capability. The DMA controller can access/generate 32-bit and 64-bit PCIe addresses, so it can "see" the complete memory map of the PCIe segment it resides on. 
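For reference, the host-side setup for that bus-mastering DMA path usually amounts to a few generic kernel DMA API calls; a minimal sketch (the priv structure and buffer size are illustrative, and this is not the Altera reference driver):

#include <linux/pci.h>
#include <linux/dma-mapping.h>

#define BUF_SIZE (64 * 1024)            /* illustrative buffer size */

struct my_priv {                        /* hypothetical driver-private state */
        void *buf;
        dma_addr_t buf_dma;
};

static int setup_dma(struct pci_dev *pdev, struct my_priv *priv)
{
        int err;

        err = pci_enable_device(pdev);
        if (err)
                return err;

        pci_set_master(pdev);           /* needed for the endpoint's DMA engine */

        err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
        if (err)
                return err;

        priv->buf = dma_alloc_coherent(&pdev->dev, BUF_SIZE,
                                       &priv->buf_dma, GFP_KERNEL);
        if (!priv->buf)
                return -ENOMEM;

        /* priv->buf_dma is the bus address to program into the endpoint's
         * DMA descriptors (or address translation table). */
        return 0;
}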

 

Another use for BARs is as a window, e.g., you can have a BAR with control registers and another that is the "window", and via the control registers you can set a base address that defines where the "window" looks. I have PowerPC processor boards where I can use a 1MB BAR to copy FPGA images of many MB to DDR memory and then run PowerPC DMA controller tests. The key is that a large BAR is not needed to access all of the memory on your board if you just want to "poke" around.
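A rough sketch of that windowing idea (every register offset here is made up for illustration; the real layout is whatever you define in Qsys):

#include <linux/kernel.h>
#include <linux/io.h>

#define WINDOW_BASE_REG 0x10            /* hypothetical control register in BAR0 */
#define WINDOW_SIZE     0x100000        /* 1MB window exposed by another BAR     */

/* Copy a buffer to an arbitrary board address through a small window BAR:
 * point the window at the right 1MB region, then write through it.
 * (Assumes the buffer fits inside one window position.) */
static void copy_to_board(void __iomem *ctrl, void __iomem *win,
                          u64 board_addr, const u32 *data, size_t words)
{
        u64 base = board_addr & ~(u64)(WINDOW_SIZE - 1);
        size_t i;

        iowrite32(lower_32_bits(base), ctrl + WINDOW_BASE_REG);

        for (i = 0; i < words; i++)
                iowrite32(data[i], win + (board_addr - base) + i * 4);
}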

 

Cheers, 

Dave
Altera_Forum

Hi Dave and team, I owe you an answer here ... I wasn't really getting anywhere adding printk messages to the PCIe kernel code, so I elected a different approach.

I tried a $30 dual USB PCIe endpoint from Fry's and it had resources allocated just fine. I recorded the dmesg output to a text file, then did the same with the Arria V Starter Kit board with my Gen1 x1 PCIe design and again recorded the dmesg output to a text file. I then diffed the two outputs and evaluated the differences. Oh, I should say that I modified the FPGA design to match the BARs and sizes that were showing up on the dual USB endpoint ... it still failed with a "resource conflict". Anyhow, the first difference was the Class code returned from the endpoint config space. The FPGA was returning 0x00000000, since I was not setting it to anything, and the USB PCIe endpoint card returned a non-zero value indicating some serial bus endpoint. So ... I copied that class code to the FPGA design and the resources were allocated as expected. I have not traced through the PCIe kernel code to determine why a Class of 0x00000000 causes problems, but the PCIe reference indicates a value of 0x00000000 means some early adapter, so I assume the kernel code does not support a 0x00000000 class code.
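If anyone else hits this, the class code the endpoint is advertising is easy to confirm from the driver probe routine; a minimal sketch:

#include <linux/pci.h>

/* Sketch: read back the class-code/revision dword from config space.
 * The upper 24 bits are the class code; 0x000000 is the legacy
 * "built before class codes were defined" value that caused the
 * trouble described above. */
static void check_class_code(struct pci_dev *pdev)
{
        u32 class_rev;

        pci_read_config_dword(pdev, PCI_CLASS_REVISION, &class_rev);
        dev_info(&pdev->dev, "class code = 0x%06x, revision = 0x%02x\n",
                 class_rev >> 8, class_rev & 0xff);

        /* The kernel also caches the class code in pdev->class. */
}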

 

I have made progress on the Linux device driver based on the 2 samples provided by Altera. I also found that the Linux I have doesn't support the pre-fetchable memory attribute on the BARs.

And I set BAR0 to 32-bit address space and defined BAR1 as a 32-bit address space to access the DMA controller and mailboxes for NIOS <-> ARM system ( RC ) communications.

 

I now have IOCTL functions supporting the ARM system as a "producer" and have NIOS code to implement the "consumer" side ...  

 

I plan to add dedicated PCI ordering-check state machines at the output of the hard PCIe IP, at the Avalon-MM BAR0 output and where read completion data is mastered ... to monitor dedicated "data" coming down the BAR0 master and a dedicated "flag" pattern which will be a NIOS read completion. As a debug tool I plan to have the order-tracking state machine inputs ( patterns ) drive FPGA outputs to drive user LEDs and be tracked by a logic analyzer.

 

Dave, on a new subject, I want to run CvP to manage the FPGA configuration from the system side ... but I also want to load the NIOS .elf or .flash code from the system side as well. Where should I go for advice on, say, configuring the NIOS FLASH from the system side? ... I believe maintaining both from the system side is the only reasonable way to manage "configuration" and "code" levels, since as a manual operation it would be difficult to manage. Would it be as easy as adding a FLASH controller and having the system update the FLASH directly with a new .flash file?

Thanks, Bob.
Altera_Forum

 

--- Quote Start ---  

I owe you an answer here ...  

--- Quote End ---  

 

Thanks! :) 

 

 

--- Quote Start ---  

 

I wasn't really getting anywhere adding printk messages to the PCIe kernel code, so I elected a different approach.

I tried a $30 dual USB PCIe endpoint from Fry's and it had resources allocated just fine. I recorded the dmesg output to a text file, then did the same with the Arria V Starter Kit board with my Gen1 x1 PCIe design and again recorded the dmesg output to a text file. I then diffed the two outputs and evaluated the differences. Oh, I should say that I modified the FPGA design to match the BARs and sizes that were showing up on the dual USB endpoint ... it still failed with a "resource conflict". Anyhow, the first difference was the Class code returned from the endpoint config space. The FPGA was returning 0x00000000, since I was not setting it to anything, and the USB PCIe endpoint card returned a non-zero value indicating some serial bus endpoint. So ... I copied that class code to the FPGA design and the resources were allocated as expected. I have not traced through the PCIe kernel code to determine why a Class of 0x00000000 causes problems, but the PCIe reference indicates a value of 0x00000000 means some early adapter, so I assume the kernel code does not support a 0x00000000 class code.

 

--- Quote End ---  

 

I haven't tried a class code of 0, so that one hasn't tripped me up yet :)

 

 

--- Quote Start ---  

 

I have made progress on the Linux device driver based on the 2 samples provided from Altera. I also found that the Linux I have doesn't support pre-fetchable memory attribute on the BARS.  

 

--- Quote End ---  

 

In what way? 

 

The tests I did in this thread (see the PDF) 

 

http://www.alteraforum.com/forum/showthread.php?t=35678 

 

have pre-fetchable regions. 

 

 

--- Quote Start ---  

 

And I set the BAR0 to 32 bit address space and defined the BAR1 as 32 bit address to access the DMA controller and mail boxes for NIOS <-> ARM system ( rc ) communications . 

 

I now have IOCTL functions supporting the ARM system as a "producer" and have NIOS code to implement the "consumer" side ...  

 

--- Quote End ---  

 

Why IOCTL codes? You can use the mailboxes and interrupts to create an inter-locked handshake. Here's an old PCI example ... 

 

http://www.ovro.caltech.edu/~dwh/correlator/pdf/cobra_driver.pdf 
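Roughly, the inter-locked handshake looks like this on the host side (the mailbox offsets, names, and IRQ wiring are hypothetical; the ISR would be registered against your MSI vector elsewhere):

#include <linux/io.h>
#include <linux/interrupt.h>
#include <linux/completion.h>

/* Hypothetical mailbox layout inside a BAR: one register the host writes
 * (raises an interrupt at the NIOS II) and one the NIOS II writes back
 * (raises an MSI at the host). */
#define MBOX_H2D 0x00
#define MBOX_D2H 0x04

struct mbox_dev {
        void __iomem *regs;             /* ioremap'ed BAR */
        struct completion reply;
        u32 last_reply;
};

static irqreturn_t mbox_isr(int irq, void *arg)
{
        struct mbox_dev *md = arg;

        md->last_reply = ioread32(md->regs + MBOX_D2H);
        complete(&md->reply);
        return IRQ_HANDLED;
}

/* Send a command and block until the NIOS II answers (or we time out). */
static int mbox_transact(struct mbox_dev *md, u32 cmd, u32 *reply)
{
        reinit_completion(&md->reply);
        iowrite32(cmd, md->regs + MBOX_H2D);
        if (!wait_for_completion_timeout(&md->reply, HZ))
                return -ETIMEDOUT;
        *reply = md->last_reply;
        return 0;
}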

 

 

--- Quote Start ---  

 

I plan to add dedicated PCI ordering-check state machines at the output of the hard PCIe IP, at the Avalon-MM BAR0 output and where read completion data is mastered ... to monitor dedicated "data" coming down the BAR0 master and a dedicated "flag" pattern which will be a NIOS read completion. As a debug tool I plan to have the order-tracking state machine inputs ( patterns ) drive FPGA outputs to drive user LEDs and be tracked by a logic analyzer.

 

--- Quote End ---  

 

Why do you need this? If you are DMAing data, then use an interrupt to indicate the end of the transfer. 

 

 

--- Quote Start ---  

 

Dave, on a new subject, I want to run CvP to manage the FPGA configuration from the system side ... but I also want to load the NIOS .elf or .flash code from the system side as well. Where should I go for advice on, say, configuring the NIOS FLASH from the system side? ... I believe maintaining both from the system side is the only reasonable way to manage "configuration" and "code" levels, since as a manual operation it would be difficult to manage. Would it be as easy as adding a FLASH controller and having the system update the FLASH directly with a new .flash file?

 

--- Quote End ---  

 

Here's what I would do (assuming you have SDRAM on the board for the NIOS II processor):

 

1. Instantiate a NIOS II core that boots from SDRAM.  

 

This core powers-on with its reset asserted. The reset register would be something the host can toggle, e.g., an Avalon-MM register.

 

2. Use the host to copy the NIOS II images into SDRAM. 

 

3. Use the host to release (deassert) the reset line. 

 

The NIOS II processor would then boot. The host can wait for a message from the NIOS II in the mailbox. 

 

Here I assume that you have a "mailbox" for the host-to-device and another for device-to-host transfers, and that these two mailboxes generate interrupts at their respective device. 
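A rough host-side sketch of steps 2 and 3 (the reset register offset and the SDRAM BAR window are assumptions; they depend entirely on your Qsys map):

#include <linux/io.h>

#define NIOS_RESET_REG  0x00            /* hypothetical Avalon-MM register driving the core's reset */
#define NIOS_RESET_HOLD 0x1
#define NIOS_RESET_RUN  0x0

/* ctrl  = ioremap'ed BAR holding the reset register
 * sdram = ioremap'ed BAR window onto the NIOS II code memory
 * image = raw binary already laid out for the reset/exception addresses */
static void load_and_start_nios(void __iomem *ctrl, void __iomem *sdram,
                                const u32 *image, size_t words)
{
        size_t i;

        iowrite32(NIOS_RESET_HOLD, ctrl + NIOS_RESET_REG);   /* keep the core in reset */

        for (i = 0; i < words; i++)                          /* step 2: copy the image */
                iowrite32(image[i], sdram + i * 4);

        iowrite32(NIOS_RESET_RUN, ctrl + NIOS_RESET_REG);    /* step 3: release reset  */
}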

 

Cheers, 

Dave
Altera_Forum

Dave, thanks for the reply.

 

"have pre-fetchable regions." 

I believe pre-fetchable space is generally supported, but the particular Linux I am working with, which is a development version, may not support pre-fetchable memory on the endpoint somewhere down in the kernel resource-related code ... I can investigate this, but not right now.

 

"Why IOCTL codes? You can use the mailboxes and interrupts to create an inter-locked handshake. Here's an old PCI example ..." 

I am just using IOCTLs at this time to provide some simple system-side facility to test code running at the application level.

 

"Why do you need this? If you are DMAing data, then use an interrupt to indicate the end of the transfer." 

I can understand why this may not be obvious ... the short answer is that I am taking a verification test suite which includes a "producer / consumer" stress test and supporting it in an emulation and post-silicon environment. The producer / consumer test could use an interrupt to signal that the transfer is complete, but the implemented producer / consumer test transfers data and has a flag and status value to synchronize between the producer and consumer. The flag and status can be randomly mapped to endpoint memory or system memory in all possible permutations. So the ARM processor in the system produces data and the NIOS core consumes data. The reason for this is to stress the bridges between the producer and consumer, which need to obey PCI ordering rules.

 

So ... this isn't a simple DMA data transfer but a stress test to run verification tests in a post-silicon validation environment. 

 

In general MSI interrupts will stay in order across bridges, since the MSI is just another write and will stay ordered behind the write data. In the producer / consumer test the write data may target endpoint memory while the endpoint NIOS polls the flag in system memory to determine whether the data write has completed. If the bridges honor PCI ordering rules, the flag read completion will flush posted data writes in the bridges. I am stressing that activity in this test. I hope that makes sense, as a "normal" data transfer may involve DMA followed by an interrupt, but we can't constrain all activity to be "normal".
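For context, the producer step on the host side is conceptually just this (the flag location and offsets change per permutation in the real test; this sketch shows the permutation where both the data and the flag live in endpoint memory):

#include <linux/io.h>

/* Illustrative producer step: post the data block to endpoint memory
 * through the BAR, then write the flag.  PCIe ordering must keep the
 * flag write (and any completion the consumer's flag read generates)
 * behind the posted data writes in every bridge along the way. */
static void produce(void __iomem *ep_mem, void __iomem *flag,
                    const u32 *data, size_t words, u32 seq)
{
        size_t i;

        for (i = 0; i < words; i++)
                iowrite32(data[i], ep_mem + i * 4);

        wmb();                          /* CPU-side ordering before the flag */
        iowrite32(seq, flag);           /* "data valid" flag the consumer polls */
}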

 

 

 

Best Regards, Bob
Altera_Forum

Hi Bob, 

 

Thanks for the clarifications. It sounds like you've got things under control :) 

 

Cheers, 

Dave
Altera_Forum

 

--- Quote Start ---  

Hi Bob, 

 

Thanks for the clarifications. It sounds like you've got things under control :) 

 

Cheers, 

Dave 

--- Quote End ---  

 

 

Thanks Dave, I very much appreciate your support. 

 

The local supplier rep. wasn't sure about loading the NIOS executable over the PCIe link and your suggestion makes sense. 

Right now the NIOS is executing from internal memory since the code is small and it was an easy starting place. 

 

If I can figure out how to load the .elf into that memory, I believe you are saying the NIOS core can be released from reset, or controlled via a register output, to start executing at the reset vector, and the system ( RC ) side could do the load and the reset register operation.

 

I will investigate that approach.  

 

Thanks, Bob.
Altera_Forum

Dave,

 

on re-reading your suggestion to load the NIOS code via the PCIe link ... I now understand you are saying to load up the SRAM data and the SRAM controller will be in reset ... upon releasing that reset the NIOS core will start ... does that mean that the NIOS core will be stalled with a fetch to the reset vector memory location, until the SRAM controller is active and can honor the first Avalon MM bus read request ? 

 

Thanks, Bob
Altera_Forum

Hi Bob, 

 

--- Quote Start ---  

 

on re-reading your suggestion to load the NIOS code via the PCIe link ... I now understand you are saying to load up the SRAM data and the SRAM controller will be in reset ... upon releasing that reset the NIOS core will start ... does that mean that the NIOS core will be stalled with a fetch to the reset vector memory location, until the SRAM controller is active and can honor the first Avalon MM bus read request ? 

 

--- Quote End ---  

 

If you assert the NIOS II processor reset pin, it will sit there doing absolutely nothing. If the NIOS II processor is supposed to boot from SRAM or an external SDRAM, then it will not boot until reset is released. 

 

I use a "similar" scheme for booting a DSP. The DSP external bus routes through an FPGA. The "memory map" of that FPGA normally locates flash at the DSP reset location, however, by changing an FPGA register, I can remap SDRAM over the flash locations. This allows me to copy DSP boot code into SDRAM, reset the DSP, flip the DSP address decoding bit, and then release the DSP reset, and viola, it boots from SDRAM (none the wiser). This allows the flash to contain a basic image, whereas my run-time code gets delivered from a version controlled file-system. I never have to wonder which version of code is running on the DSP, as I know for certain it is the latest code. 

 

Cheers, 

Dave
Altera_Forum

Hi Dave, I have the "producer / consumer" test running with my thin Linux device driver ...  

 

You may have an idea on this ... and I assume it is a simple case of understanding how NIOS AVALON MM slave space is reached via 

BAR addresses in the Linux device driver. 

 

"The second issue I have relates to setting up the PCIe core Avalon MM -> PCIe address translation .. this is at a table with 2 entries of 4k bytes each. 

I am trying to track it down . This Cra register space is accessed via BAR1 and contains the physical address of the system DMA memory address. 

For some reason , that BAR1 write is ending up at a different location in IMEM that is just below the Cra space . I have an idea it could be something 

to do with the BAR address matching scheme and reference to NIOS address map. The IMEM is at 0x00010000 - 0x0001ffff and the CRA slave is at  

0x00020000 - 0x00023fff ... the write to the translation tables at offest 0x00021000 in the CRA space seems to end up at 0x00011000 in the IMEM." 

 

Thanks, Bob
Altera_Forum

 

--- Quote Start ---  

 

Hi Dave, I have the "producer / consumer" test running with my thin Linux device driver ...  

 

--- Quote End ---  

 

Awesome! 

 

 

--- Quote Start ---  

 

You may have an idea on this ... and I assume it is a simple case of understanding how NIOS AVALON MM slave space is reached via 

BAR addresses in the Linux device driver. 

 

--- Quote End ---  

 

If you look in the Qsys memory map (use the memory map tab), it will show you the addresses for each Avalon-MM master. The BAR registers are a PCIe slave, but an Avalon-MM master. If you have a BAR that maps to a 4K block of Avalon-MM registers, then the PCIe 64-byte address should map to an Avalon-MM master address, and then that will map to your slave (with address LSBs dropped depending on whether it's an 8-bit, 16-bit, or 32-bit slave).

 

 

--- Quote Start ---  

 

"The second issue I have relates to setting up the PCIe core Avalon MM -> PCIe address translation .. this is at a table with 2 entries of 4k bytes each. 

I am trying to track it down . This Cra register space is accessed via BAR1 and contains the physical address of the system DMA memory address. 

 

--- Quote End ---  

 

Huh? The CRA registers are for your Avalon-MM masters to use, I don't think it was intended for you to loop back onto a BAR register for PCIe access. 

 

 

--- Quote Start ---  

 

For some reason, that BAR1 write is ending up at a different location in IMEM that is just below the CRA space. I have an idea it could be something to do with the BAR address matching scheme and the reference to the NIOS address map. The IMEM is at 0x00010000 - 0x0001ffff and the CRA slave is at 0x00020000 - 0x00023fff ... the write to the translation tables at offset 0x00021000 in the CRA space seems to end up at 0x00011000 in the IMEM."

 

--- Quote End ---  

 

Sorry, I'm not sure what the IMEM you are referring to is. 

 

Modelsim simulation of the system would help resolve addressing issues like this. 

 

Cheers, 

Dave
Altera_Forum

If the nios is in 'soft reset' it actually fetches the first instruction every few clocks. 

This doesn't matter - but can be confusing in traces. 

 

It ought to be valid to write the outbound address translation memory from the PCIe master (I'm pretty sure we have a system that does it).
Altera_Forum

I want to get ModelSim running for the PCIe endpoint design I have.

I have the ModelSim GUI up after going through the Arria PCIe user guide steps ... but am missing the following.

1) I would like to run the sample testbench against the endpoint to go through training and some basic operations, but I can't find the sample testbench.

2) If I find the sample testbench ... how do I start the simulation?

 

Best Regards. BCD.
Altera_Forum

 

--- Quote Start ---  

Huh? The CRA registers are for your Avalon-MM masters to use, I don't think it was intended for you to loop back onto a BAR register for PCIe access.

Sorry, I'm not sure what the IMEM you are referring to is.

Modelsim simulation of the system would help resolve addressing issues like this.

Cheers,

Dave

--- Quote End ---  

 

 

 

Dave, the example device I have sets the CRA translation table entries up via a BAR register at the end of the DD probe code ... just before the endpoint is enabled.

The NIOS system I have only has IMEM ... I have IMEM at 0x00010000 - 0x0001ffff and CRA at 0x00020000 - 0x00023fff. Somehow, when I try to set the CRA tables at 0x00021000, the table entry gets written to IMEM at 0x00011000 ... I think this may be due to the BAR1 offset I am giving, which is the absolute offset and not a relative offset. Thanks, Bob.
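If that is what's happening, the fix in the probe code is to use the offset within BAR1 rather than the absolute Avalon address; a sketch with the addresses from my Qsys map (BAR1's Avalon-MM window starts at the CRA base, 0x00020000, so the translation table sits at offset 0x1000; the exact entry format should be checked against the core's user guide):

#include <linux/kernel.h>
#include <linux/pci.h>
#include <linux/io.h>

#define CRA_ATT_OFFSET 0x1000           /* translation table, i.e. Avalon 0x00021000 */

static void set_att_entry0(struct pci_dev *pdev, dma_addr_t sys_addr)
{
        void __iomem *cra = pci_iomap(pdev, 1, 0);   /* BAR1 -> CRA slave */

        if (!cra)
                return;

        /* Wrong: offsetting by the absolute Avalon address (0x00021000),
         * which is what aliased the write into IMEM here.
         * Right: offset relative to where BAR1's window starts. */
        iowrite32(lower_32_bits(sys_addr), cra + CRA_ATT_OFFSET);
        iowrite32(upper_32_bits(sys_addr), cra + CRA_ATT_OFFSET + 4);

        pci_iounmap(pdev, cra);
}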
Altera_Forum

 

--- Quote Start ---  

Here's what I would do (assuming you have SDRAM on the board for the NIOS II processor):

1. Instantiate a NIOS II core that boots from SDRAM. This core powers-on with its reset asserted. The reset register would be something the host can toggle, e.g., an Avalon-MM register.

2. Use the host to copy the NIOS II images into SDRAM.

3. Use the host to release (deassert) the reset line.

The NIOS II processor would then boot. The host can wait for a message from the NIOS II in the mailbox.

Cheers,

Dave

--- Quote End ---  

 

 

Hi Dave, been working on this for almost a year now on and off.  

 

I am interested in your statement that the NIOS II core powers-on with reset asserted ... possibly that is the reason the NIOS II doesn't start up automatically when I commit both the FPGA configuration and the NIOS II software to FLASH. I can't see any mention in the examples of having to explicitly release the NIOS II reset ( or to generate a reset "interrupt" to force execution at the reset vector ). Can you point me to anything there, as I believe I am close to running the FPGA + NIOS II out of FLASH.

 

Anyhow ... I had an idea on the NIOS II software load via the host. Can you comment on this. 

 

1. Since the Altera cards all have FLASH, start the NIOS II out of flash executing boot loader code.

2. Modify the boot loader to expect the code to be found at SSRAM location x.

3. Modify the boot loader to poll a doorbell register set by the host via some register mapped to a BAR.

4. Have the host load the code at SSRAM location x.

5. Have the host set the doorbell, indicating to the boot loader to go and load the code in SSRAM and begin execution of the user code.

 

The alternative would be to have the FLASH accessed via a BAR register, and when the user SW FLASH is to be updated, have the host update the user FLASH code and then restart the NIOS II core ... that may be more straightforward.
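The boot-loader side of steps 3 and 5 could be as simple as this on the NIOS II (the doorbell address and SSRAM location x are placeholders, not from a real BSP):

#include <stdint.h>

/* Placeholders: a doorbell register the host can reach through a BAR,
 * and the SSRAM location 'x' where the host deposits the user image. */
#define DOORBELL_REG  ((volatile uint32_t *)0x00030000)
#define SSRAM_CODE_X  ((void (*)(void))0x00100000)

int main(void)
{
        /* Wait for the host to finish loading SSRAM and ring the doorbell,
         * then jump to the user code's entry point. */
        while (*DOORBELL_REG == 0)
                ;                       /* simple poll */

        SSRAM_CODE_X();                 /* does not return */
        return 0;
}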

 

Regards, Bob.
Altera_Forum

Dave  

 

re-reading your post ... I think you indicate the NIOS II core is held in reset ... then released via the host to allow the NIOS II to fetch the first instruction.

Let me look at that also, which sounds simple ... I just have to have the host load the SSRAM the same way the Eclipse system would process the .elf file to get it loaded into SSRAM.
Altera_Forum

Are there any tricks in converting from an environment controlled via Eclipse ( debug JTAG port ) to a system that is stand-alone and starts without any Eclipse interaction?

I would just like the JTAG UART to still communicate with a NIOS terminal window or something similar.

 

Thanks, Bob.
Altera_Forum

 

--- Quote Start ---  

Are there any tricks in converting from an environment controlled via Eclipse ( debug JTAG port ) to a system that is stand-alone and starts without any Eclipse interaction?

I would just like the JTAG UART to still communicate with a NIOS terminal window or something similar.

 

--- Quote End ---  

 

There is a NIOS II host-side console (command-line) program: nios2-terminal.

 

Depending on your hardware setup, you may find it easier to move away from using JTAG for your UART (since it is inherently polled) to an FPGA-based UART connected to a UART-to-USB bridge (meaning that you do not have to have RS-232 level translators). For example, FTDI has cables ...

 

http://www.digikey.com/product-detail/en/c232hm-ddhsl-0/768-1106-nd/2714139 

http://www.digikey.com/product-detail/en/c232hd-ddhsp-0/768-1011-nd/2767783 

 

Cheers, 

Dave