Hello,
I am very new to Quartus II 9.1 software and VHDL.
While implementing a counter with a decoder, I found the results in the simulator window unreliable.
The output waveform shows strange artifacts depending on the decoded data.
Please take a look at my code, which I have reduced to the minimum needed to show the error.
There is one Gray-code counter, driven by a 125 MHz clock line, and a decoder section that outputs values to the pins in a case construct.
The strange thing is that, depending on the values put on the pins, the whole waveform changes.
What am I doing wrong?
Code:
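(The code attachment is not reproduced here. As a rough, hypothetical sketch only, not the original source, the structure described above, a Gray-code counter plus a purely combinational case decoder driving the pins, could look something like the following; the signal names are taken from later posts, and the number of steps and the decoded values are placeholders.)

library ieee;
use ieee.std_logic_1164.all;

entity gray_decoder is
    port (
        pinClk125MHz    : in  std_logic;
        pinsCypressData : out std_logic_vector(15 downto 0)
    );
end entity gray_decoder;

architecture rtl of gray_decoder is
    -- Hypothetical 2-bit Gray sequence; the real design presumably has more steps.
    constant GRAY_STEP_0 : std_logic_vector(1 downto 0) := "00";
    constant GRAY_STEP_1 : std_logic_vector(1 downto 0) := "01";
    constant GRAY_STEP_2 : std_logic_vector(1 downto 0) := "11";
    constant GRAY_STEP_3 : std_logic_vector(1 downto 0) := "10";
    signal gray_count : std_logic_vector(1 downto 0) := GRAY_STEP_0;
begin
    -- Gray-code counter: exactly one bit of gray_count toggles per clock edge.
    process (pinClk125MHz)
    begin
        if rising_edge(pinClk125MHz) then
            case gray_count is
                when GRAY_STEP_0 => gray_count <= GRAY_STEP_1;
                when GRAY_STEP_1 => gray_count <= GRAY_STEP_2;
                when GRAY_STEP_2 => gray_count <= GRAY_STEP_3;
                when others      => gray_count <= GRAY_STEP_0;
            end case;
        end if;
    end process;

    -- Asynchronous decoder: the pins are driven directly from combinational
    -- logic, which is where the glitches discussed below come from.
    process (gray_count)
    begin
        case gray_count is
            when GRAY_STEP_0 => pinsCypressData <= x"0001";  -- placeholder values
            when GRAY_STEP_1 => pinsCypressData <= x"0203";
            when GRAY_STEP_2 => pinsCypressData <= x"0405";
            when others      => pinsCypressData <= x"0607";
        end case;
    end process;
end architecture rtl;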
I can't see anything wrong in your code. What problems are you seeing? I can see that when you first start simulating you will have "UUUUUUUUUUUUUUUU" on pinsCypressData for one clock cycle.
Thanks for your answer. I attached two screenshots showing the problem. As you can see, with the given output values a short signal change is visible, but it is not there with different values. I believe the pin values should depend on the decoded value, not on "ghosts" inside the chip.
That looks like a post-fit simulation. If it is a code simulation, you appear to have lots of problems. Have you tried simulating the code rather than running a gate-level simulation?
I am unsure about the meaning of your post. What I have done was:
- Compile - Generate Netlist - Start Simulation (the blue triangle with the rectangular wave; there is a warning that this kind of simulation will not be supported in the future). I expect that the different lengths of the "high" and "low" states reflect the different propagation delays to the pins, but the marked "high" state is strange. I will try to download the latest release (9.1 Service Pack 2) and the new simulator. The clock line (125 MHz) is attached to a clock input and declared as a clock inside the simulation setup.
So what you have done is a post-fit simulation, simulating the real hardware.
You can also simulate the code itself to see that it functions as you expect. I find that good practice: correctly functioning code will give you the correct hardware 99% of the time. Code simulation is also massively quicker than post-fit gate-level simulation.
Thanks for the advice. I found where to switch between "functional" and "timing" simulation. I attached two pictures, one for each mode.
As expected, the functional simulation shows (luckily for me) that the code works correctly, but the timing simulation shows unexpected behaviour. So back to the roots: is there any explanation for why the 1 ns spike appears at 70 ns in the timing simulation? I understand that the different delays to the output pins cause the pins to change their values at different times, but the spike? I don't have any idea how to avoid it. I will try to analyse the design in real hardware with a scope. Unfortunately, I had already found unwanted strange behaviour in the hardware implementation and worked my way back down to the code and the simulation, and (surprise) it is unclear there as well. Maybe the volcanic ash from Iceland had more impact than we all recognised. :-)
Hello again,
I have taken a new approach to the problem. I created two projects, one in VHDL and one in AHDL, both doing the same thing. I ran functional and timing simulations for both projects. While the functional simulation is the same for both projects, the timing simulation differs between AHDL and VHDL. The AHDL approach seems to be OK, while the VHDL approach shows unexplainable spikes. I attached both projects and the four screenshots. Meanwhile, I believe there is a bug in the VHDL compiler causing the spikes. I use Quartus II Version 9.1 Build 350 03/24/2010 SJ Web Edition Service Pack 2, but the previous Service Pack 1 version shows the same error. I would be very happy to understand what happened! Best regards, Ekkehard
--- Quote Start --- The AHDL approach seems to be OK, while the VHDL approach shows unexplainable spikes. --- Quote End --- These spikes aren't unexplainable, nor are they a compiler bug. They are basically normal behaviour for an asynchronous decoder. An interesting question is why the VHDL variant shows stronger glitches than the AHDL one; apparently the synthesis is different. You should consider that the delay skew of the AHDL "pinsCypressData" output lines can already cause problems when sampling the data with unsuitable timing. In contrast, registering the decoded VHDL output can remove the glitches.
Thanks for the answer. I would follow your explanation (spikes "are basically normal behaviour with an asynchronous decoder") *if* I had built a binary counter.
*But* I have implemented a Gray-code counter, where from each state to the next only a single bit changes its value. So, by definition, during the change from one state to the next the asynchronous decoder can only see the previous or the next state. If the decoder output decodes to the same value in both states, then the outputs must stay stable even if the decoder input switches randomly between the two states. Correct? I think the AHDL version does this correctly while the VHDL version fails.
--- Quote Start --- So, by definition, during the change from one state to the next the asynchronous decoder can only see the previous or the next state. If the decoder output decodes to the same value in both states, then the outputs must stay stable even if the decoder input switches randomly between the two states. Correct? --- Quote End --- No. You don't have glitches on the Gray-encoded output, but pinsCypressData is not Gray encoded. If more than one input term to the combinational logic for an output bit changes, you can get glitches. By the way, I got glitches in the simulation of your AHDL design, too. P.S.: It's not exactly clear what should be regarded as a glitch in this design. When you look at the AHDL simulation, the decoded output pins change their state at different times, so the output code as a whole already has glitches. With the VHDL design you often see glitches on a single output bit. If you look at the technology map, you realise that each output bit is generated by two cascaded LUTs. At the second LUT stage, multiple input bits can change almost simultaneously, and in that case glitches are normal behaviour. The AHDL design also has cascaded LUTs, but the structure is different. To avoid glitches on individual output bits, the outputs must be registered. You still have the problem of delay skew between the outputs, though, so the output code as a whole can still have glitches. There has been a discussion about whether a LUT output can always be expected to be glitch free when only one input bit changes. The empirical results seem to suggest so, but apparently it can't be guaranteed.
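To make the point about multiple changing terms concrete, here is a generic, hedged illustration of a static-1 hazard; it is not taken from the posted design, and the signal names are made up.

library ieee;
use ieee.std_logic_1164.all;

-- Generic static-1 hazard sketch, not the poster's netlist.
-- hazard_out = (a and not b) or (a and b) is logically just "a", so toggling
-- b while a = '1' should leave hazard_out at '1'. If the two product terms
-- end up in separate LUTs (or gates) with unequal delays, i1 can fall before
-- i2 rises, and hazard_out briefly drops to '0' even though only one input
-- bit changed.
entity static1_hazard is
    port (
        a, b       : in  std_logic;
        hazard_out : out std_logic
    );
end entity static1_hazard;

architecture rtl of static1_hazard is
    signal i1, i2 : std_logic;
begin
    i1 <= a and not b;      -- falls when b rises
    i2 <= a and b;          -- rises when b rises, possibly a little later
    hazard_out <= i1 or i2; -- can glitch low during the overlap gap
end architecture rtl;

The same mechanism applies one level up: even if the Gray-coded counter bits themselves are clean, the intermediate LUT outputs feeding a second LUT can change at different times.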
--- Quote Start --- But pinsCypressData is not Gray encoded. If more than one input term to the combinational logic for an output bit changes, you can get glitches. --- Quote End ---
True, but the observed glitch is on a single pin whose value is not a logically combined value; it is a constant assignment. If I had written pinX <= pin(0) and pin(1) then I would expect the glitch or spike. But the discussion seems to be coming to an end, because my expectations of VHDL and what the compiler creates are not very close together.
The basic answer to all this: don't use asynchronous decoders in FPGAs, use synchronous ones, whatever the language. The glitches just go away (assuming you meet the timing requirements).
When you compile something for the FPGA, you get whatever the synthesiser gives you, which will be a reduction of the Boolean equations you wrote, and that reduction may have glitches. Go synchronous.
Where should the clock come from in the current example?
The 125 MHz source clock for the Gray counter does not seem to work, because there is a large delay between this signal and the counter outputs. If I use the falling edge of the 125 MHz clock, I get tons of timing warnings. I think I should use the decoded states, but I am unsure about this since the outputs of the stages (as shown) are not reliable; I am afraid that decoding a clock from the counter stages will cause glitches on the clock just as it does on the outputs.
Just a few words to close the case.
I made several tests with clocked buffers in front of the output pins, following the hints given by the other users, and it seems to work (at least the spikes are gone). I declared signal CypressData : std_logic_vector(15 downto 0); in the architecture header, replaced every place where "pinsCypressData" was assigned, e.g. when GRAY_STEP_XXX => pinsCypressData <= x"XXXX"; with when GRAY_STEP_XXX => CypressData <= x"XXXX"; and at the very end of the process section I inserted if rising_edge(pinClk125MHz) then pinsCypressData <= CypressData; end if; The amount of logic used increased by about 10 blocks for the 16 pins, and the spikes are gone. I made a similar design with a binary counter instead of the Gray-code stepper, and the clocked outputs are stable as well. My conclusion is that combinational logic driving output pins directly is not reliable, even though the same design built with 74xx chips would work. Strange but true. Thanks for your patience and your help!