Serialized bit phase alignment is deterministic for odd LVDS serialization factors, as used by Channel Link interfaces. In gate-level simulation with ModelSim, I get properly phase-aligned bits out of my LVDS receiver with external PLLs such that, for example, bit 20 into the testbench's LVDS transmitter comes out on bit 20 of the simulated receiver. On hardware, however, a distinctive sync signal that should come out on bit 20 is actually shifted down to bit 18. As a debug attempt, I re-instantiated the Soft LVDS Receiver with the bit-slip (data_align) input active and routed it to some debug switches, and I found that I can successfully shift bit 18 back to bit 20 by pulsing data_align enough times. However, my system does not support sending a training pattern and automatically pulsing data_align the correct number of times as a closed-loop solution. Because of the asymmetric nature of the received LVDS clock, I shouldn't have to do this. FPD-Link / Channel Link ICs do this all the time, so I was hoping to set the serialized phase alignment manually at compile time by properly adjusting the phases of the external PLL's three clocks that drive the LVDS IP. However, I cannot find any decent documentation on how to compute those phase adjustments for the DDIO-based Soft LVDS IP, which is substantially different from the older altlvds megafunction still used in older devices. All the MAX 10 LVDS user guide says on the subject is to instantiate the Soft LVDS Receiver with the internal PLL, then look at the timing settings after a compile to determine the correct settings for the external PLL clock outputs. Nothing I have tried seems to work. Help needed.
Note that my system is attempting to receive Channel Link data, if that was not clear from the first post. The serialized bit alignment defined by Channel Link places the word transition boundary in the center of the high time of the Tx clock period (2/7 T) within the asymmetric 57%-duty-cycle LVDS Tx clock. After some experimentation in simulation, I realized I had previously adjusted the LVDS Tx stimulus to make the Rx data correct rather than making the input word transition occur in the correct location per Channel Link. With that testbench correction, I found a solution in code that works around what I consider a shortcoming of the Soft LVDS megafunction (the inability to set the word boundary relative to clock phase, especially for odd serialization factors where phase alignment is completely deterministic and does not require a training pattern) by merely pulsing the rx_data_align input the desired number of times after PLL lock goes high. It seems the default Soft LVDS megafunction configuration uses bit slip = 0, which I assume means the word boundary in the serial stream is edge-aligned to the rising clock edge, which is not where a Channel Link 1 interface puts it. I get that for even serialization factors there is ambiguity between the Rx clock and the word boundary in the serial stream, so a training pattern must be used to establish the channel. What I'm wondering now is: for an odd serialization factor, will the word alignment always come up correctly due to the 57%-duty-cycle clock? If so, a simple counter that fires off my N-2 rx_data_align pulses after the external PLL locks seems to set up the channel correctly in simulation, but I question how reliably this will come up over many power cycles. The alignment in the Soft LVDS IP is based on DDIO captures to allow the internal serialization clock to be cut in half (which is what makes it work on larger process nodes to begin with), but this adds to the alignment ambiguity.
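For anyone following along, a minimal Python model of the behavior described above may help. This is purely illustrative, not the actual IP: it assumes the deserializer captures a fixed-width window from the serial stream and that each rx_data_align pulse slips that window by one bit, so a capture window that comes up 2 bits off (bit 20 landing on bit 18) is realigned after N-2 = 5 pulses for a 7:1 factor.

```python
# Illustrative model (NOT the real Soft LVDS IP): the deserializer
# captures a 7-bit window from the serial stream; each data_align
# pulse slips the window forward by one bit position.

SERIALIZATION_FACTOR = 7  # odd factor, as in Channel Link

def deserialize(stream, offset):
    """Capture one parallel word starting at the given bit offset."""
    return stream[offset:offset + SERIALIZATION_FACTOR]

# A repeating 7-bit word with a distinctive bit pattern.
word = [1, 0, 1, 1, 0, 0, 1]
stream = word * 4  # several repeated words on the wire

# Suppose hardware comes up with the capture window slipped by 2 bits,
# so the parallel word appears rotated.
hw_offset = 2
assert deserialize(stream, hw_offset) != word

# Pulse data_align N - 2 = 5 times; each pulse advances the window
# by one bit, landing it on the next word boundary.
offset = hw_offset
for _ in range(SERIALIZATION_FACTOR - hw_offset):
    offset += 1

print(deserialize(stream, offset) == word)  # True
```

Under this model, any fixed power-up offset can be corrected by a fixed pulse count, which is exactly the open question above: whether the DDIO capture scheme guarantees the same power-up offset every cycle.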
Any feedback from Intel on this would be helpful.
One more actual question regarding the Soft LVDS Receiver IP: if arbitrary bit-slip values can be set within the deserializer pipeline, why can't users set the bit-slip counter's reset value directly within the megafunction wizard? This would allow the receiver to be configured with a power-up-defined word boundary alignment, and no extra code would be needed just to generate a prescribed number of pulses when using non-clock-edge-aligned word boundaries. A simple diagram or example showing bit alignment versus bit-slip value anywhere in the LVDS user guides would have saved me several days of trying to understand the Soft LVDS IP behavior. Please improve the documentation.
Hello there,
Here is my understanding of your first paragraph:
Data from the LVDS port into the LVDS Rx is not aligned (skewed) with respect to the clock. So in simulation you created a training pattern to learn how many bits to slip in order to see the right word,
but a training pattern is not possible in your system. Hence your question is how to do this without one; correct?
Let me explain: the training pattern has to come from the external world. Adjusting the clock in simulation does not actually reflect the problem you may be facing on hardware.
Can you confirm whether it is clock skew or channel-to-channel skew?
Also, you were saying that if the serialization factor is odd, there is no need for a training pattern? Really? I don't think so. The bit-slip feature is required because of data skew or channel-to-channel skew.
Hence you slip the bits by a clock or more until you see the right data.
For example, you expect to get the data 5A but you get A5 because of channel-to-channel skew. By enabling bit slip and a training pattern, you can see how many clocks you have to slip to get the right data.
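That 5A-versus-A5 example can be sketched as a rotation of the captured word. This is only an illustration under the assumption that bit slip acts as a simple one-bit rotation of the capture window per pulse; the values below are from the example, not from hardware:

```python
# Illustrative sketch: if the capture window is slipped by 4 bits in
# an 8-bit word, the expected 0x5A is received as 0xA5. Slipping 4
# more bit positions rotates it back to the expected value.

def rotl8(word, k):
    """Rotate an 8-bit word left by k bit positions."""
    k %= 8
    return ((word << k) | (word >> (8 - k))) & 0xFF

received = rotl8(0x5A, 4)
print(hex(received))            # 0xa5 -- word captured off-boundary
print(hex(rotl8(received, 4)))  # 0x5a -- realigned after 4 slips
```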
Thank you,
It is not channel-to-channel skew. I have used these and the (formerly) Fairchild Semiconductor equivalent of this serdes receiver for years: http://www.ti.com/lit/gpn/DS90CR288A They have a pre-programmed data word alignment versus the Rx clock, and they assume channel-to-channel skew is controlled by length-matching the PCB trace pairs. My PCB does the same. The issue is that a designer should be able to set the bit-slip count at design time rather than at run time; all register elements in the FPGA fabric can be configured for set or preset on power-up and/or reset. Now that I understand the IP core more completely, I'm suggesting Intel do the following: 1) Document the word alignment versus bit-slip count with at least one timing diagram, and emphasize that not enabling bit slip means the serial data word is aligned with the rising edge of the data clock, and further that increasing the bit-slip count will effectively right-shift (with the LSb wrapping to the MSb) the received bits on the parallel output bus. 2) Add a bit-slip setting to the megafunction wizard which presets the bit-slip counter when "data_align_reset" is asserted, thereby avoiding the need to generate a prescribed number of pulses at runtime.
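The documentation request in point 1 above could be captured in a few lines. The sketch below shows the behavior being asked for: each increment of the bit-slip count right-rotates the 7-bit parallel word, with the LSb wrapping to the MSb. Whether the real IP rotates in exactly this direction is the thing the requested timing diagram would confirm; this is a hypothetical model, not the IP's verified behavior.

```python
# Hypothetical model of bit slip as described in the documentation
# request: slip count k right-rotates the WIDTH-bit parallel word,
# LSb wrapping to MSb. Direction is an assumption, not verified IP
# behavior.

WIDTH = 7

def slip(word, k):
    """Rotate a WIDTH-bit word right by k positions, LSb -> MSb."""
    k %= WIDTH
    mask = (1 << WIDTH) - 1
    return ((word >> k) | (word << (WIDTH - k))) & mask

rx = 0b1011001
print(format(slip(rx, 0), '07b'))  # 1011001 (slip 0: edge-aligned)
print(format(slip(rx, 1), '07b'))  # 1101100 (LSb wrapped to MSb)
```

A full slip of WIDTH pulses returns the original word, which is why a fixed pulse count after PLL lock can realign a deterministic power-up offset.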
Beyond this, the MAX 10 implementation of the Soft LVDS core is buggy. The gate-level RTL simulation model (post-fit, no timing annotation) does not match hardware behavior. My system has two LVDS Rx streams which are generated in the same clock domain within a Cyclone V; each stream is identical except for the data content. In my two identical MAX 10 receivers, I have to set bit slip to 6 in one and 5 in the other on hardware, despite simulation of the .vo EDA netlist indicating that bit slip should be 5 in both. My point is that I see buggy behavior in the MAX 10 LVDS serdes that I do not see when porting the same code to a Cyclone 10 and comparing pre-synthesis versus post-fit RTL simulation. Getting timing closure has also proven problematic: I am having to constrain parallel data paths in the MAX 10 just to keep an internal data bus in sync as it propagates toward a subsequent LVDS transmitter, despite having tons of margin on my Fmax. This is more effort than I've had to apply on previous-generation devices for similar functionality.
Thank you for your suggestion; I appreciate your input. I will definitely provide your feedback to the internal team.
Coming back to the Soft LVDS IP core buggy comment: "In my two identical Max10 receivers, I have to set bitslip to 6 in one of them and 5 in the other on hardware despite simulation of the .vo EDA netlist indicating that bit slip should be set to 5 on both." Could you kindly share the files so I can check? If it is a bug, I will definitely raise the concern and get it fixed.
Actually, my thought is that it may be an issue with the simulation model rather than the IP core, but let's see. You can send the files to my email.
Thank you,
I have connected with an FAE under MNDA and am sharing files with them. At this time I believe some of my issues are related to the lack of LogicLock regions around the IP modules due to use of the Lite Edition: the modules are spread all over the FPGA fabric by the fitter, rather than having some reasonable constraints applied by the megafunction wizard when the IP is generated. With the addition of some partially invalid routing path delay constraints that break the reported Fmax, the LVDS Tx core is more spatially constrained by the fitter, and my code now functions on hardware. I still have no explanation for the different bit slip between the two cores. Since the data path is video data, I have even tested routing half of the input data directly to my Tx port, and a subsequent frame grabber is able to capture the incoming feed reasonably well; that is to say, the input video streams are in fact properly timed.
Sure. If you can, kindly let me know the input from the FAE.
By the way, would it be possible to share only the testbench on bit slip? I think it will save time, and I can check with just the IP instantiation. It will help me dive deeper into your case.
Thank you,