I have a working Intel Quartus Standard design (23.1). I applied virtual I/O pin assignments at synthesis to compare resource utilization, but after introducing the virtual pins the design shows negative setup slack.
Why do virtual I/O pins introduce negative setup slack? Am I missing something?
These are the commands I used; I ran the Tcl script with the quartus_sh executable. Before adding the virtual-pin commands I had been running the quartus_sta executable. Does that have any impact?
# Initial synthesis so that pin names are available
execute_module -tool map

# Pins that must remain physical (e.g. the clock)
set excludes [list $myClockPinName]
post_message "Excluding $excludes from VIRTUAL_PIN assignments"

set name_ids [get_names -filter * -node_type pin]
foreach_in_collection name_id $name_ids {
    set pin_name [get_name_info -info full_path $name_id]
    if { [lsearch -exact -nocase $excludes $pin_name] >= 0 } {
        post_message "Skipping VIRTUAL_PIN assignment to $pin_name"
    } else {
        post_message "Making VIRTUAL_PIN assignment to $pin_name"
        set_instance_assignment -name VIRTUAL_PIN ON -to $pin_name
    }
}
# Re-run mapping to apply the virtual pin assignments in the current project
execute_module -tool map
# Export assignments
export_assignments
Hello!
Virtual pins remove I/O buffering and IOE register packing, so registers that previously lived in the I/O elements get pulled into core logic and routing delays change, which can produce negative setup slack. Also, if you run STA after only the mapping stage, timing looks worse than it really is because fitter placement and routing optimizations have not been applied yet. To address this: apply virtual pins selectively (keeping clocks and other real pins physical, as your exclude list already does), run a full fit before STA, and add proper constraints for the virtual endpoints (e.g. set_input_delay/set_output_delay) so the analysis is meaningful.
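The "full fit before STA" flow above can be sketched in the same Tcl style as your script. This is only a sketch, assuming the project is already created; my_project is a placeholder for your actual revision name:

    # Sketch of the recommended flow, run with: quartus_sh -t run_flow.tcl
    load_package flow
    project_open my_project

    # ...apply the VIRTUAL_PIN assignments here, as in your script...

    execute_module -tool map   ;# re-synthesize with virtual pins applied
    execute_module -tool fit   ;# full place-and-route so core timing is real
    execute_module -tool sta   ;# run the Timing Analyzer on the fitted netlist

    project_close

Running quartus_sta on a design that has only been through quartus_map (or on stale fit results from before the virtual-pin change) will not reflect the fitter's optimizations, which matches the symptom you are seeing.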