I recently purchased an Arc B580, and although I really like how it performs in almost every game I've tried, I was quite disappointed by its performance in CS2. The card shows a significant performance drop when using DX11, both compared to other cards and compared to running the game on Vulkan, which you can opt into with the launch command "-vulkan". In my tests, DX11 ran about 20% slower than Vulkan, and almost 30% slower with Low Latency mode enabled.
This is not the case with any other card from Nvidia or AMD; on those cards, using Vulkan actually makes performance slightly worse. My fear is that CS2's DX11 path is such a nightmare for the Arc B580 that Vulkan only appears to run the game better because the DX11 performance is so bad. This may be related to a possible CPU overhead issue in CS2, since I've seen many people report that switching to Vulkan helped in other games that showed the overhead problem.
My thoughts seem to make sense when compared to my friend's PC, which is identical to mine except for the GPU, as we bought the parts for our PCs together. My old GPU results also seem to indicate the same.
Compared to his 8GB RTX 3060, the B580 was around 22% slower in the scenario I tested, and it could only match the RTX 3060's performance using Vulkan. The problem is that CS2's Vulkan implementation is not ideal; it's very unstable, with constant crashes and users mentioning latency issues. I agree with those claims: even though Vulkan should be smoother, the game just feels laggy on Vulkan compared to DX11, which feels better, especially with Low Latency mode. So using Vulkan is just not ideal for competitive players.
Apart from comparing the results with my friend's 3060, I can also compare it with my old card, the GTX 1650. (I did use DDU before installing the Arc B580, just to make that clear.)
My 1650 gave me 322 average FPS on the same settings my friend and I tested (all low, no FSR, 1280x960). The B580 almost loses to the 1650 when using Low Latency; in the test I used to measure the 1650's performance, Nvidia Reflex was enabled. These are awful results considering the GTX 1650 is a six-year-old low-end GPU; the B580 should be notably better than even the RTX 3060. Most of the tests I saw the support team doing were done with the game on ultra/high settings at 1080p, which are useless to me and to most competitive players, as no one really plays with those settings.
I'd like to ask for this seemingly unexplained performance drop the Arc B580 has with DX11 on CS2 to be investigated further, if possible. I'll attach my full specs, data, explain how I gathered the data presented, and how to reproduce the results.
PC Specs:
- Processor: AMD Ryzen 5 5600X 6-Core Processor.
- GPU: Maxsun Intel Arc B580 Milestone 12G (driver version 32.0.101.8136).
- Motherboard: TUF GAMING B550M-PLUS.
- Memory: Corsair Vengeance RGB RS 16GB DDR4 3200MHz (2x8GB).
- Storage: Game and OS are installed on an NVMe M.2 drive.
- OS: Windows 10.
Note: ReBAR is turned on and PCIe is set to 4.0 in the BIOS settings.
To produce the benchmark averages, I used the Steam Workshop map CS2 FPS BENCHMARK DUST2, recording three runs with CapFrameX and then averaging the results. At the end of each run, the map displays the average FPS in the developer console window. I concluded those results are accurate because they are always very close to the averages CapFrameX registers. Knowing this, I asked my friend to do three runs using specific video settings, averaged his results, and compared them with mine. His RTX 3060 averaged 479.5 FPS versus the B580's 375.9 on DX11, making the Arc card 21.61% slower than the 3060.
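For anyone who wants to double-check the math, the averaging and the 21.61% figure can be reproduced with a quick sketch (the FPS values are the measured averages from above; the helper function is just for illustration):

```python
# Reproduce the DX11 comparison from the measured 3-run averages above.

def percent_slower(slower_fps: float, faster_fps: float) -> float:
    """How much slower `slower_fps` is relative to `faster_fps`, in percent."""
    return (faster_fps - slower_fps) / faster_fps * 100

rtx_3060_avg = 479.5  # friend's RTX 3060, DX11
arc_b580_avg = 375.9  # my Arc B580, DX11

print(f"B580 is {percent_slower(arc_b580_avg, rtx_3060_avg):.2f}% slower")
# -> B580 is 21.61% slower
```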
How to reproduce the results:
Settings used: Fullscreen, 1280x960, 4:3 aspect ratio (it may sound crazy, but most CS players play the game using this video resolution).
Low settings preset, set both FSR and boost player contrast to disabled.
Open the Dust 2 benchmark map using the workshop maps tab, and start capturing the performance as soon as the countdown is over and the game itself shows up. End the capture as soon as the game starts to fade out to display the results.
That should be all. Thank you for your time and attention. I hope I've been informative so far and I look forward to any potential updates in this matter. If any extra info is needed, please, let me know.
Yeah, I was thinking the same. I'll repeat on our available AMD processors and see how it behaves.
@De_Marchi With a 5600G + B570, here is the data obtained from the console after the 3rd run. I still have to test with a B580.
|     | DX    | VK    | diff |
|-----|-------|-------|------|
| min | 277   | 311   |      |
| max | 300   | 314   |      |
| avg | 288.5 | 312.5 | 8%   |
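For clarity, the 8% figure can be recovered from the min/max values in the table (a quick sanity check, assuming the averages are simple means of min and max):

```python
# Sanity-check the ~8% Vulkan gain from the table's min/max FPS.
dx_avg = (277 + 300) / 2  # 288.5
vk_avg = (311 + 314) / 2  # 312.5
gain = (vk_avg - dx_avg) / dx_avg * 100
print(f"Vulkan is {gain:.1f}% faster than DX11 here")  # ~8.3%
```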
Hey there @Felipe_Intel
I think I may have found the culprit causing the differences in our tests: the Windows power plan. Which one did you use? I always set mine to "Ultimate Performance"; changing it to "Balanced" caused my Vulkan averages to decrease significantly, while the DX11 performance remained the same.
| Power plan           | Vulkan avg | Vulkan P1 | DX11 avg | DX11 P1 |
|----------------------|------------|-----------|----------|---------|
| Ultimate Performance | 513.3      | 167.6     | 427.2    | 169.9   |
| Balanced             | 447.6      | 131.2     | 420.5    | 155.7   |
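To put numbers on how much the power plan matters per API (a quick check on the figures above):

```python
# Relative FPS loss going from "Ultimate Performance" to "Balanced", per API.
vk_ultimate, vk_balanced = 513.3, 447.6
dx_ultimate, dx_balanced = 427.2, 420.5

vk_drop = (vk_ultimate - vk_balanced) / vk_ultimate * 100  # ~12.8%
dx_drop = (dx_ultimate - dx_balanced) / dx_ultimate * 100  # ~1.6%
print(f"Vulkan loses {vk_drop:.1f}%, DX11 loses {dx_drop:.1f}%")
```

So the power plan hits the Vulkan path roughly eight times harder than DX11 in these runs.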
I believe all my systems are set to maximum savings so they can reach low idle power. Thanks for the feedback; I'll repeat the test with the B570 to confirm this, and recheck with the 13400, since that one is in maximum savings too. So maybe it is not the processor itself (AMD vs. Intel) but the combination with the power setting. I'll let you know.
-- EDIT --
I had a mix-up. I was thinking of the PCIe Link State Power Management setting (inside the "Edit Power Plan" option), not the Windows power plan presets: Power Saver, Balanced, and High Performance (as you mentioned above).
My 14900 was on "High Perf" in both tests (all cores on, and only 4 P-cores on), so no difference there.
The AMD+B570 system was on Balanced. I'll move it to High Perf and Power Saver and see what happens.
After changing the power settings on the B570 system, I got the following:

| Power plan  | Runs | Min FPS | Max FPS |
|-------------|------|---------|---------|
| Power Saver | 6    | 307     | 312.8   |
| High Perf   | 6    | 303     | 324     |
That 324 FPS was a one-off; the rest of the results were around the lower end (303, 304).
Conclusion: with this system, there was no significant increase on Vulkan. I didn't test DX11, following your results (no change expected with DX11).
B580 test pending.
Did you try "Ultimate Performance" or "High Performance"? I'm not sure how different they are, but "High Performance" gets exactly the same FPS as "Balanced" for me.
@De_Marchi I don't have "Ultimate Performance" or "Bitsum Highest Performance" available by default on any of my systems. I understand Ultimate can be enabled by modifying the registry, and the Bitsum plan is part of third-party software.
I ran the tests on a freshly installed system with the minimum software required for this: the OS, Steam and the game, and the driver.
@Felipe_Intel That's weird. I have it by default; I even tried installing Windows in a VM using a standard Windows 11 ISO, and it's there by default.
But alright, since we can't reach similar results on the Vulkan vs. DirectX question, let's move on to something else. I have a friend I mentioned before who owns virtually the exact same system I have—down to the same motherboard, RAM, etc. We bought our computers together, so the parts are identical. The only difference is the GPU; he opted for an 8GB RTX 3060, while I kept my old GTX 1650 while waiting for a better deal.
So essentially, it's the same CPU, RAM speeds, and motherboard, just different GPUs. I asked him to run the same benchmark using the 'High Performance' power plan on low settings at 1280x960, and he was kind enough to spend 2-3 hours helping me out.
Here are the results:
We tried Reflex set to 'On,' 'On + Boost,' and 'Off,' and there wasn't much difference. He scored around 20-40 FPS more depending on the run when using Nvidia Reflex set to 'Off.' I tried doing the same with Intel Low Latency Mode and, as you can see, it scores around the same as Reflex On vs. Off. (High Performance scored 421 avg in a quick test without Low Latency Mode).
Since he only took screenshots using Reflex 'On + Boost' and I didn't want to bother him further, I went with using Low Latency set to 'On + Boost.' As you can see, the performance is quite poor.
I also took some recordings with OBS using Intel Quick Sync; I found it doesn't affect performance much (maybe 10% less). I used them to compare with this video I found with the same CPU but an RTX 4060 instead: https://www.youtube.com/watch?v=XXPx02vXzz0&t=145s
In that video, everything was set to low except MSAA, which was set to '4X.' As far as I know, that is quite taxing on performance, so please keep that in mind. I recorded everything on low without any AA settings, using the 'High Performance' power plan, without using the Low Latency Mode (in case that's a big deal).
The videos were made on a competitive mode community server full of bots (not hosted locally), and I did everything I could to simulate normal gameplay, so it is comparable to the competitive matchmaking game the video features.
Please excuse the strange colors and quality in both videos; I am still learning how to use OBS and was primarily trying to make sure it didn't affect performance too much. Also, I couldn't get my RivaTuner overlay to work with Vulkan for some reason, so I had to use Steam's performance overlay instead, both show similar stats.
DX11:
https://www.youtube.com/watch?v=zxAeHJtJe18
Vulkan:
https://www.youtube.com/watch?v=KGYLPRBT54s
As you can see, performance is clearly worse than in the comparison video, though it gets closer if I use Vulkan. I don't want to take up too much of your time, but could you perhaps try comparing the performance with other GPUs? A 4060 or 3060 would be perfect.
Honestly, I mostly play CS2 and I'm willing to do anything I can to help fix this issue, as I like how the B580 performs in other games. If you have any requests for specific tests I can run to help figure this out, please let me know, since I don't know what I could possibly do anymore, as I already tried a clean Windows install and even updating my Bios to no avail.
It is disappointing to wait this long only to get performance that is considerably worse than my friend's 3060, especially since the B580 was advertised as having performance similar to the RTX 4060 (he's actually been making fun of me because of this).
Thanks,
Gian.
Hello @De_Marchi
I was able to see the big difference (~25%) on a system with a Ryzen 7 5800X + B580.
| API | Run 1 | Run 2 | Run 3 |
|-----|-------|-------|-------|
| DX  | 385   | 373   | 342   |
| VK  | 484   | 487   | 482   |
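The ~25% figure checks out against a simple mean of the three runs (a quick sketch, not necessarily the exact methodology used above):

```python
# Mean FPS per API over the three runs, then the relative gap.
dx_runs = [385, 373, 342]
vk_runs = [484, 487, 482]

dx_avg = sum(dx_runs) / len(dx_runs)  # ~366.7
vk_avg = sum(vk_runs) / len(vk_runs)  # ~484.3
print(f"DX11 is {(vk_avg - dx_avg) / vk_avg * 100:.0f}% slower than Vulkan")
# -> DX11 is 24% slower than Vulkan
```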
I will be capturing some data from this system; on a second system I can swap the processor between a 5700G and a 3600, and on a third I can swap between a 7700X and an 8600G. I will also be testing both the B570 and the B580.
We want to check whether it happens across different generations of AMD processors or only in one in particular (so far, per your findings and mine, the 5000 series non-X3D).
I'll be submitting a new internal ticket with these findings and will post news here as soon as I have any.
Hello, Felipe
Please also monitor your video memory usage (you can do this through the built-in Steam Advanced Overlay) and play online on these graphics cards for more than 20 minutes.
After about 15 minutes, I start experiencing terrible freezes and FPS drops.
Thank you.
Here is my topic:
Did you try a fresh Windows install? Although I get noticeably low performance on my B580, I don't see much of an increase in VRAM usage in-game, and I play 3-4 hours daily. I also don't see any FPS drops as bad as you reported; VRAM consumption rarely goes past 7GB for me.
Hi!
Have you tried the same settings as the ones in the screenshots below? I understand these settings aren't typical for you, but it would be interesting to check.
As I wrote, it's more noticeable and faster on the new maps: Alpine, Warden, and Stronghold.
It's also present on Dust 2 and other older maps, but it seems to take longer to become noticeable.
I'll tell you more. We tested this on my friend's system:
GPU: Intel B580 (he has his own Intel B580 card)
CPU: AMD 5800X3D
MB: Gigabyte B550I AORUS PRO AX (latest BIOS, F21), Above 4G Decoding and ReBAR enabled
RAM: 2*16GB DDR4 3600MHz
SSD: Samsung PM9A1 1TB
Power Supply: Cooler Master 750w 80Plus Gold
Windows 11 Pro x64 25H2
The result is the same as mine: the memory fills up and the game lags.
I did a clean install of Windows 11, but it didn't change anything.
Everything was fine with driver version 8331: not as high as the competition, but still OK, around 140-170 FPS at these settings.
After 8331, constant stuttering and frame rate drops are noticeable.
Hello everyone!
I used the same game settings.
I wanted to let you know that I'm experiencing the same thing as this forum member.
My video memory is running low and the game is starting to lag horribly.
I hope you can figure out how to solve this problem.
Best regards,
Zalim
Hey, I did try higher settings. The game did not crash, but at 1080p it reaches 10-11GB of VRAM usage after a little while. I recommend reporting this issue on their GitHub: https://github.com/IGCIT/Intel-GPU-Community-Issue-Tracker-IGCIT
They seem to answer issues much faster there than they do here.
Hey @Felipe_Intel
Glad you were able to get similar results; I'll be eagerly waiting for news.
Thank you so much for the attention you gave to this matter.
@Felipe_Intel
Is there any news regarding this issue? Honestly, I feel it got worse. I've seen my FPS drop to double digits during gunfights, and when I tried playing at higher resolutions, like the two other people in this thread mentioned, I've seen VRAM usage spike to 10-11GB.
For the time being I went back to my GTX 1650. It's sad, but it does give me higher FPS than the B580 (they get similar averages, but the 1650 drops much less).
CS2 is consistently the most played game on Steam; an issue like this should not exist at this point.
This is especially noticeable on the newer maps: Alpine, Warden, and Stronghold.
Hello
With the new 8626 driver, the game is a little smoother, but performance remains the same. Occasionally, there are drops of up to 40 FPS.
I've been having this problem in CS2 where the game runs fine (locked at 120 FPS, low settings), but after 3-4 minutes the VRAM spikes to around 8GB and the framerate fluctuates rapidly, dropping from 120 to 40 FPS whenever there are multiple characters on screen or when looking at certain parts of the map. Mind you, this happens every time, about 4 minutes after loading into a map, so I think it's a memory leak. Running on Vulkan doesn't improve it. My specs: Ryzen 5 3600, Arc B570, latest driver.
As far as I understand, players shouldn't expect any optimizations?
They just need to avoid buying Intel.
Seems like it. I spent two months doing tests (stuff they should be doing themselves) and responding to obviously AI-generated messages, just to get pretty much ignored now. I have no idea when, or if, this issue will be fixed, or even whether it has been acknowledged. Although Felipe said they submitted a ticket about this, it has not shown up among the known issues in the driver release notes yet, at least not that I've seen.
Now I'm having issues with my 320Hz monitor: VRR doesn't work properly with the B580. Instead of matching the refresh rate to the FPS, it just halves the refresh rate to 160Hz; if I set it to 240Hz, it goes to 120, and so on. The result is a very choppy experience where the refresh rate is suddenly cut in half and then quickly jumps back to 320. This doesn't happen with my old Nvidia GPU; G-Sync works perfectly (my monitor is VESA Adaptive-Sync). I've seen other people report similar VRR issues on Intel GPUs. This is yet another huge headache I'll have to deal with.
I bought this GPU to play now, not to beta test for a huge corporation and then maybe get to play properly after they fixed those issues. IF they fix them.
Just headaches after headaches, really. I fell for major tech YouTubers portraying buying an 8GB GPU as stupidity, but hey, at least I'd be able to play an esports title with better performance than an RTX 2060, right? Because, believe it or not, the B580 actually loses to the RTX 2060 (RTX 2060 + Ryzen 5 5600 vs. Arc B580 + Ryzen 5 5600X in CS2), a mid-range GPU I could've bought 7 years ago.
Again, CS2 is consistently the most played game on Steam, and if this is the treatment they are giving it, imagine how it will be with other non-AAA titles. Zero support expected.
I'll sell this piece of junk and buy a 5060 later. Never in my life am I touching an Intel GPU again. I can't believe how naive I was to even take the time to report this. I should've returned this thing as soon as I noticed how badly it performed.
Advertising it as having performance similar to the RTX 4060 and then losing to a 3060 by almost 30% is crazy.