Retesting Ghost of Tsushima with a new Arc A770 shows Intel has a mountain to climb to compete with AMD and Nvidia

Last month I put Ghost of Tsushima through a benchmarking meat grinder, testing each graphics preset on multiple CPUs and GPUs. I also examined upscaling and frame generation to see how well Nixxes Software had done in porting the PS4 hit to the mighty gaming PC. For the most part, everything went smoothly and the game proved very enjoyable on PC. Well, apart from on two of the GPUs used for the analysis: an AMD Radeon RX 7800 XT and an Intel Arc A770.

The former only lasted three or four seconds of gameplay before crashing to the desktop, while the latter worked but produced some strange visual issues, and its overall performance could have been a lot better.

Intel kindly got in touch and offered to help by lending me an Arc A770 Limited Edition, as I had previously been using an Acer Predator version. For some reason, that particular graphics card was a bit temperamental at times, especially during the BIOS phase of booting. I wasn’t really convinced that a different card would help, but I gave it a try anyway. Or tried to, at least.

Intel’s drivers refused to install, despite my erasing every trace of the previous versions. Even reinstalling Windows didn’t help. After much head-scratching, the solution turned out to be quite simple, if somewhat old-fashioned: manually extract the driver package and then install it via Device Manager. That’s not a method you’d expect to need today, but at least it worked.
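
If you ever need to do the same, the Device Manager route can also be scripted. Below is a minimal sketch, assuming the driver package has already been unzipped to a hypothetical folder; it uses Windows’ built-in pnputil tool (run from an elevated prompt) rather than Device Manager’s GUI.

```python
# Minimal sketch of the manual driver install, assuming the package has
# already been extracted (the folder path below is hypothetical).
# pnputil ships with Windows; run this from an elevated prompt.
import subprocess

DRIVER_DIR = r"C:\ExtractedArcDriver"  # hypothetical extraction folder

# Stage and install every .inf in the extracted package, searching
# subdirectories, much like pointing Device Manager at the .inf by hand.
subprocess.run(
    ["pnputil", "/add-driver", rf"{DRIVER_DIR}\*.inf", "/subdirs", "/install"],
    check=True,
)
```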

Anyway, with everything finally installed, I could continue testing. The good news is that the previous rendering error was gone and the new Arc card passed all the upscaling and frame generation tests without any major issues, unlike before.

The not-so-good news is that the results for the new Arc weren’t much different from the old one’s: slightly faster at 1080p, but worse at 1440p and 4K. In any case, everything rendered correctly and I was able to get a good look at Intel’s upscaler in the game.

XeSS, of course, provided a useful performance boost, but pairing it with AMD’s FSR 3 Frame Generation was not as successful as pairing DLSS upscaling with FSR 3 frame generation on the RTX 3060 Ti. Running Ghost of Tsushima at 4K Very High, with XeSS Ultra Performance and frame generation enabled, produced an average of just 64 fps.

That may sound pretty decent, but the Ampere-powered RTX card hit 107 fps on a Ryzen 7 5700X3D machine using DLSS Ultra Performance and FSR 3 frame generation: 67% faster than the Arc.

No other graphics card I tested in the game showed as big a drop in frame rate going from the High to the Very High preset as the Arc A770. So I went back and tested each quality setting individually to pinpoint exactly where the problem lay. The biggest culprit turned out to be the volumetric fog option.

At 1080p, with the graphics options set to the Very High preset, the Arc A770 ran at 40 fps with very high quality fog and 61 fps with high quality fog: a 53% performance increase! The other cards also ran faster with high quality fog than with the very high setting, but their gains were nowhere near as large.
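
For clarity, here’s how those uplift figures fall out of the raw frame rates; a trivial sketch, using only numbers quoted in this piece.

```python
# Percentage uplift from two average frame rates.
def percent_faster(baseline_fps: float, faster_fps: float) -> float:
    return (faster_fps / baseline_fps - 1) * 100

# Arc A770, 1080p Very High: very high fog (40 fps) vs high fog (61 fps)
print(percent_faster(40, 61))   # ~52.5%, i.e. roughly a 53% uplift

# RTX 3060 Ti (107 fps) vs Arc A770 (64 fps) at 4K with upscaling + FG
print(percent_faster(64, 107))  # ~67.2%, i.e. roughly 67% faster
```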

So why does the A770 perform so poorly compared to all the other cards used in the analysis? The first thing I did was capture some GPU traces (using Intel’s GPA software) to compare the rendering workloads between the two fog modes, but nothing indicated that the GPU itself was bouncing off any particular hardware limit.

And it’s not like it’s lagging behind in hardware metrics, as the table below shows:

On paper, the Arc A770 should be just as fast as, if not faster than, the Radeon RX 6750 XT and RTX 3060 Ti in Ghost of Tsushima. But as my testing has shown, it’s so far behind that there must be something about the architecture and/or drivers that this rendering workload just doesn’t like.
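
To put that on-paper claim into numbers: peak FP32 throughput is simply shader ALU count × 2 (an FMA counts as two floating-point operations) × clock speed. A quick sketch, using each card’s published boost clock as an approximation:

```python
# Peak FP32 throughput in TFLOPS: ALUs x 2 ops per FMA x clock in GHz.
# Boost clocks are the published figures, so treat these as approximations.
def peak_fp32_tflops(alus: int, boost_ghz: float) -> float:
    return alus * 2 * boost_ghz / 1000

print(peak_fp32_tflops(4096, 2.100))  # Arc A770:    ~17.2 TFLOPS
print(peak_fp32_tflops(4864, 1.665))  # RTX 3060 Ti: ~16.2 TFLOPS
print(peak_fp32_tflops(2560, 2.600))  # RX 6750 XT:  ~13.3 TFLOPS
```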

Of course, it could be a coding issue in the game itself, but as we noted in our A770 review, the lackluster performance isn’t limited to just one game (although, to be fair, those results were taken with older drivers). Sometimes Intel’s Alchemist GPU performs exactly as expected; other times, it’s a complete mystery as to what’s going on.

To investigate further, I turned to a Vulkan compute performance tool created by Nemez on X, which assesses a GPU’s capabilities by running a series of instruction throughput and cache tests. While the results can’t directly explain why the A770 is struggling so much in Ghost of Tsushima, they do show that Alchemist’s performance is a bit of a mystery in general.

FP32 multiply instructions are very common in graphics routines, and here the Alchemist chip is not only well behind the pace of the RDNA 2 and Ampere GPUs, but also far below its own peak throughput. No GPU reaches full speed even in synthetic tests like this, but the A770’s result is still much lower than it should be.
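
For reference, tools like this derive their throughput figures in a straightforward way: count the floating-point operations the compute shader executes, then divide by how long the dispatch took. A minimal sketch of that bookkeeping (the inputs below are hypothetical placeholders, not Nemez’s actual measurements):

```python
# Turning a timed compute dispatch into a TFLOPS figure.
# One FMA instruction counts as two floating-point operations.
def measured_tflops(invocations: int, fmas_per_invocation: int,
                    seconds: float) -> float:
    return invocations * fmas_per_invocation * 2 / seconds / 1e12

# Hypothetical run: 16M invocations, each looping over 4,096 FMAs,
# completing in 10 ms.
print(measured_tflops(16 * 2**20, 4096, 0.010))  # ~13.7 TFLOPS
```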

In the other throughput tests, however, the A770 does really well. It doesn’t lack internal bandwidth and there’s no sign of unusually high cache latencies, yet it still suffers far more than the competition at high resolutions or under heavy rendering loads in Ghost of Tsushima.

Intel has been fully committed to releasing regular driver updates for its Arc graphics cards, but I suspect drivers can only go so far at this point. After all, support for Ghost of Tsushima was added in the 5518 driver set, and we’re already two releases past that (5522 and 5534).

Ultimately, whatever the problems are, they can almost certainly be traced to the Alchemist architecture. The Battlemage GPU in Lunar Lake chips looks promising, and some of its changes seem like they will help a lot. The only problem is that the competition already has a significant lead. AMD’s $500 Radeon RX 7800 XT is a perfect example of what Battlemage will be up against.

Ghost of Tsushima has been patched a few times since release, and one of the updates improved stability on Radeon GPUs. A full battery of benchmarks showed that Nixxes had definitively resolved that issue, and the RDNA 3-powered GPU had no problems running the game at 1080p and 1440p.

It wasn’t until 4K that the card started to struggle, and even then it wasn’t super slow; a little upscaling made it very playable. And that’s how it’s been in every game I’ve tested with that graphics card so far.

You can rightly point out that the Navi 32 chip in the RX 7800 XT is much more capable than the ACM-G10 in the Arc A770, but that’s largely because it has dual-issue shader ALUs (doubling peak FP32 throughput) plus more cache and VRAM bandwidth. At 1440p, the advantage those bring is nowhere near as great as at 4K.
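
Run the same peak-throughput sum from earlier on Navi 32 and the on-paper gap is plain to see; again a sketch, with the published boost clock as an approximation:

```python
# Navi 32 has 3,840 shader ALUs that can dual-issue FP32, so its
# effective ALU count doubles to 7,680. Boost clock is the published
# figure, an approximation.
def peak_fp32_tflops(alus: int, boost_ghz: float) -> float:
    return alus * 2 * boost_ghz / 1000

rx_7800_xt = peak_fp32_tflops(3840 * 2, 2.43)  # ~37.3 TFLOPS
arc_a770 = peak_fp32_tflops(4096, 2.10)        # ~17.2 TFLOPS
print(rx_7800_xt / arc_a770)                   # ~2.2x on paper
```

That roughly 2.2x on-paper ratio is, notably, not far off the 2.3x (132% faster) gap measured at 1440p High below.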

Nvidia dominates the discrete GPU market, so Intel should be trying to steal some of AMD’s share, however small it is. But if an RX 7800 XT is a whopping 132% faster than an Arc A770 at 1440p High, and even an RX 6750 XT is 68% faster, I do wonder whether Battlemage’s performance jump over Alchemist will be big enough.

One game is certainly not indicative of a GPU’s overall performance, but it does suggest that Intel has a veritable mountain of GPU gains to climb.
