When NVIDIA reached out to ask if we wanted to try the brand-new RTX 4090 GPU, we almost said "no." Most of our readers are photographers, and most photo editing simply doesn't require a top-tier GPU. But we said "yes" because we wanted to answer a different, more relevant question: do photo editors need a GPU at all? And if so, how much do you need to spend to get top-notch photo editing performance?
To that end, we put the shiny new RTX 4090 Founders Edition graphics card that NVIDIA sent over up against three of its little siblings from the previous generation: the RTX 3070, RTX 3080, and RTX 3090 Founders Edition, all new-in-box before this test, and all running inside the exact same PC.
First, we put these cards through a slew of high-powered synthetic benchmarks that test things like 3D rendering and ray tracing in order to highlight the kinds of huge performance gains you can expect if you work in high-end visual effects or 3D design. Then, we ran the same cards through our standard Photoshop, Lightroom Classic, and Capture One Pro benchmarks to show why most photographers are better off saving some money and buying a last-gen GPU.
Oh right, spoiler alert: most photographers are better off saving some money and buying a last-gen GPU.
But that definitely doesn't mean you should skip the GPU entirely, nor does it mean you won't see any performance gain by upgrading your graphics card to the latest and greatest. It's just that, in our testing with Lightroom Classic, Capture One Pro, and Photoshop, you can get 90% of the performance for less than one third the price of an RTX 4090.
Let's dive in.
Our Testing Rig
All of the tests below were performed on my personal editing rig, which I built about six months ago. It consists of:
- An MSI MEG Z590 ACE Motherboard
- An Intel Core i9-11900K CPU, water-cooled using an MSI CoreLiquid 240R V2 AIO
- 64GB of Corsair Vengeance RGB Pro DDR4-3600 CL18 RAM
- A 1TB Corsair MP600 Pro PCIe 4.0 M.2 NVMe SSD
- An EVGA 1000 P3 1000W PSU (80 PLUS Platinum)
Notably, this isn't the kind of bleeding-edge rig you'll see in excellent high-end gaming reviews from the likes of LTT, Gamers Nexus, or JayzTwoCents. We simply don't have access to that kind of gear. But it's probably closer to the kind of PC many of our readers are actually using if they built an editing rig within the last two or three years, and it gives us a fairly powerful CPU that won't immediately bottleneck every single one of our tests.
Speaking of which…
A Note on CPU Bottlenecks
Unless you're a gamer, the term "CPU bottleneck" might be new to you, but it's important that you understand what it means. That's because CPU bottlenecks are at the core of why some people can justify spending $1,600+ on a GPU while others may as well be lighting their money on fire.
A CPU bottleneck is exactly what it sounds like: it's when your CPU is the limiting factor in a given computational task. Whether we're talking about exporting photos, rendering a Premiere Pro project, or using AI to upscale a huge batch of RAW files, there comes a point where adding more GPU horsepower does absolutely nothing for performance, because your current GPU is already spending most of its time sitting around idle, waiting for your CPU to finish its part of the job.
The concept is particularly relevant in gaming, where upgrading your graphics card doesn't actually improve your frames per second (fps) in a given game at lower resolutions, because the limiting factor is your CPU. Crank the resolution up to 4K and suddenly the weaker graphics cards fall behind, but before that point, a more powerful GPU won't help. As you'll see shortly, this same concept is why many, if not most, photo editing tasks don't need the latest and greatest high-end GPU.
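If it helps to see the idea in code, here is a minimal sketch of the concept: it models a batch job as a CPU stage and a GPU stage running in a pipeline, using made-up per-image times (these are illustrative numbers, not measurements from our testing).

```python
# Toy model of a GPU-accelerated batch export: each image needs some
# CPU work (decoding, encoding, writing to disk) and some GPU work.
# The numbers below are made up for illustration, not measured results.

def batch_time(num_images: int, cpu_seconds: float, gpu_seconds: float) -> float:
    """Total time when the CPU and GPU stages overlap (pipelined):
    the slower stage sets the pace for the whole batch."""
    return num_images * max(cpu_seconds, gpu_seconds)

images = 500
cpu_per_image = 1.2  # seconds of CPU-bound work per image (assumed)

for gpu_name, gpu_per_image in [("mid-range GPU", 0.8), ("high-end GPU", 0.4), ("flagship GPU", 0.2)]:
    total = batch_time(images, cpu_per_image, gpu_per_image)
    print(f"{gpu_name:>13}: {total/60:.1f} minutes")

# Every GPU in this toy example finishes its share faster than the CPU can
# feed it, so all three land on the same ~10-minute batch: a CPU bottleneck.
```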
But first, let's see how our four graphics cards compare in high-end, graphics-intensive tasks that aren't limited by the other components in our rig.
3D Rendering Benchmarks
When it comes to rendering ultra-high-resolution video files or calculating exactly how light bounces off every inch of a 3D animated frame using ray tracing, a powerful GPU can make a huge difference. In this respect, NVIDIA has made a massive leap with the RTX 4090.
By moving to a whole new architecture built on TSMC's 4nm process, increasing the size of the card, and beefing up the cooler, NVIDIA was able to raise the base clock by over 35% and pack in over 16,000 CUDA cores, 512 4th-gen Tensor cores, and 128 3rd-gen RT cores, all while drawing the same 450W TDP as the last-gen RTX 3090 Ti.
If that all sounds like gibberish, the upshot is that this card should lay waste to both the RTX 3090 and the RTX 3090 Ti in GPU-bound tasks without requiring a bigger power supply or overheating in the process. That is exactly what we see in all of our 3D rendering benchmarks.
V-Ray (v5.0.20)
In V-Ray, the RTX 4090 doubles our 3090's score in both the CUDA and ray tracing benchmarks, rendering over 4,200 "vpaths" and over 5,500 "vrays" in a one-minute run:
This kind of leap in performance is incredibly rare these days, but NVIDIA has pulled it off. And this isn't some fluke either; it's a pattern that plays out over and over again in every "creator" benchmark we ran.
Blender (v3.3.0)
In Blender, the RTX 4090 more than doubles the RTX 3090's performance in the Monster scene, and nearly doubles its performance in both the Junkshop and the older Classroom scenes:
OctaneBench (v2020.1.5)
Finally, OctaneBench tells the same story one more time. In all four rendered scenes, the RTX 4090 comes close to doubling what the already beefy and power-hungry RTX 3090 can do, while the 3090 posts only modest improvements over its little siblings, the RTX 3080 and RTX 3070.
Again, when it comes to high-end rendering performance in benchmarks that are specifically tuned to rely entirely on the GPU, the RTX 4090 represents a doubling of performance year over year. That's… incredible. It's not often we get to say that this generation of *fill in the blank* is 2x or 100% faster than last year's model without adding a bunch of asterisks. Unfortunately, this is where I have to switch gears and tell you why, as a photo editor, you won't see anywhere near this level of performance uplift in your favorite photo editing apps.
Photo Editing Benchmarks
As mentioned earlier, most photo editing tasks are CPU bottlenecked. And yes, that includes tasks that are "GPU accelerated." It's not all bad news: Photoshop leans on the GPU to accelerate or outright perform several important tasks like Smart Sharpen and most of the Blur Gallery effects, Capture One Pro 22 uses the GPU to significantly accelerate exports, and, as of a few months ago, Lightroom added GPU acceleration to its exports as well.
There is also a growing number of AI-powered photo editing tools like ON1 Resize that use the GPU to speed up processing, and Adobe Sensei-powered features like Sky Replacement and Super Resolution rely heavily on GPU acceleration as well.
But how much do these things really speed up your workflow? And where is the price-to-performance sweet spot if you're looking to buy your first GPU? Fortunately for your wallet, the sweet spot for the most time-consuming photo editing tasks is at the low end.
Adobe Lightroom Classic
In Lightroom Classic, import performance is 100% dependent on your CPU and RAM—the GPU does nothing—so we're skipping that benchmark. But when it comes to exports, the latest versions of Lightroom use the GPU to accelerate the process significantly. Using 110 Sony a7R IV and 150 PhaseOne XF RAW files, we applied a custom preset and then exported each batch of files as 100% quality JPEGs and 16-bit TIFFs in turn.
Long story short: any GPU is a huge improvement over using the CPU for export, but there is definitely a point of diminishing returns as the GPU gets more and more powerful:
This is one of the few benchmarks we ran where spending more money on an RTX 3080 could be worthwhile if you're exporting thousands of JPEGs every week. Lightroom seems to lean heavily on the GPU's fast GDDR6 memory when GPU-accelerated export is enabled, so a more expensive GPU with more VRAM makes a significant difference. That's probably why we see no difference between the RTX 3090 and the RTX 4090: both have 24GB of VRAM.
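If you want to run the same comparison on your own hardware, a small script like the sketch below turns stopwatch times into images-per-minute and speedup figures, which makes any point of diminishing returns easy to spot. The times in the dictionary are placeholders for your own measurements, not results from our testing.

```python
# Convert measured batch-export times into throughput and speedup figures.
# Replace the placeholder seconds with your own stopwatch measurements;
# these values are illustrative only, not the results from our testing.

measured_seconds = {
    "CPU only": 900.0,
    "GPU A":    300.0,
    "GPU B":    260.0,
    "GPU C":    250.0,
}

num_images = 110  # size of the exported batch

baseline = measured_seconds["CPU only"]
for config, seconds in measured_seconds.items():
    throughput = num_images / (seconds / 60)  # images per minute
    speedup = baseline / seconds              # relative to CPU-only export
    print(f"{config:>8}: {throughput:6.1f} images/min  ({speedup:.2f}x vs CPU)")
```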
Capture One Pro 22
The story in Capture One Pro 22 is much the same. Imports don't use the GPU at all, so we've skipped that benchmark again, but exports are significantly GPU accelerated and, in this case, they hardly rely on the GPU's VRAM at all.
The first thing to notice about our results is that Capture One Pro and Lightroom Classic are much closer in export performance now that Lightroom also has GPU-accelerated export. The next thing to notice is that the difference between CPU-only and any GPU is even larger than in Lightroom. And the last thing to notice is that upgrading to a high-end GPU has basically no impact on JPEG exports, and only a moderate impact on TIFF exports.
The JPEG results in particular represent a classic CPU bottleneck. The RTX 3070 is already sitting around waiting for the CPU to catch up, so there is no difference between any of the GPUs in that export. The TIFF results are a bit better, getting slightly faster with each upgrade, but it's nothing like the massive performance leap we saw in the 3D rendering benchmarks.
Adobe Photoshop
Unsurprisingly, our Photoshop benchmarks show more of the same. We ran our usual benchmark: Puget Systems' PugetBench v0.8, which we still use because it includes a PhotoMerge test that was removed in later versions. In this case, we don't actually care about PhotoMerge; all we care about is the "GPU" category score. Every other score was within the margin of error from one GPU to the next, and only the GPU category score reliably increased as we upgraded from the integrated GPU to the RTX 3070, the RTX 3080, the RTX 3090, and finally the RTX 4090.
Just like in our other tests, there's a huge jump from the iGPU to the discrete GPUs, but it's frankly shocking how little performance an extra $1,000+ will buy you when it comes to GPU-accelerated tasks.
These findings extend to our qualitative experience. Using Photoshop features like Sky Replacement or applying Super Resolution via Camera Raw to individual RAW files is definitely faster on the beefier GPUs, but you won't notice the speed increase when you're editing one photo at a time. The RTX 3070 might take 2 seconds to AI-upscale your 100MP photo, while the RTX 4090 takes just 1 second. That's still a massive speed improvement in percentage terms, but it's on a time scale that's completely irrelevant to your workflow.
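To put that in perspective, here is a quick back-of-the-envelope calculation using those same hypothetical 2-second and 1-second per-photo times; the figures are illustrative, not measured results.

```python
# Back-of-the-envelope: when does a 2x GPU speedup actually become noticeable?
# Per-photo times are the hypothetical figures from the paragraph above.
slow_gpu_seconds = 2.0   # e.g. a mid-range card upscaling one 100MP photo (hypothetical)
fast_gpu_seconds = 1.0   # e.g. a flagship card doing the same job (hypothetical)

for batch_size in (1, 10, 100, 1000):
    saved_minutes = batch_size * (slow_gpu_seconds - fast_gpu_seconds) / 60
    print(f"{batch_size:>5} photos: {saved_minutes:.1f} minutes saved")

# Editing one photo at a time, the difference is a blink; the 2x speedup only
# adds up to workflow-relevant time once you batch-process hundreds of images.
```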
Based on our experience, any modern GPU will deliver a buttery-smooth experience and great performance in Photoshop. There's no need to shell out for the top shelf.
AI Resizing
In our final test, we ran two different AI-powered resizing algorithms that both use GPU acceleration: ON1 Resize and Adobe's Super Resolution. Both algorithms rely entirely on the GPU to upscale the photo and the CPU to export the result.
Unfortunately, we couldn't test the two on a one-to-one basis because ON1's algorithm takes far longer per photo than Adobe's, despite producing slightly worse results (to my eye). So we used ON1 Resize to upscale five full-sized Sony a7R IV RAW files, and we used Adobe Super Resolution (via Lightroom Classic) to upscale the full batch of 110 Sony a7R IV RAW files we use for testing.
Adobe Super Resolution doubles the resolution of your RAW file and exports a DNG by default, so we chose matching settings in ON1 Resize and ran both tests independently.
I should note that we wanted to run these tests using the iGPU as well, but the processing is just way too slow. Upscaling a single photo using the Intel UHD integrated graphics on our Core i9-11900K took 34 minutes in ON1 Resize and two-and-a-half minutes in Lightroom Classic. That translates into almost 3 hours for the full ON1 test and over 4 hours in Lightroom Classic. Still, the GPU-accelerated results are illuminating:
As you can see, there's a steady climb in performance in the ON1 Resize test, and a much more meager climb for Adobe Super Resolution. The former still has room to improve, but the latter is pretty much CPU bottlenecked from the start: the actual AI upscaling only takes a few seconds per photo, and the rest of the time is spent waiting for the CPU to export the DNG, add it to the library, and create a 1:1 preview.
You'll definitely notice a performance improvement if you use Adobe Super Resolution to upscale large batches of photos. But based on these results, there's very little reason to upgrade beyond the RTX 3080.
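For anyone checking the math on those iGPU estimates a few paragraphs up, the "almost 3 hours" and "over 4 hours" figures are simple extrapolations from the single-photo times:

```python
# Extrapolating the single-photo iGPU times quoted above to the full batches.
on1_minutes_per_photo = 34    # ON1 Resize, one photo on Intel UHD graphics
lr_minutes_per_photo = 2.5    # Lightroom Super Resolution, one photo on Intel UHD graphics

on1_total_hours = on1_minutes_per_photo * 5 / 60    # 5-photo ON1 batch
lr_total_hours = lr_minutes_per_photo * 110 / 60    # 110-photo Lightroom batch

print(f"ON1 Resize, 5 photos:         ~{on1_total_hours:.1f} hours")   # ~2.8 hours
print(f"Super Resolution, 110 photos: ~{lr_total_hours:.1f} hours")    # ~4.6 hours
```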
Conclusion
This article isn't meant to be a full, comprehensive review of the RTX 4090, nor is it meant to be a complete comparison against all of the major competitors on the market. We would have loved to have an AMD GPU in the lineup, or some 20-series NVIDIA GPUs for that matter, but that wasn't really the point. The point of this article was to confirm, once and for all, that photographers don't need a high-powered GPU to get the best photo editing performance.
As more and more photo editing apps tout the fact that they're "GPU accelerated," it's tempting to think that more powerful is always better, but when it comes to photo editing, that's not the case in the vast majority of applications. The latest generation of GPUs is aimed squarely at gamers, animators, and 3D designers who are either rendering hundreds of frames per second or one insanely complex three-dimensional scene. Day-to-day photo editing tasks are child's play for a modern GPU.
Where a GPU is useful is for AI-accelerated batch editing or massive GPU-accelerated exports, and for those tasks, pretty much any modern GPU will do, whether you spend $500 or $5,000.