NVIDIA’s Turing-based RTX 20-series graphics cards are set to begin shipping on the 20th of September. Their most compelling selling point is the leap in ray-tracing performance, enabled by hardware acceleration via the RT cores added to NVIDIA’s core design. NVIDIA has been pretty bullish about how this development reinvents graphics as we know it, and is quick to point out the benefits of this approach over other, shader-based approximations of real, physics-based lighting. In a Q&A at the Citi 2018 Global Technology Conference, NVIDIA’s Colette Kress expounded on the new architecture’s strengths – but also touched upon a possible segmentation of graphics cards by raytracing capabilities.
During that Q&A, Kress put Turing’s performance at a cool 2x improvement over the 10-series graphics cards before any raytracing uplift is considered – and with raytracing brought into the equation, she said performance has increased by up to 6x compared to NVIDIA’s last generation. There’s some interesting wording when it comes to NVIDIA’s 20-series lineup, though; as Kress puts it, “We’ll start with the ray-tracing cards. We have the 2080 Ti, the 2080 and the 2070 overall coming to market,” which, in context, seems to point towards a lack of raytracing hardware in lower-tier graphics cards (apparently, those based on the potential TU106 silicon and lower-level variants).
Additionally, if this graphics card segregation by RTX support (or lack thereof) were to happen, what would become of NVIDIA’s lineup? GTX graphics cards up to a GTX 2060 (and maybe 2060 Ti), and RTX from there upwards? Diluting NVIDIA’s branding across GTX and RTX prefixes doesn’t seem like a sensible choice, but of course, if it came to that, it would still be much better than slapping the RTX prefix across the board on cards that couldn’t deliver on it.
It could also be a simple case of it not being feasible to include RT hardware on smaller, lower-performance GPUs. As performance leaks and previews have been showing us, even NVIDIA’s top-of-the-line RTX 2080 Ti can only deliver 40-60 FPS at 1080p in games such as the upcoming Shadow of the Tomb Raider and Battlefield V (DICE has even said it had to tone down raytracing levels to achieve playable performance). Performance improvements before release could bring FPS up to a point, but all signs point towards a needed decrease in rendering resolution for NVIDIA’s new 20-series to cope with the added raytracing compute. And if performance looks like this on NVIDIA’s biggest (revealed) Turing die, with its full complement of RT cores, we can only extrapolate what raytracing performance would look like on cut-down dies with a lower number of RT execution units. Perhaps it really wouldn’t make much sense to add the extra cost and die area of this dedicated hardware if raytracing could only run at playable levels at 720p.