In the closing months of 2018, NVIDIA finally released the long-awaited successor to the Pascal-based GeForce GTX 10 series: the GeForce RTX 20 series of video cards. Built on their new Turing architecture, these GPUs were the biggest update to NVIDIA's GPU architecture in at least half a decade, leaving almost no part of NVIDIA's architecture untouched.

So far we’ve looked at the GeForce RTX 2080 Ti, RTX 2080, and RTX 2070 – and along with the highlights of Turing, we’ve seen that the GeForce RTX 20 series is designed on both a hardware and software level to enable realtime raytracing and other new specialized features for games. While the xx70 part has traditionally been the value-oriented enthusiast offering, NVIDIA's higher price tags this time around meant that even the RTX 2070 came in at $500 and was not especially value-oriented. Instead, it would seem that the role of the enthusiast value offering is going to fall to the next member in line of the GeForce RTX 20 family. And that part is coming next week.

Launching next Tuesday, January 15th is the 4th member of the GeForce RTX family: the GeForce RTX 2060 (6GB). Based on a cut-down version of the same TU106 GPU that's in the RTX 2070, this new part shaves off some of RTX 2070's performance, but also a good deal of its price tag in the process. And for this launch, like the other RTX cards last year, NVIDIA is taking part by releasing their own GeForce RTX 2060 Founders Edition card, which we are taking a look at today.

NVIDIA GeForce Specification Comparison

|  | RTX 2060 Founders Edition | GTX 1060 6GB (GDDR5) | GTX 1070 | RTX 2070 |
|---|---|---|---|---|
| CUDA Cores | 1920 | 1280 | 1920 | 2304 |
| ROPs | 48 | 48 | 64 | 64 |
| Core Clock | 1365MHz | 1506MHz | 1506MHz | 1410MHz |
| Boost Clock | 1680MHz | 1709MHz | 1683MHz | 1620MHz (FE: 1710MHz) |
| Memory Clock | 14Gbps GDDR6 | 8Gbps GDDR5 | 8Gbps GDDR5 | 14Gbps GDDR6 |
| Memory Bus Width | 192-bit | 192-bit | 256-bit | 256-bit |
| Single Precision Perf. | 6.5 TFLOPS | 4.4 TFLOPS | 6.5 TFLOPS | 7.5 TFLOPS |
| "RTX-OPS" | 37T | N/A | N/A | 45T |
| SLI Support | No | No | Yes | No |
| TDP | 160W | 120W | 150W | 175W (FE: 185W) |
| GPU | TU106 | GP106 | GP104 | TU106 |
| Transistor Count | 10.8B | 4.4B | 7.2B | 10.8B |
| Architecture | Turing | Pascal | Pascal | Turing |
| Manufacturing Process | TSMC 12nm "FFN" | TSMC 16nm | TSMC 16nm | TSMC 12nm "FFN" |
| Launch Date | 1/15/2019 | 7/19/2016 | 6/10/2016 | 10/17/2018 |
| Launch Price | $349 | MSRP: $249, FE: $299 | MSRP: $379, FE: $449 | MSRP: $499, FE: $599 |

Like its older siblings, the GeForce RTX 2060 (6GB) comes in at a higher price point than previous generations: at $349, it is a far cry from the GeForce GTX 1060 6GB’s $249 MSRP and $299 Founders Edition split, let alone the GeForce GTX 960’s $199. At the same time, it still features Turing’s RT cores and tensor cores, offering a new entry point for those interested in utilizing GeForce RTX platform features such as realtime raytracing.

Diving into the specs and numbers, the GeForce RTX 2060 sports 1920 CUDA cores, meaning we’re looking at a 30 SM configuration, versus RTX 2070’s 36 SMs. As the core architecture of Turing is designed to scale with the number of SMs, this means that all of the core compute features are being scaled down similarly, so the 17% drop in SMs means a 17% drop in the RT Core count, a 17% drop in the tensor core count, a 17% drop in the texture unit count, a 17% drop in L0/L1 caches, etc.
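The scaling is easy to sanity-check with a quick sketch. The per-SM unit counts used here – 64 CUDA cores, 1 RT core, 8 tensor cores, and 4 texture units per Turing SM – are our assumption, inferred from the full TU106 configuration rather than stated in this article:

```python
# Turing resources travel with the SM, so a 30 SM RTX 2060 carries
# ~83% of a 36 SM RTX 2070's SM-linked resources (a ~17% cut).
rtx2070_sms = 36
rtx2060_sms = 30

scaling = rtx2060_sms / rtx2070_sms  # ~0.833

# Assumed units per Turing SM (not official figures from this article)
per_sm = {"CUDA cores": 64, "RT cores": 1, "tensor cores": 8, "texture units": 4}

for unit, count in per_sm.items():
    print(f"{unit}: {rtx2070_sms * count} -> {rtx2060_sms * count}")
```

Running the numbers this way recovers the 2304 vs. 1920 CUDA core counts from the table above.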

Unsurprisingly, clockspeeds are going to be very close to NVIDIA’s other TU106 card, RTX 2070. The base clockspeed is down a bit to 1365MHz, but the boost clock is up a bit to 1680MHz. So on the whole, RTX 2060 is poised to deliver around 87% of the RTX 2070’s compute/RT/texture performance, which is an uncharacteristically small gap between a xx70 card and an xx60 card. In other words, the RTX 2060 is in a good position to punch above its weight in compute/shading performance.
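That 87% figure falls out of the paper specs directly: single-precision throughput is simply CUDA cores × boost clock × 2 FLOPs per clock (one fused multiply-add). A quick sketch:

```python
# Paper-spec FP32 throughput: cores x clock x 2 (FMA counts as 2 FLOPs)
def sp_tflops(cuda_cores, boost_mhz):
    return cuda_cores * boost_mhz * 1e6 * 2 / 1e12

rtx2060 = sp_tflops(1920, 1680)  # ~6.45 TFLOPS
rtx2070 = sp_tflops(2304, 1620)  # ~7.46 TFLOPS
print(f"RTX 2060: {rtx2060:.2f} TFLOPS ({rtx2060 / rtx2070:.0%} of RTX 2070)")
```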

However, TU106 has taken a bigger trim on the backend, and in workloads that aren’t pure compute the drop will hit a bit harder. The card ships with just 6GB of GDDR6 VRAM, as opposed to 8GB on its bigger brother. To get there, NVIDIA is leaving 2 of TU106’s 8 memory controllers unpopulated, resulting in a 192-bit memory bus. This means that even with the use of 14Gbps GDDR6, the RTX 2060 only offers 75% of the memory bandwidth of the RTX 2070; or to put this in numbers, 336GB/sec versus 448GB/sec.
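The bandwidth figures can be checked the same way: bandwidth is the per-pin data rate times the bus width, divided by 8 bits per byte. A quick sketch:

```python
# GDDR6 bandwidth: per-pin data rate (Gbps) x bus width (bits) / 8 bits-per-byte
def bandwidth_gb_s(data_rate_gbps, bus_width_bits):
    return data_rate_gbps * bus_width_bits / 8

rtx2060 = bandwidth_gb_s(14, 192)  # 336 GB/sec
rtx2070 = bandwidth_gb_s(14, 256)  # 448 GB/sec
print(f"RTX 2060: {rtx2060:.0f} GB/sec = {rtx2060 / rtx2070:.0%} of RTX 2070")
```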

And since the memory controllers, ROPs, and L2 cache are all tied together very closely in NVIDIA’s architecture, the ROP count and the amount of L2 cache are also being shaved by 25%. So for graphics workloads the practical performance drop is going to be greater than the 13% deficit in compute throughput, but generally less than the 25% deficit in ROP/memory throughput.
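To put rough numbers on the backend cut, here’s a sketch assuming each 32-bit memory controller carries 8 ROPs and 512KB of L2 with it – our inference from the full TU106 configuration (64 ROPs and 4MB of L2 over a 256-bit bus), not an official figure:

```python
# Assumed per-32-bit-controller resources (inferred, not official)
ROPS_PER_CONTROLLER = 8
L2_KB_PER_CONTROLLER = 512

full_controllers, rtx2060_controllers = 8, 6

rops = rtx2060_controllers * ROPS_PER_CONTROLLER    # 48 ROPs
l2_kb = rtx2060_controllers * L2_KB_PER_CONTROLLER  # 3072 KB (3 MB) of L2
cut = 1 - rtx2060_controllers / full_controllers    # 25% reduction

print(rops, l2_kb, f"{cut:.0%}")
```

Disabling 2 of the 8 controllers reproduces the 48 ROP count from the spec table, along with the 25% cut cited above.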

Speaking of video memory, NVIDIA has simply called this card the RTX 2060, but early indications are that there will be other RTX 2060 configurations with less VRAM, and possibly fewer CUDA cores and other hardware resources. Hence, it seems forward-looking to refer to the product in this article as the RTX 2060 (6GB). As you might recall, the GTX 1060 6GB launched as just the ‘GTX 1060’ and appeared as such in our launch review, up until the release of the ‘GTX 1060 3GB’ a month later – a branding that indicated the smaller frame buffer, but not the card’s lower-performing GPU configuration, which was unrelated to frame buffer size. Combined with the ongoing GTX 1060 naming shenanigans, as well as the GTX 1050 variants (and AMD’s own Polaris naming shenanigans, also of note), it seems prudent to make this clarification now in the interest of future accuracy and consumer awareness.

NVIDIA GTX 1060 Variants Specification Comparison

|  | GTX 1060 6GB | GTX 1060 6GB (9Gbps) | GTX 1060 6GB (GDDR5X) | GTX 1060 5GB (Regional) | GTX 1060 3GB |
|---|---|---|---|---|---|
| CUDA Cores | 1280 | 1280 | 1280 | 1280 | 1152 |
| Texture Units | 80 | 80 | 80 | 80 | 72 |
| ROPs | 48 | 48 | 48 | 40 | 48 |
| Core Clock | 1506MHz | 1506MHz | 1506MHz | 1506MHz | 1506MHz |
| Boost Clock | 1708MHz | 1708MHz | 1708MHz | 1708MHz | 1708MHz |
| Memory Clock | 8Gbps GDDR5 | 9Gbps GDDR5 | 8Gbps GDDR5X | 8Gbps GDDR5 | 8Gbps GDDR5 |
| Memory Bus Width | 192-bit | 192-bit | 192-bit | 160-bit | 192-bit |
| TDP | 120W | 120W | 120W | 120W | 120W |
| GPU | GP106 | GP106 | GP104* | GP106 | GP106 |
| Launch Date | 7/19/2016 | Q2 2017 | Q3 2018 | Q3 2018 | 8/18/2016 |

Moving on, NVIDIA is rating the RTX 2060 for a TDP of 160W. This is down from the RTX 2070, but only slightly, as those cards are rated for 175W. Cut-down GPUs have limited options for reducing their power consumption, so it’s not unusual to see a card like this rated to draw almost as much power as its full-fledged counterpart.

All in all, the GeForce RTX 2060 (6GB) is quite the interesting card, as the value-enthusiast segment tends to be more attuned to price and power consumption than the performance-enthusiast segment. As a value-enthusiast card and potential upgrade option, it will also need to perform well across a wide range of older and newer games – in other words, on traditional rasterization performance rather than hybrid rendering performance.

Meanwhile, when it comes to evaluating the RTX 2060 itself, measuring generalizable hybrid rendering performance remains difficult. DXR was rolled out fairly recently, tied to the Windows 10 October 2018 Update (1809). 3DMark’s DXR benchmark, Port Royal, is due on January 8th, while for realtime raytracing Battlefield V remains the sole title for the moment, with optimization efforts still ongoing, as seen in recent driver releases. Meanwhile, some of Turing’s other advanced shader features (i.e. Variable Rate Shading) are currently only available in Wolfenstein II.

Of course, RTX support for a number of titles has been announced, and many are due this year, but there is no centralized resource for keeping track of availability. It’s true that developers are ultimately responsible for this information and their games, but on the flipside, RTX has required very close cooperation between NVIDIA and developers for quite some time. In the end, RTX is a technology platform spearheaded by NVIDIA and inextricably linked to their hardware, so it’s to the detriment of potential RTX 20 series owners that they are left to research and collate for themselves which current games can make use of the specialized hardware features they purchased.

Planned NVIDIA Turing Feature Support for Games

| Game | Real Time Raytracing | Deep Learning Supersampling (DLSS) | Turing Advanced Shading |
|---|---|---|---|
| Anthem |  | Yes |  |
| Ark: Survival Evolved |  | Yes |  |
| Assetto Corsa Competizione | Yes |  |  |
| Atomic Heart | Yes | Yes |  |
| Battlefield V | Yes |  |  |
| Control | Yes |  |  |
| Dauntless |  | Yes |  |
| Darksiders III |  | Yes |  |
| Deliver Us The Moon: Fortuna |  | Yes |  |
| Enlisted | Yes |  |  |
| Fear The Wolves |  | Yes |  |
| Final Fantasy XV |  | Yes (available in standalone benchmark) |  |
| Fractured Lands |  | Yes |  |
| Hellblade: Senua's Sacrifice |  | Yes |  |
| Hitman 2 |  | Yes |  |
| In Death |  |  | Yes |
| Islands of Nyne |  | Yes |  |
| Justice | Yes | Yes |  |
| JX3 | Yes | Yes |  |
| MechWarrior 5: Mercenaries | Yes | Yes |  |
| Metro Exodus | Yes |  |  |
| Outpost Zero |  | Yes |  |
| Overkill's The Walking Dead |  | Yes |  |
| PlayerUnknown's Battlegrounds |  | Yes |  |
| ProjectDH | Yes |  |  |
| Remnant: From the Ashes |  | Yes |  |
| SCUM |  | Yes |  |
| Serious Sam 4: Planet Badass |  | Yes |  |
| Shadow of the Tomb Raider | Yes |  |  |
| Stormdivers |  | Yes |  |
| The Forge Arena |  | Yes |  |
| We Happy Few |  | Yes |  |
| Wolfenstein II |  |  | Yes (Variable Shading) |

So the RTX 2060 (6GB) is in a better position than the RTX 2070 was. With comparable GTX 10 series products either very low on stock (GTX 1080, GTX 1070) or at higher prices (GTX 1070 Ti), there’s less potential for sales cannibalization. And as Ryan mentioned in the AnandTech 2018 retrospective on GPUs, with leftover Pascal inventory from the cryptocurrency bubble, there’s much less pressure to sell Turing GPUs at lower prices. So the RTX 2060 leaves the existing GTX 1060 6GB (1280 cores) and 3GB (1152 cores) with breathing room. That being said, $350 is far from the usual ‘mainstream’ price point, and more expensive still than the popular $329 enthusiast-class GTX 970.

Across the aisle, the recent Radeon RX 590 is in the mix, though its direct competition is the GTX 1060 6GB. Otherwise, the Radeon RX Vega 56 is likely the closer matchup in terms of performance. Even then, AMD and its partners are going to have little choice here: either drop prices to accommodate the introduction of the RTX 2060, or essentially wind down Vega sales.

Unfortunately we've not had the card in for testing as long as we would've liked, but regardless, RTX platform performance testing is in the same situation it was at the RTX 2070 launch. Because the technology is still in its early days, we can’t accurately determine the performance suitability of the RTX 2060 (6GB) as an entry point for the RTX platform, so the same caveats apply to gamers considering taking the plunge.

Q1 2019 GPU Pricing Comparison

| AMD | Price | NVIDIA |
|---|---|---|
| Radeon RX Vega 56 | $499 | GeForce RTX 2070 |
|  | $449 | GeForce GTX 1070 Ti |
|  | $349 | GeForce RTX 2060 (6GB) |
|  | $335 | GeForce GTX 1070 |
| Radeon RX 590 | $279 |  |
|  | $249 | GeForce GTX 1060 6GB (1280 cores) |
| Radeon RX 580 (8GB) | $200/$209 | GeForce GTX 1060 3GB (1152 cores) |
Meet The GeForce RTX 2060 (6GB) Founders Edition


Comments

  • Storris - Tuesday, January 8, 2019 - link

    The RTX2060 game bundle includes RTX showcases Battlefield 5 and Anthem, yet you haven't tested either of those games.

    What's the point of an RTX review, if the RTX doesn't actually get reviewed?

    Also, what's the point of a launch, and the day 1 driver, when no-one can buy the card yet?
  • catavalon21 - Thursday, March 7, 2019 - link

    Paper launches are nothing new for either Nvidia or AMD GPUs.
  • eastcoast_pete - Tuesday, January 8, 2019 - link

    My take-home is: the 2060 is a good, maybe even very good graphics card. Price-performance wise, it's not a bad proposition, if (IF) you're reasonably sure that you won't run into the memory limit. The 6 GB the 2060 comes with is vintage Nvidia: it'll keep the 2060 off the 2070's back even for games that wouldn't require the 2070's bigger GPU brawn, and give Nvidia an easy way to make a 2060 Ti model in the near future; just add 2 GB for a full 8.
    That's my biggest beef with this card: it could have gone from a good to a great mid-upper level card just by giving it the 8 GB VRAM to start with. Now, it's not so sure how future proof it is.
  • TheJian - Tuesday, January 8, 2019 - link

    Going to do this in a few posts, since I was writing while reading a dozen or more reviews and piling up a TON of data. I own AMD stock (NV soon too), so as a trader, you HAVE to do this homework, PERIOD(or you're dumb, and like to lose money...LOL). Don't like data or own stock? Move along.

    Why is DLSS and RT or VRS benchmarks not shown? It should have been the HIGHLIGHT of the entire article. NO HD textures in far cry (would a user turn this off before testing it?)?
    CLEARLY DLSS is awesome. Note how many times DLSS makes the 2060 run like a 2080 with TAA. 39% improvement he says with DLSS. WOW. 6:29 you see 2060+DLSS BEATING 2080 with TAA. Note he has MULTIPLE tests here and a very good vid review with many useful data points tested. Why can't anandtech show any of these games that use NEW TECH? Ah right, sold out to AMD as a portal site. Same as Tomshardware (your sister site, no dlss or RT there either, just COMING soon...LOL). Note he also says in there, it would be INSANE to do RTX features and not have 2060 capable as it will be the BASE of RTX cards likely for years (poor will just get them next year at 7nm or something for a little cheaper than this year people) kind of how Intel screwed base graphics with, well CRAP graphics integrated so devs didn't aim higher. This is the same with last gen console stuff, which held us back on PC for how long? @9:19, 60% faster than 1060 for 40% more MONEY (in older crysis 3 even). It was 42% faster than RX 590 in the same game. Next game Shadow of the Tomb Raider, 59% faster than 1060, 40% faster than RX590. Out of 11 titles tested it’s better than Vega56 in 10 of them, only far cry 5 was better on vega56 (only because of perf spurt in beginning of benchmark or that one lost too). Beats Vega64 in many too even rebenched with latest drivers as he notes.

    @ 14:30 of the vid above Wolf New Collossus with VRS perf turned on vs. 1060 92% higher fps (again for 40% more cash)! Vega56 just died, 64 not far behind, as you get RT+DLSS on NV which just adds to above info. Cheapest Vega on newegg $369, Vega64 higher at $399. Tough sell against 2060 WITH RT+DLSS+VRS and less watts (210 V56, 295 V64, 160 for 2060 RTX - that's bad). Power bill for 50w 8hrs a day is $19 @ .12 (and many places over .2 in USA never mind elsewhere). So double that for V64 at best (less than 8hrs) if you game and have a kid etc that does too on that PC. Easy to hit 8hrs avg even alone if you game heavy just on weekends. You can easily put in 20hrs on a weekend if you're single and a gamer, and again easily another 4 a night during the week. Got kids, you’ll have more people doing damage. My current old Dell 24 (11yrs old Dell wfp2407-hc) uses ~110w. Simply replacing it pays for Gsync, as I'd save the same $19 a year (for a decade? @ .12 watt cost, many places in USA over .20 so savings higher for some) just buying a 30in new model at 50w. Drop that to 27in Dell and it goes to 35w! Think TCO here people, not today's price. So simply replacing your monitor+gpu (say 2060), might save you $39 a year for 5-10yrs. Hey, that's a free 2060 right there ;) This is why I'll laugh at paying $100 more for 7nm with 1070ti perf (likely much higher) with better watts/more features. I know I'll play on it for 5yrs probably then hand it to someone else in the family for another 3-5. I game more on my main PC than TV (htpc), so 1070ti can move to the HTPC and I'll save on the main pc with 7nm more. Always think TCO.
    “The end result is that instead of the RTX 2080 improving on the GTX 1080 by an average of around 25 to 30% in most titles, the 2080 outperforms the 1080 by around 60% in Wolfenstein II using VRS.”
    “So in essence, turning the VRS to Performance mode gets you half way between a 1080 Ti and a 2080 Ti, as opposed to basically matching the 1080 Ti in performance.” And again mentions next gen consoles/handheld to have it.
    Again, why are you even bothering with 4k at 1.42% usage on steam (125 MILLION GAMERS). NOBODY is using it. Yeah, I call 1.42% NOBODY. Why not test more USEFUL games at resolutions people actually use? This is like my argument with Ryan on the 660ti article where he kept claiming 1440p was used by enthusiasts...LOL. Not even sure you can claim that TODAY, years later as 1080p is used by 60.7% of us and only 3.89% on 1440p. Enthusiasts are NOT playing 4k or even 1440p unless you think gaming enthusiasts are only 5% of the public? Are you dense? 4k actually dropped .03%...ROFLMAO. 72% of MULTI-MONITOR setups are not even 4k…LOL. Nope, just 3840x1080. Who is paying you people to PUSH 4k when NOBODY uses it? You claimed 1440p in 2012 for GTX 660ti. That is stupid or ignorant even TODAY. The TOTAL of 1440p+4K is under 5%. NOBODY is 5%. Why is 4K listed before 1080p? NOBODY is using it, so the focus should be 1080P! This is like setting POLICY in your country based on a few libtards or extremists wanting X passed. IE, FREE COLLEGE for everyone, without asking WHO PAYS FOR IT? Further, NOT realizing MANY people have no business going to college as they suck at learning. Vocational for them at best. You are wasting college on a kid that has a gpa under 3.0 (attach any gpa you want, you get the point). 4k, but, but, but…So what, NOBODY USES IT. Nobody=1.42%...LOL.

    MipsToRemove, again lowering qual? Why not test full-on, as many users don't even know what .ini files are?...LOL. I'm guessing settings like this make it easier for AMD. MipsToRemove=0 sets ground textures to max and taxes vid memory. What are you guys using? If it's not 0 why?

    “The highest quality preset, "Mein leben!", was used. Wolfenstein II also features Vega-centric GPU Culling and Rapid Packed Math, as well as Radeon-centric Deferred Rendering; in accordance with the preset, neither GPU Culling nor Deferred Rendering was enabled. NVIDIA Adaptive Shading was not enabled.”
    So in this case you turn off tech for both sides that NOBODY would turn off if buying EITHER side (assuming quality doesn’t drop SET properly). You buy AMD stuff for AMD features, and you do the same for RTX stuff on NV etc. Who goes home and turns off MAIN features of their hardware. Unless it makes a game UNPLAYABLE why the heck would ANYONE do this? So why test like this? Who tests games in a way WE WOULD NOT PLAY them? Oh, right, anandtech. Providing you with the most useless tests in the industry, “Anandtech”. We turn off everything you’d use in real life, you’re welcome…LOL.

    Anandtech quote:
    “hidden settings such as GameWorks features” for Final Fantasy and no DLSS. Umm, who buys NV cards to turn off features?
    “For our testing, we enable or adjust settings to the highest except for NVIDIA-specific features and 'Model LOD', the latter of which is left at standard.”
    WTH are you doing here? Does it speed up NV cards? If yes why would anyone turn it off (trying to show NV weaker?)? I paid for that crap, so I’d definitely turn it on if it is FASTER or higher QUALITY.
  • TheJian - Tuesday, January 8, 2019 - link

    Crap, have to reply to my own post to keep info in order (5.5 pages in word...LOL). Oh well, I'll just title 2nd post and on so I don't have to care ;)
    If someone can test DLSS in EARLY DECEMBER, why can’t Anandtech in January? You could at least show it on NV vs. NV without so people see how FAST it is (39% as noted before by DigitalFoundry youtube vid above). Ah, right, you don’t want people to know there is a 39% bump in perf coming for many games huh? I see why AMD will skip it, it takes a lot of homework to get it right for each game, as the article from techpowerup discusses. Not feasible on 50mil net, maybe next year:
    “DLSS is possible only after NVIDIA has generated and sampled what it calls a "ground truth" images—the best iteration and highest image quality image you can engender in your mind, rendered at a 64x supersampling rate. The neural network goes on to work on thousands of these pre-rendered images for each game, applying AI techniques for image analysis and picture quality optimization. After a game with DLSS support (and NVIDIA NGX integration) is tested and retested by NVIDIA, a DLSS model is compiled. This model is created via a permanent back propagation process, which is essentially trial and error as to how close generated images are to the ground truth. Then, it is transferred to the user's computer (weighing in at mere MBs) and processed by the local Tensor cores in the respective game (even deeper GeForce Experience integration). It essentially trains the network to perform the steps required to take the locally generated image as close to the ground truth image as possible, which is all done via an algorithm that does not really have to be rendered.”

    Yeah, AMD can’t afford this on a PER GAME basis at under $50mil NET income yearly. Usually AMD posts a LOSS BTW, 3 of 4 recent years 400mil loss or MORE, 8B lost over the life of AMD as a company, 2018 should be ~400mil vs. NV 3B-4B+ NET per year now (3.1B 2017 + over 4B for 2018 1B+ per Q NET INCOME) .

    “With DLSS enabled, the game runs at a nice and steady 67 FPS (a 33% performance increase). That's a huge difference that alone can be the deciding factor for many.”
    Agreed, I guess that’s why Anadtech/toms refuse to delve into this tech? ;)
    “You can now increase settings on models and textures that would have previously driven FPS down below playable levels. DLSS will in turn bring the framerates back up.” OK, then, it’s great, says he likes quality also stating “pleased”. I’m sure someone will say it’s not as good as 4K native. Well no, the point is 4K “LIKE” quality on crappy hardware that can’t support it ;) As noted in the previous quote. Turn on options that would normally kill your fps, and use DLSS to get those fps back making it playable again. DLSS will always look better than your original res, as again, it’s turning 1080p into 1440p/4k (at 4k, so far in this game, it’s just fps boosting). From what I’ve seen its pretty much 1440p for free without a monitor upgrade, or reasonable 4k “LIKE” quality again, on 1080p or 1440p. Also enables playable 4k for some that would normally turn crap down to get there or not play at all.

    I could go on, but I can’t even be bothered to read the rest of the article as I keep having to check to see if benchmarks include some crap that makes the data USELESS to me yet again. You should be testing AMD best advantages vs. NV best advantages ALWAYS unless it changes QUALITY (which would be like cheating if you lower it for better perf). IE, turn on everything that boosts speed for BOTH sides, unless again, QUALITY is dropping then turn it off. USERS will turn on everything unless it HURTS them right? 8:02 in that youtube vid above, he gains 3-4% PER move up in adaptive shading settings. This lets the 2060 best 1080 by max 15%. Besides, there are so many OTHER reviews to read that maybe didn’t do all the dumb things pointed out here.

    I did skip to the end (conclusions on gpu reviews are usually ridiculous). Not quite mainstream, but that mark has moved up, just ask Intel/NV (hedt, and top 2080ti selling faster than lower models, might change with 2060 though). “The biggest question here isn't whether it will impact the card right now, but whether 6GB will still be enough even a year down the line.”...LOL. Uh, I can say that about EVERY card on the market at some point right? <15% of 125mil steam users have 8GB+. Don't you think game devs will aim at 85% of the market first (maybe a HD texture pack pushes a few over later, but main game aims at MAIN audience)? This is like consoles etc mentioned above holding us back, intel igpu doing the same. 6GB will too probably I guess.

    "Now it is admittedly performing 14-15% ahead of the 8GB GTX 1070, a card that at MSRP was a relatively close $379"...LOL. Anything to take a shot, even when the conclusion is asinine as you're not even talking what happens NEXT YEAR when all the games are using DLSS+RT (either or both), and maybe VSR which got 92% boost vs. 1060 here. That is a LOT more than 15% over 1070 too! FAR higher than 59% you quote without using it right??). I'd guess 20-30mil will be sold from RTX line in the next year (NV sells 65-70% of the discrete cards sold yearly, surely 1/2 of NV sales will be RTX after 7nm), and much of those will be 6GB or less? No doubt the 3050, next year or whatever will then have 6GB too and sell even larger numbers. If you are claiming devs will aim at 15% of the market with 8GB, I will humbly guess they will DEFINITELY aim at 6GB which IS already 11% (same as 8GB % BTW), and will double in the next year or less. Most are on 1080p and this card does that handily in everything. With only 3.5% on 1440p, I don’t think many people will be aiming at 8GB for a while. Sure you can max something the crap out of a few to CAUSE this, but I doubt many will USE it like this anyway (you benchmark most stuff with features disabled!).

    There are ~92-100mil discrete cards sold yearly now (36% of ~260-280mil pcs sold yearly), and 70% currenly are NV cards. How many 6GB cards OR LESS do you think will sell from ~62-70mil NV gpus sold in 2018? AMD might release a 6GB in the next year or two also. Navi has a 4GB at under $200 (rumor $129, I think higher but…), so plenty of new stuff will sell under 6GB. Hopefully 2050 etc will have 6GB of GDDR5x (cheaper than GDDR6 for now) or something too too get more 6GB out there raising the 40% on 2GB/4GB (~20% each, great upgrade for 2GB people). Your site is such a fan of TURNING OFF features, or graphics DOWN to play 1440p/4k, I don't get why this is a problem anyway. Can't you just turn off HD textures like you already do to avoid it (or something else)? Never mind, I know you can. So something to revisit is just hogwash. People can just turn down one or two things and magically it will be using 6GB or less again...LOL. Again, 6GB or LESS is 85% of the market in gamers! See steam. How many games hit 8GB in your benchmarks? LOL. You test with stuff OFF that would cause an 8GB hit (hd textures etc) already. What are you complaining about here? The card doesn't stop functioning because a game goes over 6GB, you just turn something down right? LOL.

    “ What makes the $350 pricing at least a bit more reasonable is its Radeon competition. Against RX Vega at its current prices the RTX 2060 (6GB) is near-lethal”
    So is it “a bit more reasonable” or LETHAL? LETHAL sounds VERY reasonable to anyone looking at AMD before July if navi even hits by then and they don’t have RT+DLSS AFAIK, so worth something with the numbers from the youtube guy at digitalfoundry. His results testing the NEW features (well duh anandtech) are quite amazing.

    “That driver is due on the same day of the RTX 2060 (6GB) launch, and it could mean the eventual negation of AMD’s FreeSync ecosystem advantage.”

    Uh, no, it DEFINITELY ENDS IT AS OF NOW. It is no ADVANTAGE if the other guy has it on all current cards right? It’s just a matter of checking the top ~100 freesync monitors for approval as NV will surely aim at the best sellers first (just like game devs vs. 85% of the market). Oh wait, they tested 400 already, 12 made it so far, but you can turn it on in all models if desired:
    “Owners of other Adaptive-Sync monitors will be able to manually enable VRR on Nvidia graphics cards as well, but Nvidia won't certify how well that support will work.”
    So maybe yours works already even if NOT on the list, just a matter of how well, but heck that describes freesync anyways (2nd rate gen1 at least, gen2 not much better as AMD isn’t forcing quality components still) vs. gsync which is easily the best solution (consider TCO over monitor life). You didn’t even mention DLSS in the conclusion and that is a MASSIVE boost to perf, netting the hit from RT basically. But you guys didn’t even go there…ROFLMAO. Yet again, you mention the 6GB in the final paragraph…LOL. How many times can you mention a “what if >6GB” scenario (that can simply be fixed by turning something down slightly or OFF like HD textures) vs. IGNORING completely DLSS a main feature of RTX cards? A LOT, apparently. Even beta benchmarks of the tech are AWESOME. See digitalfoundry guy above. He and the techpowerup VSR/DLSS info both say 32%/33% gain turning either on. You don’t think this should be discussed at all in your article? That is MASSIVE for EITHER tech right? As techpowerup notes in their DLSS article:
    “Our RTX 2080 Ti ran at 50 FPS, which was a bit too low for comfort. With DLSS enabled, the game runs at a nice and steady 67 FPS (a 33% performance increase). That's a huge difference that alone can be the deciding factor for many.”
    OK, so 2080ti shows 33% DLSS, and 2060 shows 32% at digitalfoundry on VSR. Seems like even patched in games will do it for this perf increase. Why would a dev ignore ~33% increase across the board? AMD better get this tech soon as I totally agree this would persuade MANY buyers alone. Note DF did their talk of this at 1080p, techpowerup did it at 4k but not as much value here IMHO, just making playable from unplayable really, but lower res add 4k LIKE LOOK too.
  • TheJian - Tuesday, January 8, 2019 - link

    Screw it, all in a row, doesn't look bad...LOL. 3rd final (assuming the post takes, each grows):
    How can you ignore something that is SIMPLE to integrate, and will be adding VSR plugins for game engines soon. Do you even do much work if NV includes it as a plugin? I doubt it. Also note, the more COMPLEX your pixel shaders are, the MORE you GAIN in perf using the tech. So your WHAT IF scenario (games always needing more stuff) works in reverse here right? Games will NOT pass this up if it’s easy to add especially with a plugin in game engines. But you guys didn’t even mention a 33% perf add and as he also noted, when GTX 980 launched it wasn’t a massive improvement, but over it’s life “as maxwell matured, it left previous gens in the DUST” with driver updates!
    For those who want to know VSR tech. This vid is Dec4…LOL. A month later Anandtech has never heard of it. Also note the info regarding handhelds, as VRS tech really helps when your gpu resources are stretched to the limit already (think Nintendo switch etc). Note Devs have been asking for this tech for ages, so he thinks next consoles will support it.
    Discussion here of Content Adaptive/Foveated/Motion Adaptive/Lens Optimized & VRS as a whole etc shading. NVIDIA claims developers can implement content-based shading rate reductions without modifying their existing rendering pipeline and with only small changes to shader code. This is HUGE for VR too as you can avoid rendering pixels that would be DISCARDED anyway before going to VR headset.
    “Turing’s new Variable Rate Shading techniques, as well as its more flexible Multiview Rendering capabilities will take time for developers to adopt, but the net gain could be over a 20 percent speed-up in graphically rich scenes and game engines, but with comparable or higher image quality as a result of these optimizations.”
    Actually we’ve already seen 33% in Wolfenstein right? So he’s a little low here, but point made.
    “Variable Rate Shading is a new, easy to implement rendering technique enabled by Turing GPUs. It increases rendering performance and quality by applying varying amount of processing power to different areas of the image.”
    Ahh, well, only a 32% boost feature (netting 92% boost over 1060…LOL), so who cares about this crap, and never mind maybe more too depending on complexity as noted before. But AMD doesn’t have it, so turn it all off and ignore free perf…LOL.
    Tomshardware knew what it was at 2080 review in SEPT and noted THIS:
    “But on a slower card in a more demanding game, it may become possible to get 20%+-higher performance at the 60ish FPS level. Perhaps more important, there was no perceivable image quality loss.”

    OK so why did they turn it all off in 2060 review?
    “To keep our Wolfenstein II benchmarks fair, we disable all of the Turing cards' Adaptive Shading features.”
    ER, UM, if QUALITY is NOT lost, as they noted before, WTH would you turn it OFF for? Is it UNFAIR that NV is far faster due to BETTER tech that does NOT degrade quality? NO, it's AMD's problem they don't have it. ALL users will turn EVERY feature on if those features do NOT drop QUALITY, right? Heck, OK, if you're not going to test it AGAINST AMD, then at least test it against itself, so people can see how massive the boost can be. Why would you ignore that? Ah right, sister site just as bad as Anandtech…ROFL. Acting like one company doesn't have features that they are CLEARLY spending R&D on (and REMEMBER, as noted before, devs wanted this tech!), just because the OTHER guy hasn't caught up, is DUMB or MISLEADING for both sites, which have gone down the toilet for reliable data as you'd use your product. It only counts if AMD has it too…Until then, THESE FEATURES DON'T EXIST, we SWEAR…LOL. The second AMD gets it too, we'll see Toms/Anand bench it…LOL. Then it won't be "turned off Turing's adaptive features", it will be "we turned on BOTH cards' Adaptive features" because AMD now won't be left in the DUST. They screw up their conclusion page too…LOL:
    “No, GeForce RTX 2060 needs to be faster and cheaper than the competition in order to turn heads.”
    Uh, why do they have to be CHEAPER if they are FASTER than AMD?
    “Nvidia’s biggest sin is probably calling this card a GeForce RTX 2060. The GeForce GTX 1060 6GB launched at $250.”
    Uh, no, it started at $299 as the Founders Edition, just as this one is called that (by PCWorld, Guru3d etc), so again, expect a price drop once NV is done selling them direct.
    “GeForce RTX 2060 - Taking Turing Mainstream”
    I guess some think $350 is mainstream…LOL.
    As PCWorld says, “Not only does the card pack the dedicated RT and tensor core hardware that gives RTX GPUs their cutting-edge ray tracing capabilities, it trades blows in traditional game performance with the $450 GTX 1070 Ti rather than the $380 GTX 1070.”
    “The only potential minor blemish on the spec sheet: memory capacity. The move to GDDR6 memory greatly improves overall bandwidth for the RTX 2060 versus the GTX 1060, but the 6GB capacity might not be enough to run textures and other memory-intensive graphics options at maximum settings in all games if you’re playing at 1440p resolution.”
    OK, so according to them, who cares, as 1440p cards have 8GB, and 1440p is only 3.5% of the market anyway…LOL.
    NV's response to why 6GB: “Right now the faster memory bandwidth is more important than the larger memory size.” They could have put in a cheaper 8GB of GDDR5, but they chose faster rather than more to hit $349 (and probably $300 next month from other vendors with non-Founders models). Though they think maybe all will be $349, it seems.
    “We focused our testing on 1440p and 1080p, as those are the natural resolutions for these graphics cards.”
    LOL, agreed…Why anyone tests 4K here, with ~1.5% of the market on it, is dumb.
    “We use the Ultra graphics preset but drop the Shadow and Texture Quality settings to High to avoid exceeding 8GB of VRAM usage”
    Ahh, so PCWorld proves Anandtech is misleading people, acting like cards just die if you hit 8GB. Nope, you turn something down in a game like Middle Earth: Shadow of War…LOL. Anandtech acts like this is a SHOWSTOPPER. OMG, OMG…LOL. Hmm, 18 degrees lower than a Vega 64, 8 below a Vega 56, ouch.

    Should all make sense, but it is almost 10am and I've been up all night...LOL. No point in responding, Anandtech; that will just end with me using DATA to beat you to death like the 660 Ti article (used Ryan's own data against his own conclusions, and tons of other data from elsewhere, to prove him an outright liar), which ended with Ryan and company calling me names/attacking my character (not the data...LOL), which just looks bad for professionals ;) Best to just change how you operate, or every time a big product hits and I have time, boom, data on how ridiculous this site has become since Anand left (Toms since Tom left...ROFL). I love days off. Stock homework all day, game a bit, destroy a few dumb reviews if I have time left over. :) Yeah, I plan days off for "EVENTS", like launches. Why does my review seem to cover more RELEVANT data than yours? ROFL. I own AMD stock, NOT Nvidia (yet; this year…LOL, wait for the Q1 down report first, people, IMHO).

    One more point: Vulkan already has VRS support, and PCWorld mentions NV is working with MSFT on DX support for VRS, but “Until then, it'll expose Adaptive Shading functionality through the NVAPI software development kit, which allows direct access to GPU features beyond the scope of DirectX and OpenGL.” OH OK ;) Back to reading OTHER reviews, done trashing this useless site (for GPUs at least; too much info missing that buyers would LIKE to know about perf).
  • PeachNCream - Wednesday, January 9, 2019 - link

    TL;DR
  • boozed - Friday, January 11, 2019 - link

    JTFC...
  • LSeven777 - Tuesday, January 22, 2019 - link

    It's a shame you don't use all that energy to do something useful.
  • El Sama - Tuesday, January 8, 2019 - link

    This company needs to be put back into reality (where are you, AMD?); at this trend we will be having a $500 RTX 2260 in a few years.
