GeForce RTX 2070 Max-Q vs GeForce RTX 3080 Ti Max-Q


Primary details

GPU architecture, market segment, value for money and other general parameters compared.

Parameter | GeForce RTX 2070 Max-Q | GeForce RTX 3080 Ti Max-Q
Place in the ranking | 181 | not rated
Place by popularity | not in top-100 | not in top-100
Power efficiency | 27.27 | no data
Architecture | Turing (2018-2022) | Ampere (2020-2024)
GPU code name | TU106B | GA103S
Market segment | Laptop | Laptop
Release date | 29 January 2019 | 25 January 2022

Detailed specifications

General parameters such as number of shaders, GPU core base clock and boost clock speeds, manufacturing process, texturing and calculation speed. Note that power consumption of some graphics cards can well exceed their nominal TDP, especially when overclocked.

Parameter | GeForce RTX 2070 Max-Q | GeForce RTX 3080 Ti Max-Q
Pipelines / CUDA cores | 2304 | 7424
Core clock speed | 885 MHz | 585 MHz
Boost clock speed | 1185 MHz | 1125 MHz
Number of transistors | 10,800 million | no data
Manufacturing process technology | 12 nm | 8 nm
Power consumption (TDP) | 80 Watt | 80 Watt
Texture fill rate | 170.6 GTexel/s | 261.0 GTexel/s
Floating-point processing power | 5.46 TFLOPS | 16.7 TFLOPS
ROPs | 64 | 96
TMUs | 144 | 232
Tensor Cores | 288 | 232
Ray Tracing Cores | 36 | 58
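The throughput figures above follow directly from the listed core counts and boost clocks. As a rough sanity check (using the standard conventions: FP32 throughput counts one FMA as 2 FLOPs per CUDA core per clock, and texture fill rate assumes one texel per TMU per clock):

```python
def fp32_tflops(cuda_cores: int, boost_mhz: int) -> float:
    """FP32 throughput in TFLOPS: 2 FLOPs (one FMA) per core per clock."""
    return 2 * cuda_cores * boost_mhz * 1e6 / 1e12

def fill_rate_gtexel(tmus: int, boost_mhz: int) -> float:
    """Texture fill rate in GTexel/s: one texel per TMU per clock."""
    return tmus * boost_mhz * 1e6 / 1e9

# GeForce RTX 2070 Max-Q: 2304 cores, 144 TMUs, 1185 MHz boost
print(round(fp32_tflops(2304, 1185), 2))      # 5.46 (TFLOPS)
print(round(fill_rate_gtexel(144, 1185), 1))  # 170.6 (GTexel/s)

# GeForce RTX 3080 Ti Max-Q: 7424 cores, 232 TMUs, 1125 MHz boost
print(round(fp32_tflops(7424, 1125), 2))      # 16.7 (TFLOPS)
print(round(fill_rate_gtexel(232, 1125), 1))  # 261.0 (GTexel/s)
```

Both computed values match the table, which confirms the listed figures are theoretical peaks at boost clock rather than measured results.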

Form factor & compatibility

Information on compatibility with other computer components, useful when choosing a future computer configuration or upgrading an existing one. For desktop graphics cards these are the interface and bus (motherboard compatibility) and additional power connectors (power-supply compatibility).

Parameter | GeForce RTX 2070 Max-Q | GeForce RTX 3080 Ti Max-Q
Laptop size | large | no data
Interface | PCIe 3.0 x16 | PCIe 4.0 x16
Supplementary power connectors | None | None

VRAM capacity and type

Parameters of VRAM installed: its type, size, bus, clock and resulting bandwidth. Integrated GPUs have no dedicated video RAM and use a shared part of system RAM.

Parameter | GeForce RTX 2070 Max-Q | GeForce RTX 3080 Ti Max-Q
Memory type | GDDR6 | GDDR6
Maximum RAM amount | 8 GB | 16 GB
Memory bus width | 256-bit | 256-bit
Memory clock speed | 1500 MHz | 1500 MHz
Memory bandwidth | 384.0 GB/s | 384.0 GB/s
Shared memory | - | -
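The identical bandwidth figure follows from the identical memory configuration. A minimal sketch, assuming the usual GDDR6 convention of 8 data transfers per base memory clock (so 1500 MHz base corresponds to a 12 Gbps effective rate per pin):

```python
def gddr6_bandwidth_gbs(mem_clock_mhz: int, bus_width_bits: int,
                        transfers_per_clock: int = 8) -> float:
    """Peak bandwidth in GB/s: effective rate (MT/s) x bus width in bytes."""
    effective_mts = mem_clock_mhz * transfers_per_clock  # 1500 * 8 = 12000 MT/s
    return effective_mts * (bus_width_bits / 8) / 1000   # MT/s * bytes -> GB/s

# Both cards: 1500 MHz GDDR6 on a 256-bit bus
print(gddr6_bandwidth_gbs(1500, 256))  # 384.0 (GB/s)
```

This is why the newer card's larger 16 GB VRAM pool offers no bandwidth advantage here: clock, memory type, and bus width are all unchanged.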

Connectivity and outputs

Types and number of video connectors present on the reviewed GPUs. As a rule, data in this section is precise only for desktop reference cards (so-called Founders Edition for NVIDIA chips). OEM manufacturers may change the number and type of output ports, while for notebook cards the availability of certain video outputs depends on the laptop model rather than on the card itself.

Parameter | GeForce RTX 2070 Max-Q | GeForce RTX 3080 Ti Max-Q
Display connectors | No outputs | No outputs
G-SYNC support | + | -

Supported technologies

Supported technological solutions. This information is useful if you need a particular technology for your purposes.

Parameter | GeForce RTX 2070 Max-Q | GeForce RTX 3080 Ti Max-Q
VR Ready | + | no data

API compatibility

List of supported 3D and general-purpose computing APIs, including their specific versions.

Parameter | GeForce RTX 2070 Max-Q | GeForce RTX 3080 Ti Max-Q
DirectX | 12 Ultimate (12_1) | 12 Ultimate (12_2)
Shader Model | 6.5 | 6.5
OpenGL | 4.6 | 4.6
OpenCL | 1.2 | 3.0
Vulkan | 1.2.131 | 1.3
CUDA compute capability | 7.5 | 8.6

Pros & cons summary


Parameter | GeForce RTX 2070 Max-Q | GeForce RTX 3080 Ti Max-Q
Recency | 29 January 2019 | 25 January 2022
Maximum RAM amount | 8 GB | 16 GB
Chip lithography | 12 nm | 8 nm

The RTX 3080 Ti Max-Q has an age advantage of about 3 years, a 100% higher maximum VRAM amount (16 GB vs 8 GB), and a 50% more advanced lithography process (8 nm vs 12 nm).
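The percentage figures in the summary are simple relative differences against the older card's value; a quick sketch (with a hypothetical `pct_higher` helper, not part of the original comparison):

```python
def pct_higher(a: float, b: float) -> float:
    """How much higher b is than a, in percent."""
    return (b - a) / a * 100

print(pct_higher(8, 16))  # 100.0 -> "100% higher maximum VRAM amount"
print(pct_higher(8, 12))  # 50.0  -> the 12 nm process is 50% coarser than 8 nm
```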

We could not pick a winner between the GeForce RTX 2070 Max-Q and the GeForce RTX 3080 Ti Max-Q: we have no benchmark results for this pair to judge by.


If you still have questions about choosing between the reviewed GPUs, ask them in the comments section and we will answer.



Community ratings

Here are the user ratings of the compared graphics cards:

GeForce RTX 2070 Max-Q: 4.2 out of 5 (343 votes)
GeForce RTX 3080 Ti Max-Q: 4.6 out of 5 (9 votes)
