FSR 2.x works differently from FSR 1: it requires game-engine level information (motion vectors and the like) to not only upscale to the target resolution but also reconstruct detail and apply sharpening, whereas FSR 1 needs none of that and was just a tad better as a plain upscaling algorithm.
To be able to use FSR 2.x, the game has to support it natively, because only the game engine can provide that information.
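To make "game-engine level information" a bit more concrete, here's a purely illustrative sketch (hypothetical C++ types, not the actual AMD FidelityFX FSR 2 SDK) of the per-frame data a temporal upscaler like FSR 2.x consumes, compared to a spatial upscaler like FSR 1 that only ever sees the finished low-resolution color image:

```cpp
// Hypothetical, illustrative types only -- NOT the actual AMD FidelityFX FSR 2 SDK.
// The point is simply which engine-level data a temporal upscaler (FSR 2.x)
// needs every frame, versus a spatial upscaler (FSR 1) that only needs the
// finished low-resolution color buffer.
struct UpscalerFrameInputs {
    const void* colorBuffer;      // low-res rendered frame - the only thing FSR 1 needs
    const void* depthBuffer;      // scene depth, used for detail reconstruction (FSR 2.x)
    const void* motionVectors;    // per-pixel motion vectors from the engine (FSR 2.x)
    float       jitterOffsetX;    // sub-pixel camera jitter applied this frame (FSR 2.x)
    float       jitterOffsetY;
    float       frameTimeDeltaMs; // time since the previous frame (FSR 2.x)
    bool        cameraCut;        // tells the upscaler to reset its history on hard cuts
    float       sharpness;        // strength of the built-in sharpening pass
};
```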
Typically FSR 2.x, like DLSS, is split into different quality levels. Each renders the original frame at a lower resolution than the target resolution and then scales it back up to the target resolution and sharpens it (numbers for a 1080p target below, with the math sketched right after the list):
Quality - 67% (1280 x 720 -> 1920 x 1080)
Balanced - 59% (1129 x 635 -> 1920 x 1080)
Performance - 50% (960 x 540 -> 1920 x 1080)
Ultra Performance - 33% (640 x 360 -> 1920 x 1080)
(Source: AMD)
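If you want to sanity-check those numbers, here's a minimal C++ sketch (my own, not the FSR SDK) that derives the internal render resolution from the target resolution and each mode's per-axis scale factor; 1.5x, 1.7x, 2x and 3x are the divisors behind the 67/59/50/33% figures above:

```cpp
#include <cstdio>
#include <cmath>

// Minimal sketch: internal render resolution = target resolution * per-axis scale.
struct Resolution { int width; int height; };

Resolution renderResolution(Resolution target, float perAxisScale) {
    return { static_cast<int>(std::round(target.width  * perAxisScale)),
             static_cast<int>(std::round(target.height * perAxisScale)) };
}

int main() {
    const Resolution target = { 1920, 1080 };
    const struct { const char* name; float scale; } modes[] = {
        { "Quality",           1.0f / 1.5f },  // ~67%
        { "Balanced",          1.0f / 1.7f },  // ~59%
        { "Performance",       1.0f / 2.0f },  //  50%
        { "Ultra Performance", 1.0f / 3.0f },  // ~33%
    };
    for (const auto& m : modes) {
        const Resolution r = renderResolution(target, m.scale);
        std::printf("%-17s %4d x %4d -> %d x %d\n",
                    m.name, r.width, r.height, target.width, target.height);
    }
}
```

Run against 1920 x 1080 it reproduces the list above, and the same math gives you the internal resolutions for a 1440p or 4K target.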
So why do you see less GPU utilization then? There are two things at play here:

1. Upscaling and reconstruction are cheaper than rendering a native frame, but still a tad more expensive than simply rendering at that lower resolution and stopping there (cheaper/more expensive in terms of calculation time spent per frame).

2. Since each frame now needs less GPU work, the CPU has to supply draw calls for more frames per second to keep the GPU fully utilized, which can turn into a CPU bottleneck if the CPU can't keep up with the now less strained GPU (rough numbers in the sketch below).
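To put rough numbers on point 2 (all frame times here are made up for illustration): the delivered frame rate is limited by whichever of the CPU or GPU takes longer per frame, so once upscaling shrinks the GPU's cost, the CPU can become the limit. This ignores CPU/GPU pipelining and only shows the shape of the effect:

```cpp
#include <algorithm>
#include <cstdio>

// Back-of-the-envelope model with made-up frame times: the frame rate is set
// by whichever of CPU or GPU takes longer per frame.
int main() {
    const float cpuMs       = 8.0f;  // hypothetical CPU time per frame (game logic, draw call submission)
    const float gpuNativeMs = 14.0f; // hypothetical GPU time at native 1080p
    const float gpuFsr2Ms   = 6.5f;  // hypothetical GPU time at 67% scale plus upscale/reconstruct cost

    const float nativeFrameMs = std::max(cpuMs, gpuNativeMs); // GPU-bound: 14 ms per frame
    const float fsr2FrameMs   = std::max(cpuMs, gpuFsr2Ms);   // CPU-bound:  8 ms per frame

    std::printf("Native: %.0f FPS (GPU-bound)\n", 1000.0f / nativeFrameMs);
    std::printf("FSR 2:  %.0f FPS (CPU-bound, GPU only ~%.0f%% busy)\n",
                1000.0f / fsr2FrameMs, 100.0f * gpuFsr2Ms / fsr2FrameMs);
}
```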
Bonus: if you lock your framerate and the target FPS is reached with low GPU utilization but no stutter, your GPU can easily handle what's being thrown at it and doesn't need to go the extra mile to keep up.
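And the same simplified model for the framerate-lock case (numbers again made up): with a cap, the GPU only has to be busy for part of each frame interval, so the utilization readout is low even though everything runs smoothly:

```cpp
#include <cstdio>

// With a frame cap, utilization is roughly GPU busy time divided by the
// capped frame interval. All numbers are made up for illustration.
int main() {
    const float capFps  = 60.0f;
    const float frameMs = 1000.0f / capFps; // ~16.7 ms budget per frame
    const float gpuMs   = 6.5f;             // hypothetical GPU work per upscaled frame

    std::printf("GPU utilization at %.0f FPS cap: ~%.0f%%\n",
                capFps, 100.0f * gpuMs / frameMs); // ~39%: low usage, no stutter
}
```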
Alright, long story short: FSR 2.x renders each frame at a lower internal resolution and reconstructs it back up to your target resolution, so the GPU has less work to do per frame; lower GPU utilization just means the GPU is no longer the limiting factor (the CPU or a framerate cap is).