"Adjustable depth" is just their marketing lingo for "3D strength". The more you push the disparity, the more you increase the depth level.. but then there is another factor, convergence: whether the 3D pops outside of the screen, or stays behind it. Owl3D controls 3D strength and has a great autoconvergence to enhance the 3D pop and it isn't limited to 1080p.. not that it matters, because right now, it's faster to create a 3D video at 1080p and upscale it to 2160p than it is to try and convert a 2160p video natively.
EDIT: the more you push 3D strength, the more artifacts you will introduce. That can't be helped: when the software calculates what the left eye and right eye should each see, it obviously can't "see" behind objects (those are the unfilled holes in the sketch above), so it has to use various techniques to synthesize data that isn't there. The hope is that someone will come up with a solid AI tool for 3D segmentation. This is a great article and a possible solution:
https://arxiv.org/html/2409.08270v1
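For the hole-filling side specifically, a classical baseline is to mark the disoccluded pixels as a mask and inpaint them. The sketch below uses OpenCV's cv2.inpaint as one example of those "various techniques"; it is not what Owl3D or the linked paper actually does:

```python
import cv2
import numpy as np

def fill_holes(view):
    """view: HxWx3 uint8 warped eye image where holes stayed pure black."""
    # Keying on black is fragile (real black pixels match too); in
    # practice you'd track the hole mask during warping instead.
    mask = (view.sum(axis=2) == 0).astype(np.uint8)
    # Telea inpainting diffuses surrounding colors into the holes.
    # AI inpainting/segmentation (as in the paper above) aims to
    # hallucinate more plausible content instead.
    return cv2.inpaint(view, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
```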
Bloodborne2025, I use the same software I use for the 3D, since they're all 60fps and include a 2160p version. I use SVFI (available on Steam) to go from 24fps to 60fps, and I still use Topaz Video AI to upscale from 1080p to 2160p.
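For anyone who wants a rough command-line analogue of that same two-stage order (interpolate first, then upscale), ffmpeg's built-in filters can stand in. Again, this is not SVFI or Topaz: minterpolate and Lanczos scaling are classical methods and the ML tools will look much better, but the pipeline shape is the same:

```python
import subprocess

def interpolate_then_upscale(src: str, dst: str) -> None:
    """24fps 1080p in, 60fps 2160p out, audio copied untouched."""
    subprocess.run([
        "ffmpeg", "-i", src,
        "-vf",
        # Stage 1: motion-compensated interpolation to 60fps
        # (classical stand-in for SVFI's ML frame interpolation).
        "minterpolate=fps=60:mi_mode=mci,"
        # Stage 2: Lanczos upscale to 2160p
        # (classical stand-in for Topaz Video AI's ML upscaler).
        "scale=3840:2160:flags=lanczos",
        "-c:a", "copy",
        dst,
    ], check=True)
```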