I've looked into the FOVE C++ example. It uses the gaze convergence ray to check for intersections with the scene, which can be very costly for complex scenes, so it isn't quite what I expected from FOVE. All I want are 2D "points" on the lenses that the eyes are looking at. I simulated that process as follows: I shoot a ray in head coordinates (GetGazeConvergence provides one) at a plane through the point (0, 0, 1) with normal (0, 0, -1) (left-handed coordinate system), and use the intersection point's X and Y to mark where I'm gazing. Here's the source code:
const Fove::SFVR_GazeConvergenceData convergence = headset->GetGazeConvergence();

// Plane through (0, 0, 1) facing the viewer (left-handed coordinates)
const Fove::SFVR_Vec3 point(0.0f, 0.0f, 1.0f);
const Fove::SFVR_Vec3 normal(0.0f, 0.0f, -1.0f);
const Plane plane = PlaneFromPointAndNormal(point, normal);

Fove::SFVR_Vec3 intersectionPoint;
float dist = 0.0f;
IntersectionRayPlane(convergence.ray.origin, convergence.ray.direction, plane, intersectionPoint, dist);
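For context, here is a minimal, self-contained sketch of what the ray/plane intersection step could look like. PlaneFromPointAndNormal and IntersectionRayPlane above are my own helpers; this version uses a hypothetical stand-in Vec3 struct instead of Fove::SFVR_Vec3, but the math is the same:

#include <cassert>
#include <cmath>

// Stand-in for Fove::SFVR_Vec3 (assumption, not the SDK type)
struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Intersect a ray (origin o, direction d) with the plane through `point`
// with normal `n`. Returns false when the ray is parallel to the plane
// or the plane lies behind the ray origin.
bool IntersectRayPlane(const Vec3& o, const Vec3& d,
                       const Vec3& point, const Vec3& n,
                       Vec3& hit, float& t) {
    const float denom = Dot(n, d);
    if (std::fabs(denom) < 1e-6f)
        return false; // ray is (nearly) parallel to the plane
    const Vec3 toPlane{point.x - o.x, point.y - o.y, point.z - o.z};
    t = Dot(n, toPlane) / denom;
    if (t < 0.0f)
        return false; // intersection is behind the ray origin
    hit = Vec3{o.x + d.x * t, o.y + d.y * t, o.z + d.z * t};
    return true;
}

With the gaze ray starting at the head origin and looking straight ahead, the hit lands on the plane at z = 1 and hit.x/hit.y are the 2D gaze coordinates I use.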
This code seems to work reasonably well, but I wonder whether there are other, more robust ways to read the data I need?