FOVE SUPPORT CENTER

Python library for fove eye tracking

Comments

15 comments

  • Official comment
    Jeff

    SDK v0.16.1 is available on our developer site: https://www.getfove.com/developers/

    The only major change is the inclusion of the Python plugin, so it's now fully available to the public. Naturally, if you have comments/questions, keep posting them in this forum (preferably under a new thread).

    We'd like to continually improve the bindings over time.

     

  • Philip Weiss

    I don't believe there's a Python library for FOVE. I could make a crude version, but I would need someone to collaborate with to do it right. Although I don't know if I'll have time to.

  • Cojocari Miroslav

    Hi Philip,

     

    FOVE is awesome for research because of the eye tracking part, but far fewer data analysts/researchers know C++ than Python... Also, there are some nice tools for experimenting/visualization/interaction that are made in Python or that accept a Python connection.

    Maybe someone from staff can help us in the process?

    My C++ knowledge is near "zero", but I will gladly help with python part, testing or general ideas.

     

    Best regards,

    Miroslav

  • Cojocari Miroslav

    Hi guys,

    It seems that using "subprocess" to read the data stream from the C++ sample works reasonably well...

    But it's rather a poor workaround...

     

    Still trying to understand how to wrap functions into a python library...
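    For anyone else trying the same workaround before the official bindings: the idea is to launch the compiled C++ sample as a child process and parse its stdout line by line. This is only a sketch; the executable name (`FoveDataExample.exe`) and the output line format (`gaze x y z`) are assumptions, so adapt them to whatever the sample you built actually prints.

    ```python
    import subprocess
    from typing import Iterator, Optional, Tuple

    def parse_gaze_line(line: str) -> Optional[Tuple[float, float, float]]:
        """Parse a line like 'gaze 0.12 -0.05 0.99' into a vector tuple.

        The line format is an assumption; adjust to the sample's real output."""
        parts = line.split()
        if len(parts) == 4 and parts[0] == "gaze":
            try:
                return (float(parts[1]), float(parts[2]), float(parts[3]))
            except ValueError:
                return None
        return None

    def stream_gaze(sample_exe: str = "FoveDataExample.exe") -> Iterator[Tuple[float, float, float]]:
        """Launch the C++ sample and yield parsed gaze vectors from its stdout."""
        proc = subprocess.Popen([sample_exe], stdout=subprocess.PIPE, text=True)
        try:
            for line in proc.stdout:
                vec = parse_gaze_line(line)
                if vec is not None:
                    yield vec
        finally:
            proc.kill()
    ```

    As noted above, it's a poor workaround (you pay process-spawn and text-parsing overhead), but it works until proper bindings exist.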

  • Jeff

    Hi guys,

    I don't normally like to announce things in advance, but I thought a heads up here might be helpful. We're working on a python library for FOVE and are nearly done with it. Barring any major issues, it should be out, at least in beta form, with our next release. I'll report to this thread once it's out.

  • Cojocari Miroslav

    Thanks, Jeff,

    This is great news!

     

    I'll wait for the new release, then. A beta is already much better than no library.

  • Or R.

    Wonderful news! 

    This will really help our lab too.

    Thank you.

  • Jeff

    We have a beta ready of the python bindings, and could use a little external testing before public release.

    If you'd be interested in testing them out for us, can you send an email to support@getfove.com with something like "Python Beta" in the subject so we get your email and can send it to you privately?

  • Cojocari Miroslav

    Hi Jeff,

     

    Any news about the beta?

    Email sent.

     

    Best regards,

    Miroslav

  • Jeff

    The beta has been sent out to everyone who emailed! If you didn't receive it, I probably miscopied an email address, just ping me in that case.

    Per the notes I sent with the link, feel free to discuss or ask questions about the API here in this thread, and share code, etc. Just don't send the link out as the build is not properly versioned.

    I don't have a python example available yet, but you can view the documentation in the build itself and share code here. In particular, FOVE doesn't have a lot of in-house python experience (most of us are more on the C/C++ side) so your feedback or code examples are quite helpful.

    Perhaps we can build a python example repo together similar to the C++ examples repo.

     

  • Cojocari Miroslav

    Hi Jeff,

    Thanks for the python part!

    It's pretty cool, and the "data" part is quite easy to understand (the C++ example was really helpful). I hope to have a basic, structured sample using this part in the next few days.

    On the other side, the "frame feeding" part seems a little bit more complicated :). As I understand, the actual frame is sent to the FOVE to be displayed with compositor.Submit(). But where can I find more info about the structure and the type of 'submitInfo'? 

    thanks!

    Best regards,

    Miroslav

  • Jeff

    Glad to hear that you've got the data stuff working.

    As far as the submitInfo for the compositor, since there's no high level API for the compositor, it's just a wrapper for the C struct which is documented here:

    https://s3-ap-northeast-1.amazonaws.com/archives-fove/developer/SDK_Docs_0.16/C/html/struct_fove___compositor_layer_submit_info.html

    Each frame requires a pose, and a texture for the left/right sides (it can be the same texture), along with uv boundaries. If your left/right textures are the same buffer, usually you would use the uv coordinates to specify each half. Otherwise you would usually give zero-to-one uvs and render each side to separate buffers.

    The texture pointer is to an object of one of the CompositorTexture classes, Fove_DX11Texture or Fove_GLTexture (GL is still beta, DirectX is recommended if possible), which in turn have more pointers or IDs depending on the rendering API.

    We plan to make a high-level Pythonic wrapper for the compositor API eventually as well, though I didn't want to block the initial release on that, so you'll have to bear with some C-ish Python there for now.
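    The uv-boundary convention described above can be sketched in plain Python. The `TextureBounds` dataclass below only mirrors the left/top/right/bottom fields of the documented C struct; the actual binding's type and field names may differ, so treat this as an illustration of the shared-vs-separate-buffer logic rather than the real API.

    ```python
    from dataclasses import dataclass

    @dataclass
    class TextureBounds:
        """Illustrative stand-in for the C struct's uv bounds (0..1 texture space)."""
        left: float
        top: float
        right: float
        bottom: float

    def eye_bounds(shared_texture: bool):
        """Return (left_eye, right_eye) uv bounds.

        With one shared buffer, each eye takes one horizontal half of the
        texture; with separate buffers, each eye spans the full 0-to-1 range."""
        if shared_texture:
            return (TextureBounds(0.0, 0.0, 0.5, 1.0),
                    TextureBounds(0.5, 0.0, 1.0, 1.0))
        full = TextureBounds(0.0, 0.0, 1.0, 1.0)
        return (full, full)
    ```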

  • Sidney Leal

    Hi there,

    Very nice job with the API, thanks! This will help a lot in research.

    Like Miroslav above, I'm a little lost with the compositor, but I'll keep trying.

     

    For the Gaze Data, this is my first minimalist test code, working nice:

    from fove.headset import Headset, ClientCapabilities

    with Headset(ClientCapabilities.Gaze + ClientCapabilities.Position) as fove_headset:
        print("Versions:", fove_headset.getSoftwareVersions())
        print("IOD:", float(fove_headset.getIOD()))
        print("Calibrated:", fove_headset.isEyeTrackingCalibrated())

        eyes_closed = fove_headset.checkEyesClosed()
        print("Eyes closed:", eyes_closed)

        left_gaze, right_gaze = fove_headset.getGazeVectors()
        print("Gaze data - Timestamp:", left_gaze.timestamp)
        print(" - Left Eye", "X:", left_gaze.vector.x, "Y:", left_gaze.vector.y, "Z:", left_gaze.vector.z)
        print(" - Right Eye", "X:", right_gaze.vector.x, "Y:", right_gaze.vector.y, "Z:", right_gaze.vector.z)

     

    And my output:

    Versions: <Versions: client: 0.16.0, runtime: 0.15.0, protocol: 18, min_firmware: 40, max_firmware: 51, too_old_headset: 0>
    IOD: 0.06300000101327896
    Calibrated: True
    Eyes closed: Eye.Neither
    Gaze data - Timestamp: 1547555265221000
    - Left Eye X: 0.0890180692076683 Y: -0.05430266261100769 Z: 0.9945486187934875
    - Right Eye X: 0.0867355465888977 Y: -0.042516157031059265 Z: 0.9953237175941467
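    A small follow-up helper that averages the two per-eye vectors from output like the above into a single combined gaze ray, a common first step in analysis. Plain 3-tuples stand in for the binding's vector type here, which is an assumption; in practice you would pass `(v.x, v.y, v.z)` from each `vector`.

    ```python
    import math

    def combined_gaze(left, right):
        """Average the left/right gaze vectors and renormalize to unit length."""
        avg = tuple((l + r) / 2.0 for l, r in zip(left, right))
        norm = math.sqrt(sum(c * c for c in avg))
        if norm == 0.0:
            return (0.0, 0.0, 1.0)  # degenerate case: fall back to straight ahead
        return tuple(c / norm for c in avg)

    # Using the sample output values above:
    left = (0.0890180692076683, -0.05430266261100769, 0.9945486187934875)
    right = (0.0867355465888977, -0.042516157031059265, 0.9953237175941467)
    print(combined_gaze(left, right))
    ```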


  • Cojocari Miroslav

    Hi,

    What is the License for the python library?

    I have some basic code in a GitLab project and can just make it public and share it here. That way I can update the code there without copy-pasting it here. But I think I need to add the license there first.

     

    Still working on understanding the compositor part :)

     

    Best regards,

    Miroslav

  • Jeff

    License is the same as the rest of the SDK: BSD 3-clause (in the LICENSE file)

    Let me know if you have further questions or problems with the compositor bit. Since they wrap the same C API, your python code should have the same general pattern shown in the C++ examples: https://github.com/FoveHMD/FoveCppSample/blob/master/DirectX11Example.cpp

    The general process is:

    1. Create a headset object
    2. Create a compositor object from the headset
    3. Create a compositor layer object (this will fail if the compositor is not running or not yet connected over IPC, so use isReady to wait for the right time)
    4. Create your own render texture with DX11 using the resolution returned from creating the layer
    5. waitForRenderPose() to sync frame rate with the HMD screen
    6. draw to your render texture
    7. submit() with a pointer to your render texture (ID3D11Texture2D), then loop back to #5
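    Steps 5-7 above form the per-frame loop, which can be sketched as a small driver function. The method names (`waitForRenderPose`, `submit`) follow the C API names given in the steps; the exact names and signatures in the Python bindings may differ, so treat them as placeholders. The stub objects only demonstrate the call pattern without hardware.

    ```python
    def render_loop(compositor, layer, render_frame, frames=None):
        """Drive steps 5-7: wait for the pose, draw, submit, repeat.

        `compositor` and `layer` are whatever objects steps 1-3 produced;
        `render_frame` is your own drawing callback returning a texture."""
        count = 0
        while frames is None or count < frames:
            pose = compositor.waitForRenderPose()    # 5: sync to the HMD refresh
            texture = render_frame(pose)             # 6: draw to your render texture
            compositor.submit(layer, pose, texture)  # 7: hand the frame over, loop
            count += 1

    # Stubs showing the call pattern (no headset required):
    class StubCompositor:
        def __init__(self):
            self.submitted = []
        def waitForRenderPose(self):
            return "pose"
        def submit(self, layer, pose, texture):
            self.submitted.append((layer, pose, texture))

    stub = StubCompositor()
    render_loop(stub, "layer0", lambda pose: "texture", frames=3)
    print(len(stub.submitted))  # 3 frames submitted
    ```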
