This guide provides detailed instructions on commonly used functions of the Motive API for developing custom applications. For a full list of the functions, refer to the Motive API Function Reference page. For a sample use case of the API functions, please check out the provided marker project.
Library files (64-bit): C:\Program Files\OptiTrack\Motive\lib
Motive installation directory (DLLs): C:\Program Files\OptiTrack\Motive\
Library files (32-bit): C:\Program Files\OptiTrack\Motive\lib32
Header files: C:\Program Files\OptiTrack\Motive\inc\
#include "NPTrackingTools.h"
NPTRACKINGTOOLS_INC: environment variable pointing to the header (inc) directory
NPTRACKINGTOOLS_LIB: environment variable pointing to the library (lib) directory
When using the API, connected devices and the Motive API library need to be properly initialized at the beginning of a program and closed down at the end. The following section covers Motive API functions for initializing and closing down devices.
// Initializing all connected cameras
TT_Initialize();

// Initializing all connected cameras
TT_Initialize();

// Update for newly arrived cameras
TT_Update();

// Closing down all of the connected cameras
TT_Shutdown();
return 0;
TT_LoadProject("project.ttp"); // Loading TTP file
TT_LoadCalibration("CameraCal.cal"); // Loading CAL file
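Putting the initialization, file loading, and shutdown calls together, a minimal sketch of a complete program might look like the following. It assumes these functions report success with NPRESULT_SUCCESS, as TT_Update does in the marker sample further below, and the calibration file name is only a placeholder.

#include <cstdio>
#include "NPTrackingTools.h"

int main()
{
    // Initialize the API and all connected cameras.
    if (TT_Initialize() != NPRESULT_SUCCESS)
    {
        printf("Failed to initialize the Motive API.\n");
        return 1;
    }

    // Load an existing camera calibration file (placeholder file name).
    if (TT_LoadCalibration("CameraCal.cal") != NPRESULT_SUCCESS)
    {
        printf("Failed to load the calibration file.\n");
    }

    // ... update and process frames here (covered in the following sections) ...

    // Close down all of the connected cameras before exiting.
    TT_Shutdown();
    return 0;
}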
Note: Connected cameras are accessible by index numbers. The camera indexes are assigned in the order the cameras are initialized, and most of the API functions for controlling cameras require an index value.

When processing all of the cameras, use the TT_CameraCount function to obtain the total camera count and process each camera within a loop. To point to a specific camera, use the TT_CameraID or TT_CameraName function to identify the camera at a given index. This section covers Motive API functions for checking and configuring camera frame rate, camera video type, camera exposure, pixel brightness threshold, and IR illumination intensity.
TT_CameraVideoType(int cameraIndex)   // Returns Video Type
TT_CameraExposure(int cameraIndex)    // Returns Camera Exposure
TT_CameraThreshold(int cameraIndex)   // Returns Pixel Threshold
TT_CameraIntensity(int cameraIndex)   // Returns IR Illumination Intensity
TT_CameraFrameRate(int cameraIndex)   // Returns Camera Frame Rate
TT_SetCameraSettings(int cameraIndex, int videoType, int exposure, int threshold, int intensity);
TT_SetCameraFrameRate(int cameraIndex, int framerate);
//== Changing IR intensity and frame rate settings for all of the cameras ==//
int cameraCount = TT_CameraCount();
int intensity = 10;
int framerate = 100;

for (int i = 0; i < cameraCount; i++)
{
    // Keep the existing video type, exposure, and threshold; apply the new intensity.
    TT_SetCameraSettings(i, TT_CameraVideoType(i), TT_CameraExposure(i),
                         TT_CameraThreshold(i), intensity);
    TT_SetCameraFrameRate(i, framerate);

    //== Outputting the Settings ==//
    printf("Camera #%d:\n", i);
    printf("\tFPS: %d\n\tIntensity: %d\n\tExposure: %d\n\tThreshold: %d\n\tVideo Type: %d\n",
           TT_CameraFrameRate(i), TT_CameraIntensity(i), TT_CameraExposure(i),
           TT_CameraThreshold(i), TT_CameraVideoType(i));
}
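The example above addresses cameras only by index. As mentioned earlier, TT_CameraID and TT_CameraName can be used to identify a particular camera. The short sketch below simply lists them, assuming TT_CameraID returns an integer ID and TT_CameraName returns a printable string for the camera at the given index.

//== Listing the ID and name of every initialized camera ==//
int cameraCount = TT_CameraCount();
for (int i = 0; i < cameraCount; i++)
{
    printf("Camera index %d: ID %d, Name %s\n", i, TT_CameraID(i), TT_CameraName(i));
}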
See also: Camera Settings, Video Types
There are other camera settings, such as imager gain, that can be configured using the Motive API. Please refer to the Motive API Function Reference page for descriptions of the other functions.
In order to process multiple consecutive frames, you must update the camera frames using one of the following API functions: TT_Update or TT_UpdateSingleFrame. Call one of the two functions repeatedly within a loop to process all of the incoming frames. In the marker sample, the TT_Update function is called within a while loop, and the frameCounter variable is incremented each time a new frame is processed, as shown in the example below.
// marker.cpp sample project
#include <conio.h>             // for _kbhit()
#include "NPTrackingTools.h"

int main()
{
    TT_Initialize();

    int frameCounter = 0;      // Frame counter variable

    while (!_kbhit())
    {
        if (TT_Update() == NPRESULT_SUCCESS)
        {
            // Each time the TT_Update function successfully updates the frame,
            // the frame counter is incremented, and the new frame is processed.
            frameCounter++;

            ////// PROCESS NEW FRAME //////
        }
    }

    TT_Shutdown();
    return 0;
}
TT_Update() // Process all outstanding frames of data.
TT_UpdateSingleFrame() // Process one outstanding frame of data.
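Based on the descriptions above, TT_Update handles all outstanding frames at once, whereas TT_UpdateSingleFrame handles one outstanding frame per call. When each frame needs to be handled individually, the single-frame variant can be polled in the same style of loop; a minimal sketch follows, with the processing step left as a placeholder.

// Process queued frames one at a time, as long as the loop keeps up with the cameras.
while (!_kbhit())
{
    if (TT_UpdateSingleFrame() == NPRESULT_SUCCESS)
    {
        ////// PROCESS NEW FRAME //////
    }
}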
After loading a valid camera calibration, you can use the API functions to track retroreflective markers and obtain their 3D coordinates. The following section demonstrates using the API functions for obtaining the 3D coordinates. Since marker data is obtained for each frame, always call TT_Update, or TT_UpdateSingleFrame, each time a newly captured frame is received.
int totalMarker = TT_FrameMarkerCount();
printf("Frame #%d: (Markers: %d)\n", frameCounter, totalMarker);

//== Use a loop to access every marker in the frame ==//
for (int i = 0; i < totalMarker; i++)
{
    printf("\tMarker #%d:\t(%.2f,\t%.2f,\t%.2f)\n\n", i,
           TT_FrameMarkerX(i), TT_FrameMarkerY(i), TT_FrameMarkerZ(i));
}
For tracking 6 degrees of freedom (DoF) movement of a rigid body, a corresponding rigid body (RB) asset must be defined. An RB asset is created from a set of reflective markers attached to a rigid object, which is assumed to be undeformable. There are two main approaches to obtaining RB assets when using the Motive API: you can either import existing rigid body data, or you can define new rigid bodies using the TT_CreateRigidBody function. Once RB assets are defined in the project, the rigid body tracking functions can be used to obtain 6 DoF tracking data. This section covers sample instructions for tracking rigid bodies using the Motive API.
Read: We strongly recommend reading through the Rigid Body Tracking page for more information on how rigid body assets are defined in Motive.
TT_LoadRigidBodies("rbfile.tra");   // Loading TRA file (replaces the existing rigid body assets)
TT_AddRigidBodies("rbfile.tra");    // Adding rigid bodies from a TRA file to the existing assets
NPRESULT TT_CreateRigidBody(const char* name, int userDataID, int markerCount, float *markerList);
Example: Creating RB Assets
#include <vector>
using std::vector;

int markerCount = TT_FrameMarkerCount();

// Collect the global (world-space) 3D position of every marker in the frame
// using TT_FrameMarkerX, TT_FrameMarkerY, and TT_FrameMarkerZ.
vector<float> markerListRelativeToGlobal;
markerListRelativeToGlobal.reserve(3 * markerCount);
for (int i = 0; i < markerCount; ++i)
{
    markerListRelativeToGlobal.push_back(TT_FrameMarkerX(i));
    markerListRelativeToGlobal.push_back(TT_FrameMarkerY(i));
    markerListRelativeToGlobal.push_back(TT_FrameMarkerZ(i));
}

// Average the marker locations in x, y, and z to find the pivot point.
float sx = 0.0f, sy = 0.0f, sz = 0.0f;
for (int i = 0; i < markerCount; ++i)
{
    sx += markerListRelativeToGlobal[3*i];
    sy += markerListRelativeToGlobal[3*i + 1];
    sz += markerListRelativeToGlobal[3*i + 2];
}
float ax = sx / markerCount;
float ay = sy / markerCount;
float az = sz / markerCount;

// Subtract the pivot point location from each marker location so that the
// marker list is expressed relative to the pivot point.
vector<float> markerListRelativeToPivotPoint;
markerListRelativeToPivotPoint.reserve(3 * markerCount);
for (int i = 0; i < markerCount; ++i)
{
    markerListRelativeToPivotPoint.push_back(markerListRelativeToGlobal[3*i] - ax);
    markerListRelativeToPivotPoint.push_back(markerListRelativeToGlobal[3*i + 1] - ay);
    markerListRelativeToPivotPoint.push_back(markerListRelativeToGlobal[3*i + 2] - az);
}

TT_CreateRigidBody("Rigid Body New", 1, markerCount, markerListRelativeToPivotPoint.data());
void TT_RigidBodyLocation(int rbIndex,                                   //== RigidBody Index
                          float *x, float *y, float *z,                  //== Position
                          float *qx, float *qy, float *qz, float *qw,    //== Quaternion
                          float *yaw, float *pitch, float *roll);        //== Euler
Example: RB Tracking Data
//== Declared variables ==//
float x, y, z;
float qx, qy, qz, qw;
float yaw, pitch, roll;

int rbcount = TT_RigidBodyCount();

for (int i = 0; i < rbcount; i++)
{
    //== Obtaining/Saving the rigid body position and orientation ==//
    TT_RigidBodyLocation(i, &x, &y, &z, &qx, &qy, &qz, &qw, &yaw, &pitch, &roll);

    if (TT_IsRigidBodyTracked(i))
    {
        printf("%s: Pos (%.3f, %.3f, %.3f) Orient (%.1f, %.1f, %.1f)\n",
               TT_RigidBodyName(i), x, y, z, yaw, pitch, roll);
    }
}
NPRESULT TT_RigidBodySettings(int rbIndex, RigidBodySolver::cRigidBodySettings &settings);
NPRESULT TT_SetRigidBodySettings(int rbIndex, RigidBodySolver::cRigidBodySettings &settings);
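These two functions can be combined to read the current settings of a rigid body, adjust them, and apply them back. The sketch below assumes that RigidBodySolver::cRigidBodySettings can be default-constructed, and it deliberately leaves the modification step as a placeholder, since the available members are documented in the Motive API Function Reference rather than here.

//== Reading and re-applying the settings of the first rigid body asset ==//
RigidBodySolver::cRigidBodySettings settings;

if (TT_RigidBodySettings(0, settings) == NPRESULT_SUCCESS)
{
    // ... modify the desired members of 'settings' here ...

    // Write the (possibly modified) settings back to the same rigid body.
    TT_SetRigidBodySettings(0, settings);
}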
Once the API has been successfully initialized, data streaming can be enabled or disabled by calling the TT_StreamNP, TT_StreamTrackd, or TT_StreamVRPN function. The TT_StreamNP function enables or disables data streaming via NatNet. The NatNet SDK is a client/server networking SDK designed for sending and receiving NaturalPoint data across networks, and tracking data from the API can be streamed to client applications on various platforms via the NatNet protocol. Once data streaming is enabled, connect a NatNet client application to the server IP address to start receiving the data.
TT_StreamNP(true); //Enabling NatNet Streaming.
The TT_StreamNP function is equivalent to toggling Broadcast Frame Data in the Data Streaming pane in Motive.
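As a brief sketch of the overall flow, streaming is typically switched on once the API is initialized and switched off again before shutting down. Only TT_StreamNP is shown here; TT_StreamVRPN and TT_StreamTrackd follow the same enable/disable pattern described above (any additional parameters they take are not shown).

// Enable NatNet streaming after the API has been initialized.
TT_StreamNP(true);

// ... update frames in a loop; connected NatNet clients receive the tracking data ...

// Disable streaming and close down the cameras when finished.
TT_StreamNP(false);
TT_Shutdown();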