After the capture volume has been calibrated and all markers have been placed, you are ready for capture. Motive has two modes: Live mode and Edit mode, which can be selected in the Timeline Pane. Data recording is done in Live mode, and Edit mode is used for playback and post-processing of recorded capture data. In Live mode, all cameras are active; Motive continuously reconstructs reflections detected in the volume, and the real-time capture data can be either recorded or live-streamed to another pipeline. Here we will cover concepts and tips that are important for the recording pipeline, such as required setup steps, captured data types, and marker types. For more information on live-streaming, read through the Data Streaming page.
Tip: Cameras with indicator rings illuminate blue in Live mode, green when recording, and turn off in Edit mode.
Motive saves data into a Take file (TAK extension), which can be replayed in Edit mode later. Before recording Takes, create a project in Motive.
Project files (TTP) contain the camera calibration along with the Take files related to the project. Session folders are used to group Take files. Plan ahead and create a list of captures in a text file or a spreadsheet; you can then create empty Takes by copying and pasting the list into the Project Pane. Doing so assigns names to the Takes and helps you organize the capture session.
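The shot list described above can be generated with a short script. The scene names and numbering scheme below are hypothetical examples; the output is plain text that could be pasted into the Project Pane to create empty Takes.

```python
# Sketch: generate a capture shot list to paste into Motive's Project Pane.
# The scene names and takes-per-scene count are hypothetical examples.
scenes = ["Walk", "Run", "Jump"]
takes_per_scene = 3

def build_take_list(scenes, takes_per_scene):
    """Return one Take name per line, e.g. 'Walk_01' through 'Jump_03'."""
    lines = []
    for scene in scenes:
        for n in range(1, takes_per_scene + 1):
            lines.append(f"{scene}_{n:02d}")
    return "\n".join(lines)

print(build_take_list(scenes, takes_per_scene))
```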
In Live mode, the Timeline Pane shows controls for recording. The Take name can be assigned in the name box or directly from the Project Pane. The Again+ button adds an incrementing numerical suffix to the Take name for captures that need to be taken again. You can also simply start recording Takes and let Motive generate new Take names on the fly. Select the data types (2D/3D/JT) you wish to capture and start recording. To start the capture, select Live mode and press the red record button. Recording progress is indicated in the white box, where the record time and frame count are shown in Hours:Minutes:Seconds:Frames format.
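The Hours:Minutes:Seconds:Frames display can be illustrated with a small conversion from an absolute frame count; the 120 FPS capture rate below is just an example value, not a default.

```python
def frames_to_timecode(total_frames, fps):
    """Convert an absolute frame count to an Hours:Minutes:Seconds:Frames string."""
    seconds, frames = divmod(total_frames, fps)
    minutes, seconds = divmod(seconds, 60)
    hours, minutes = divmod(minutes, 60)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

# A hypothetical 120 FPS capture that has run for 7,325 frames:
print(frames_to_timecode(7325, 120))  # → 00:01:01:05
```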
Tip: For Skeleton tracking, always start and end the capture with a T-pose or A-pose, so that the skeleton tracking can be established in Edit mode as well.
For more details on each data type, refer to the Data Types page.
Review: Reconstruction is the process of calculating 3D coordinates by triangulating the multiple 2D marker positions obtained by each camera in the system. Thus, 2D data is used to obtain 3D data.
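The principle behind triangulation can be sketched as follows: each camera's 2D centroid defines a ray in 3D space, and the marker position is estimated where the rays (nearly) intersect. Motive's Point Cloud reconstruction engine is far more sophisticated; this midpoint-of-two-rays example only illustrates the idea, and the camera positions and ray directions are made-up values.

```python
# Minimal sketch: estimate a 3D point as the midpoint of the shortest segment
# between two rays (one per camera). This is NOT Motive's algorithm, only an
# illustration of triangulating 2D observations into a 3D position.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def triangulate(p1, d1, p2, d2):
    """Closest-approach midpoint of rays p1 + t*d1 and p2 + s*d2."""
    w0 = [a - b for a, b in zip(p1, p2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b          # approaches 0 when rays are parallel
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = [p + t * u for p, u in zip(p1, d1)]  # closest point on ray 1
    q2 = [p + s * u for p, u in zip(p2, d2)]  # closest point on ray 2
    return [(x + y) / 2 for x, y in zip(q1, q2)]

# Two cameras at x = -1 and x = +1, both sighting a marker at (0, 0, 5):
print(triangulate([-1, 0, 0], [1, 0, 5], [1, 0, 0], [-1, 0, 5]))  # → [0.0, 0.0, 5.0]
```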
There are different video types, or image-processing modes, that can be used when capturing with OptiTrack cameras. The available modes vary slightly between camera models, and each mode processes captured frames differently at both the camera hardware and software levels. Depending on the video type, the tracking precision and the required amount of CPU resources will vary. The video types can be categorized into tracking modes (Object mode, Precision mode, Segment mode) and reference modes (MJPEG and raw grayscale). Only frames from cameras using one of the tracking modes contribute to the reconstruction of 3D data. To change video types, right-click one of the camera views in the 2D camera preview pane and select from the available image-processing modes under Video Types. When recording, only frames of the configured video types are recorded into the 2D data of the Take.
(Tracking Mode) Object mode performs on-camera detection of the centroid location, size, and roundness of the markers, and the resulting 2D object metrics are sent to the host PC. In general, this mode is recommended for obtaining 3D data. Compared to the other processing modes, Object mode has the smallest CPU footprint, so the lowest processing latency can be achieved while maintaining high accuracy. Be aware, however, that the 2D reflections are truncated into object metrics in this mode. Object mode is beneficial for Prime series and Flex 13 cameras when the lowest latency is necessary or when CPU performance is taxed by Precision Grayscale mode (e.g. high camera counts on a less powerful CPU).
Supported Camera Models: Prime series, Flex 13, and S250e camera models.
(Tracking Mode) Segment mode performs on-camera detection of thresholded pixels, and the locations of these pixels within the captured image are delivered to the host PC. The 2D object metrics (centroid location, size, and roundness) are then computed on the CPU. Segment mode divides processing between the camera hardware and the CPU, providing a balance between precision and processing load. Segment mode images also provide information on the shape of the detected pixels, which can be used to better visualize the reflection. This mode is recommended for Flex 3 cameras and the Tracking Bars when CPU performance is taxed by Precision Grayscale mode (e.g. high camera counts on a less powerful CPU).
Supported Camera Models: All camera models.
(Tracking Mode) Precision mode performs on-camera detection of centroids. The centroid regions of interest are sent to the PC for additional processing to determine the final centroid location. This provides very high-quality centroid locations but is computationally expensive, and it is only recommended for 3D tracking on low to moderate camera-count systems when Object mode is unavailable.
Supported Camera Models: Flex series, Tracking Bars, S250e, Slim13e, and Prime 13 series camera models.
(Reference Mode) MJPEG grayscale mode captures grayscale frames that are compressed on-camera, providing scalable reference video capabilities. Grayscale images are used only for reference purposes, and the processed frames do not contribute to the reconstruction of 3D data. MJPEG mode can run at full frame rate and is synchronized with the tracking cameras.
(Reference Mode) Raw grayscale mode processes full-resolution, uncompressed grayscale images. Grayscale images are used only for reference purposes, and the processed frames do not contribute to the reconstruction of 3D data. Because of the high bandwidth required to send raw grayscale frames, cameras in this mode are not fully synchronized with the tracking cameras and will run at a lower frame rate.
Reference mode cameras produce a much larger amount of data than cameras in the tracking modes, such as Precision mode or Object mode. For this reason, only one or two cameras should be set to a reference mode to prevent frame-drop issues.
Cameras can also be set to record grayscale reference videos during capture. These videos are synchronized with the other captured frames and are used to observe what happened during the recorded capture. To record a reference video, drag the camera you wish to use into the Reference group in the Cameras Pane. Check the 2D camera view to make sure the selected reference camera is capturing grayscale video.
Compared to the object images taken by non-reference cameras in the system, grayscale videos are much larger in data size, and recording reference video consumes more network bandwidth. High data traffic can increase system latency or reduce the system frame rate. For this reason, we recommend setting no more than one or two cameras to a reference mode. Also, recording compressed MJPEG grayscale video instead of raw grayscale video reduces the data traffic. Reference views can be observed from the Camera View or Reference View pane, which overlays captured assets on the video; this is very useful for analysis.
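The bandwidth difference between raw and MJPEG grayscale can be estimated with back-of-the-envelope arithmetic. The resolution, frame rate, and compression ratio below are illustrative assumptions, not specifications of any particular camera model.

```python
# Rough bandwidth estimate for 8-bit grayscale reference video.
# 1280x1024 @ 120 FPS and a ~15:1 MJPEG ratio are assumed example values.
def grayscale_bandwidth_mb_s(width, height, fps, compression_ratio=1):
    """Approximate bandwidth in MB/s for 8-bit (1 byte/pixel) grayscale video."""
    bytes_per_sec = width * height * fps / compression_ratio
    return bytes_per_sec / 1_000_000

raw = grayscale_bandwidth_mb_s(1280, 1024, 120)        # uncompressed frames
mjpeg = grayscale_bandwidth_mb_s(1280, 1024, 120, 15)  # assumed ~15:1 MJPEG
print(f"raw: {raw:.0f} MB/s, MJPEG: {mjpeg:.0f} MB/s")  # → raw: 157 MB/s, MJPEG: 10 MB/s
```

The order-of-magnitude gap is why a single raw grayscale camera can saturate the network while several MJPEG cameras cannot.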
Tip: Latency can be monitored from the status bar located at the bottom.
Note: Grayscale images are used only for reference purposes, and the processed frames will not contribute to the reconstruction of 3D data.
Throughout a capture, you may notice that different types of markers appear in the perspective view. To interpret the tracking data correctly, it is important to understand the differences between these marker types. There are three displayed marker types: markers, rigid body markers, and bone (or skeleton) markers.
Marker data, labeled or unlabeled, consists of the 3D marker positions reconstructed from the 2D images of each camera. These markers do not reflect rigid body or skeleton solver calculations; they locate the actual marker position solely through the Point Cloud reconstruction engine. These markers are represented as solid spheres in the viewport. By default, labeled markers are colored white and unlabeled markers are colored orange, as shown in the chart. Marker colors can be changed from the Application Settings.
Rigid body markers and bone markers are expected marker positions. They appear as transparent spheres within a rigid body or a skeleton, and they reflect the positions where the rigid body or skeleton solver expects to find the corresponding reconstructed markers. Calculating these positions assumes that each marker is fixed on a rigid segment that does not deform over the course of the capture. When the rigid body or skeleton solver is correctly tracking the reconstructed markers, the marker reconstructions and the expected marker positions will have similar values and will closely align in the viewport.
When rigid bodies are created, their associated markers appear as a network of lines between the markers. Skeleton marker expected positions are located next to the body segments, or bones (see Figure 2). If the marker placement is distorted during capture, the actual marker position will deviate from the expected position, and the marker may eventually become unlabeled. Figure 1 shows how actual and expected marker positions can align with or deviate from each other. Due to the nature of marker-based mocap systems, labeling errors may occur during capture, so understanding each marker type in Motive is very important for correct interpretation of the data.
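The alignment check described above amounts to measuring the distance between a reconstructed marker and the solver's expected position. The positions and the 2 cm tolerance below are hypothetical illustrations, not values used by Motive.

```python
import math

# Sketch: compare a reconstructed (actual) marker position against the
# solver's expected position. All values here are hypothetical examples.
def marker_deviation(actual, expected):
    """Euclidean distance between actual and expected 3D positions (meters)."""
    return math.dist(actual, expected)

actual = (0.412, 1.003, 0.250)    # reconstructed marker (example values)
expected = (0.410, 1.000, 0.251)  # solver's expected position (example values)

dev = marker_deviation(actual, expected)
if dev > 0.02:  # hypothetical 2 cm tolerance
    print(f"Marker deviates by {dev * 1000:.1f} mm; it may become unlabeled.")
else:
    print(f"Marker aligns within {dev * 1000:.1f} mm of the expected position.")
```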
Back: Create Assets
Next: Data Types