Quick Start Guide: Getting Started
Welcome to the Quick Start Guide!
This guide provides a quick walk-through of installing and using OptiTrack motion capture systems. To help you get familiar with the mocap system, key concepts and instructions are summarized in each section of this page to kick-start your capture experience. Note that OptiTrack motion capture systems and Motive offer features far beyond the ones listed in this guide; using additional features, the system can be further optimized to fit your own capture applications. For more detailed information on each workflow, read through the corresponding workflow pages in this wiki: hardware setup and software setup.
Preparing the Capture Area
For best tracking results, you need to prepare and clean up the capture environment before setting up the system. Remove unnecessary obstacles that could block the camera views. Cover open windows and minimize incoming sunlight when capturing indoors. Avoid installing a system over reflective flooring, since LED illumination from the cameras will reflect off of it. If avoiding reflective flooring is not an option, use rubber mats to cover the reflective area. Items with reflective surfaces or illuminating features should be removed or covered with non-reflective materials in order to avoid extraneous reflections.
Key Checkpoints for a Good Capture Area
- Minimize ambient lights, especially sunlight and other infrared light sources.
- Clean capture volume. Remove unnecessary obstacles within the area.
- Tape over, or cover, any remaining reflective objects in the area.
See Also: Hardware Setup workflow pages.
Cabling and Load Balancing
Ethernet Camera System
Ethernet Camera Models: Prime series and Slim 13E cameras. Follow the wiring diagram below and connect each of the required system components.
- Connect the PoE Switch(es) to the Host PC: Start by connecting a PoE switch to the host PC via an Ethernet cable. Since the camera system takes up a large amount of data bandwidth, the Ethernet camera network traffic must be separated from the office/local area network. If the computer used for capture is connected to an existing network, use a second Ethernet port or an add-on network card to connect the computer to the camera network. When you do, make sure to turn off your computer's firewall for that particular network under the Windows Firewall settings.
- Uplink Switch: For systems with higher camera counts using multiple PoE switches, use an uplink Ethernet switch to aggregate all of the switches and connect them to the host PC. Never daisy-chain multiple PoE switches in series, because doing so can introduce additional latency into the system.
- High Camera Counts: For setups with more than 24 Prime series cameras, we recommend using a 10 Gigabit uplink switch and connecting it to the host PC via an Ethernet cable that supports 10 Gigabit transfer rates, such as Cat6a or Cat7. This provides larger data bandwidth and reduces data transfer latency.
- Connect the Ethernet Cameras to the PoE Switch(es): Ethernet cameras connect to the host PC via PoE/PoE+ switches using Cat6, or above, Ethernet cables.
- Power the Switches: Each camera is powered by the PoE switches. The PoE and PoE+ switches must support full power (15.4 W and 30 W, respectively) on every port simultaneously.
- Ethernet Cables: Ethernet cable connection is subject to the limitations of the PoE (Power over Ethernet) and Ethernet communications standards, meaning that the distance between camera and switch can go up to about 100 meters when using Cat 6 cables (Ethernet cable type Cat5e or below is not supported). For best performance, do not connect devices other than the capture computer to the camera network. Add-on network cards should be installed if additional Ethernet ports are required.
- External Sync: If you wish to connect external devices, use the eSync synchronization hub. Connect the eSync into one of the PoE switches using an Ethernet cable.
See Also: Cabling and Wiring page.
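Because the camera network is kept separate mostly for bandwidth reasons, a quick back-of-envelope check can confirm whether a planned system fits its uplink. The sketch below is illustrative only: the per-camera data rate and the headroom fraction are assumptions for the example, not OptiTrack specifications.

```python
# Back-of-envelope bandwidth check for an Ethernet camera network.
# The per-camera data rate and headroom fraction below are illustrative
# assumptions, NOT OptiTrack specifications.

def aggregate_bandwidth_gbps(num_cameras, mbps_per_camera):
    """Total camera traffic in Gbps for a given per-camera rate in Mbps."""
    return num_cameras * mbps_per_camera / 1000.0

def uplink_ok(num_cameras, mbps_per_camera, uplink_gbps, headroom=0.75):
    """True if traffic stays within a safety fraction of uplink capacity."""
    return aggregate_bandwidth_gbps(num_cameras, mbps_per_camera) <= uplink_gbps * headroom

# Example: 24 cameras at a hypothetical 30 Mbps each on a 1 Gbit uplink
# consume 0.72 Gbps, just inside a 75% safety budget.
print(uplink_ok(24, 30, 1.0))
```

Swap in the actual data rate of your camera model from OptiTrack's published specifications before trusting the result.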
USB Camera System
USB Camera Models: Flex series and Slim 3U camera models. Follow the wiring diagram below to connect each of the required system components.
- USB Cables: Keep USB cable length restrictions in mind: each USB 2.0 cable must not exceed 5 meters in length.
- Connect the OptiHub(s) to the Host PC: Use USB 2.0 cables (type A/B) to connect each OptiHub to the host PC. To optimize available bandwidth, evenly split the OptiHub connections between different USB adapters on the host PC. For large system setups, up to two 5-meter active USB extensions can be used for connecting an OptiHub, providing a total length of 15 meters.
- Power the OptiHub(s): Use the provided power adapters to connect each OptiHub to an external power source. All USB cameras will be powered by the OptiHub(s).
- Connect the Cameras to the OptiHub(s): Use USB 2.0 cables (type B/mini-B) to connect each USB camera to an OptiHub. When using multiple OptiHubs, evenly distribute the camera connections among the OptiHubs in order to balance the processing load. Note that USB extensions are not supported when connecting a camera to an OptiHub.
- Multiple OptiHubs: Up to four OptiHubs (24 USB cameras) can be used in one system. When setting up multiple OptiHubs, all OptiHubs must be connected, or cascaded, in a series chain with RCA synchronization cables. More specifically, the Hub Sync Out port of one OptiHub needs to be connected to the Hub Sync In port of another OptiHub, as shown in the diagram.
- External Sync: When integrating external devices, use the External Sync In/Out ports that are available on each OptiHub.
See Also: Cabling and Wiring page.
Placing and Aiming Cameras
Optical motion capture systems use multiple 2D images from each camera to compute, or reconstruct, corresponding 3D coordinates. For best tracking results, cameras must be placed so that each captures a unique vantage point of the target capture area. Place the cameras around the perimeter of the capture volume, as shown in the example below, so that markers in the volume are visible to at least two cameras at all times. Mount the cameras securely onto stable structures (e.g. a truss system) so that they do not move throughout the capture. When using tripods or camera stands, ensure that they are placed in stable positions. After placing the cameras, aim them so that their views overlap most in the region where most of the capture will take place. Any camera movement after calibration will require re-calibrating the system. Use cable strain relief at the camera end of camera cables to prevent potential damage to the camera.
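The two-camera visibility rule can be made concrete with a simplified geometric check. The sketch below models each camera as a cone of view; real cameras have rectangular frusta and markers can be occluded by bodies or props, so treat this as a rough planning aid, not a coverage guarantee.

```python
import math

def sees(cam_pos, cam_dir, fov_deg, point):
    """True if 'point' lies inside the camera's cone of view.

    cam_dir must be a unit vector; fov_deg is the full field-of-view angle.
    A simplified cone model -- real cameras have rectangular frusta, and
    occlusion by bodies or props is ignored.
    """
    v = [p - c for p, c in zip(point, cam_pos)]
    dist = math.sqrt(sum(x * x for x in v))
    if dist == 0.0:
        return True
    cos_angle = sum(a * b for a, b in zip(cam_dir, v)) / dist
    return cos_angle >= math.cos(math.radians(fov_deg / 2.0))

def covered(cameras, point, min_views=2):
    """A marker is reconstructible only if at least min_views cameras see it."""
    return sum(sees(pos, aim, fov, point) for pos, aim, fov in cameras) >= min_views

# Two cameras on opposite sides of the volume, both aimed at the origin.
cams = [((-3.0, 0.0, 0.0), (1.0, 0.0, 0.0), 60.0),
        ((3.0, 0.0, 0.0), (-1.0, 0.0, 0.0), 60.0)]
print(covered(cams, (0.0, 0.0, 0.0)))
```

Sampling such a check over a grid of points gives a quick sense of where views overlap before any hardware is mounted.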
Host PC Requirements
In order to properly run a motion capture system using Motive, the host PC must satisfy the minimum system requirements. The required minimum specifications vary depending on the size of the mocap system and the types of cameras used. Consult our Sales Engineers, or use the Build Your Own feature on our website, to find out the host PC specification requirements.
Motive is a software platform designed to control motion capture systems for various tracking applications. Motive not only allows the user to calibrate and configure the system, but also provides interfaces for both capturing and processing 3D data. The captured data can be recorded or live-streamed into other pipelines. If you are new to Motive, we recommend reading through the Motive Basics page to learn the basic navigation controls in Motive.
Download and Install
To install Motive, simply download the Motive software installer for your operating system from the Motive Download Page, then run the installer and follow its prompts.
Note: Anti-virus software can interfere with Motive's ability to communicate with cameras or other devices, and it may need to be disabled or configured to allow the device communication to properly run the system.
- Insert the hardware license key into the computer.
- Launch Motive.
- Activate your software using the License Tool, which can be accessed from the Motive splash screen. You will need to input the License Serial Number and the Hash Code for your license.
- After activation, the License Tool will place the license file associated with the hardware key in the License folder.
Note: Duo/Trio Tracking Bars come with an embedded Motive: Tracker license. Once the device is recognized by the computer, you will be able to run Motive without going through the activation process.
When you first launch Motive, you will see the Quick Start panel, the Cameras (Devices) pane, and the Project pane stacked in the left column, with the Perspective View and the Camera Preview at the center of the UI, as shown in the image above. The initial layout may differ slightly for systems with different camera models or software licenses. The following chart briefly explains the main purpose of some of the key panels.
See Also: List of UI pages from the Documentation Reference Guide page.
| UI Name | Description | Related Page |
| --- | --- | --- |
| Quick Start panel | The Quick Start panel provides quick access to typical initial actions when using Motive. Each option will quickly lead you to the layouts and actions for the corresponding selection. If you do not wish to see this panel again, you can uncheck the box at the bottom. This panel can be re-accessed under the Help tab. | N/A |
| Cameras pane | Connected cameras are listed under the Cameras pane. This panel is where you configure settings (FPS, exposure, LED, etc.) for each camera and decide whether to use selected cameras for 3D tracking or reference videos. Only cameras set to tracking mode contribute to reconstructing 3D coordinates; cameras in reference mode capture grayscale images for reference purposes only. The Cameras pane can be accessed under the View tab in Motive or by clicking its icon on the main toolbar. | Cameras pane |
| Project pane | The Project pane is the primary interface for managing capture files in Motive. This pane contains lists of capture files, trackable objects, and their corresponding properties. When you first set up the capture, start by creating a new project under the Files tab. The Project pane can be accessed under the View tab in Motive or by clicking its icon on the main toolbar. | Project pane |
| Perspective View pane | The Perspective View pane is where 3D data is displayed in Motive. Here, you can view, analyze, and select reconstructed 3D coordinates within a calibrated capture volume. This panel can be used both in live capture and in recorded data playback. You can also select multiple markers and define rigid body and skeleton assets. If desired, additional view panes can be opened under the View tab or by clicking icons on the main toolbar. | Perspective View pane |
| Camera Preview pane | The Camera Preview pane shows 2D views of the cameras in a system. Here you can monitor each camera view and apply mask filters. This pane is also used to examine the 2D objects (circular reflections) that are captured, or filtered, in order to see which reflections are processed and reconstructed into 3D coordinates. If desired, additional view panes can be opened under the View tab or by clicking icons on the main toolbar. | Camera Preview pane |
| Calibration pane | The Calibration pane is used in the camera calibration process. In order to compute 3D coordinates from captured 2D images, the camera system needs to be calibrated first. All tools necessary for calibration are included in the Calibration pane, which can be accessed under the View tab or by clicking its icon on the main toolbar. | Calibration pane and Calibration |
| Reconstruction pane | The Reconstruction pane sets the parameters for the reconstruction engine. Reconstruction is the process of obtaining 3D coordinates, and from the Reconstruction pane the reconstruction settings and bounds, as well as other related features, can be adjusted to optimize the acquisition of 3D marker coordinates. The Reconstruction pane can be accessed under the View tab in Motive or by clicking its icon on the main toolbar. | Reconstruction and Data Types |
| Timeline pane | The Timeline pane is where you initiate recording (Live mode) or play back recorded data (Edit mode). In Live mode, you can use the Timeline pane to start recording, assign a filename for the capture, and set the data types you wish to record. In Edit mode, you can use the Timeline Editor to move forward and backward within the recorded capture, examine selected trajectories, select specific frame ranges, and delete or modify trajectories. The Timeline pane can be accessed under the View tab or by clicking its icon on the main toolbar. | Timeline pane |
Set Up a Project
Start your capture by first creating a new project. To set up a new project, select New Project under the Files tab in Motive, browse to the folder in which you wish to set up the project, and save the TTP project file. The location of the project file (TTP) establishes a root folder where all subsequent capture data is stored. Each capture is saved as a Take (TAK) file, and recorded Take files are grouped into session folders created within the project directory where the TTP file is located. Each project is managed from the Project pane, and all associated session folders and corresponding Take files can be loaded at once by opening the project (TTP) file.
See Also: Motive Basics page.
In order to track 3D points, all cameras must first be calibrated. During the calibration process, Motive computes the position and orientation of each camera, as well as the amount of lens distortion in the captured images. Using the calibration data, Motive constructs a 3D capture volume, and motion tracking is accomplished within this volume. All of the calibration tools can be found in the Calibration pane. The following tutorial video and instructions provide details on how to perform camera calibration in Motive. Read through the Calibration page to learn more about camera calibration and what other tools are available for a better workflow.
See Also: Calibration page.
- Ensure that the volume is free of unwanted objects and all light interference has been physically masked or covered.
- Open the Calibration pane or use the calibration layout (CTRL + 1).
- Clear any existing masking by clicking the corresponding button in the Camera Preview pane.
- Mask the remaining extraneous reflections using Motive. Click Block Visible from the Calibration pane, or use the icon in the Camera Preview pane, to apply software masking that automatically blocks any remaining light sources or reflections in the volume. When masks are applied properly, all of the extraneous reflections (white) in the 2D Camera Preview pane will be covered with red pixels.
- Prepare a calibration wand.
- From the Calibration pane, click Start Wanding to begin.
- Bring the wand into the capture volume and wave it throughout the volume, allowing the cameras to collect wanding samples.
- When the system indicates that enough samples have been collected, click on the Calculate button to begin the calculation.
- When the Ready to Apply button becomes enabled, click Apply Result.
- A calibration results window will be displayed. After examining the wanding result, click Apply to apply the calibration.
Setting the Ground Plane
- Now that all of the cameras have been calibrated, you need to define the ground plane of the capture volume.
- Place a calibration square inside the capture volume. Position the square so that the vertex marker is placed directly over the desired global origin.
- Orient the calibration square so that the longer arm points toward the desired +Z axis and the shorter arm points toward the desired +X axis of the volume. Motive uses a y-up right-handed coordinate system.
- Level the calibration square parallel to the ground plane.
- (Optional) In the 3D view in Motive, select the calibration square markers. If retro-reflective markers on the calibration square are the only reconstructions within the capture volume, Motive will automatically detect the markers.
- Access the Ground Plane tab in the Calibration pane.
- While the calibration square markers are selected, click Set Ground Plane from the Ground Plane Calibration Square section.
- Motive will prompt you to save the calibration file. Save the file to the corresponding project folder.
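Motive's y-up right-handed convention often differs from downstream tools, many of which are z-up. A minimal sketch of one common axis remap is shown below; this particular mapping is an assumption for illustration, not an official OptiTrack transform, so verify the convention of your target application before using it.

```python
def y_up_to_z_up(p):
    """Convert a point from a y-up right-handed frame (Motive's convention)
    to a z-up right-handed frame: x stays right, the old -z becomes the
    new +y, and the old +y becomes the new +z.

    NOTE: this specific remap is one common convention chosen for
    illustration, not an official OptiTrack transform.
    """
    x, y, z = p
    return (x, -z, y)
```

The mapping's matrix has determinant +1, so handedness is preserved, which is the property to check when adapting it to another tool's axes.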
Place retro-reflective markers onto the subjects (rigid body or skeleton) that you wish to track. Double-check that the markers are attached securely. For skeleton tracking, open the Skeleton pane and choose the marker set you wish to use. Follow the skeleton avatar diagram for placing the markers. If you are using a mocap suit, make sure the suit fits as tightly as possible. Motive derives the position of each body segment from the related markers you place on the suit; accordingly, it is important to prevent the markers from shifting as much as possible. Sample marker placements are shown below.
Define Skeletons and Rigid Bodies
To define a rigid body, simply select three or more markers in the Perspective View, right-click, and select Rigid Body → Create Rigid Body From Selected. You can also use the CTRL+T hotkey to create rigid body assets.
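Under the hood, tracking a rigid body amounts to fitting one rotation and translation to its marker set each frame. Motive's actual solver is not public; the classic Kabsch/SVD fit sketched below (function name hypothetical) merely illustrates the idea of recovering a 6-DoF pose from three or more labeled markers.

```python
import numpy as np

def rigid_body_pose(reference, observed):
    """Least-squares fit of R, t such that observed ~= reference @ R.T + t.

    Classic Kabsch/SVD alignment over N >= 3 non-collinear markers.
    'reference' and 'observed' are (N, 3) arrays of matching positions.
    """
    ref_c = reference.mean(axis=0)
    obs_c = observed.mean(axis=0)
    # Cross-covariance of the centered marker clouds.
    H = (reference - ref_c).T @ (observed - obs_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = obs_c - R @ ref_c
    return R, t
```

With noisy marker data the same fit returns the least-squares pose, which is why well-spread, non-collinear marker placement improves rigid body stability.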
To define a skeleton, have the actor enter the volume with markers attached at the appropriate locations. Under the dropdown menu in the Skeleton pane, select the marker set you wish to use, and a corresponding model with the desired marker locations will be displayed. After verifying that the marker locations on the actor correspond to those in the Skeleton pane, instruct the actor to strike the calibration pose. The most common calibration pose is the T-pose, which requires a proper standing posture with the back straight and the head looking directly forward, with both arms stretched out to the sides, forming a "T" shape. While the actor is in the T-pose, select all of the markers of the desired skeleton in the 3D view and click the Create button in the Skeleton pane. In some cases, you may not need to select the markers if only the desired actor is in view.
Once the volume is calibrated and skeletons are defined, you are ready to capture. In the Timeline pane, press the dimmed red record button, or simply press the spacebar while in Live mode, to begin capturing. The button will illuminate in bright red to indicate that recording is in progress. You can stop recording by clicking the record button again, and a corresponding capture file (TAK extension), also known as a capture Take, will be saved within your project. Once a Take has been saved, you can play back captures, reconstruct, edit, and export your data in a variety of formats for additional analysis or use with most 3D software.
When tracking skeletons, it is beneficial to start and end the capture with a T-pose. This allows you to recreate the skeleton in post-processing when needed.
See Also: Data Recording page.
After capturing a Take, recorded 3D data and its trajectories can be post-processed using the data editing tools, which can be found in the Edit Tools pane. Data editing tools provide post-processing features such as deleting unreliable trajectories, smoothing selected trajectories, and interpolating missing (occluded) marker positions. Post-editing the 3D data can improve the quality of the tracking data.
General Editing Steps
- Skim through the overall frames in a Take to get an idea of which frames and markers need to be cleaned up.
- Refer to the Labeling pane and inspect the gap percentages for each marker.
- Select a marker that is often occluded or misplaced.
- Look through the Editor in the Timeline pane, and inspect the gaps in the trajectory.
- For each gap, look for an unlabeled marker at the expected location near the solved marker position. Re-assign the proper marker label if the unlabeled marker exists.
- Use the Trim Tails feature to trim both ends of the trajectory at each gap. This trims off a few frames adjacent to the gap where tracking errors might exist, preparing occluded trajectories for gap filling.
- Find the gaps to be filled, and use the Fill Gaps feature to model estimated trajectories for the occluded markers.
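The gap-filling step above can be illustrated with a toy version. The sketch below does simple linear interpolation on a single coordinate; Motive's Fill Gaps tool offers more sophisticated interpolation models, so treat this only as a conceptual stand-in.

```python
def fill_gaps_linear(track):
    """Linearly interpolate interior gaps (None values) in a 1-D trajectory.

    A conceptual stand-in for Motive's Fill Gaps tool, which offers richer
    interpolation models; leading and trailing gaps are left untouched
    because there is no data on both sides to interpolate between.
    """
    filled = list(track)
    known = [i for i, v in enumerate(filled) if v is not None]
    for a, b in zip(known, known[1:]):
        for i in range(a + 1, b):
            frac = (i - a) / (b - a)
            filled[i] = filled[a] + frac * (filled[b] - filled[a])
    return filled
```

In practice each marker has three such tracks (x, y, z), and trimming the gap tails first, as described above, keeps erroneous frames from anchoring the interpolation.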
In Motive, captured markers are reconstructed into 3D coordinates. The reconstructed markers need to be labeled for Motive to distinguish different reconstructions within a capture. Trajectories of labeled reconstructions can be exported individually or solved altogether to track the movements of the target subjects. Markers associated with rigid bodies and skeletons are labeled automatically through the auto-labeling process. Note that rigid body and skeleton markers can be auto-labeled both in Live mode (before capture) and in Edit mode (after capture). Individual markers can also be labeled, but each marker needs to be manually labeled in post-processing using MarkerSet assets and the Labeling pane. These manual labeling tools can also be used to correct labeling errors. Read through the Labeling page for more details on assigning and editing marker labels.
- Auto-label: Automatically label sets of rigid body markers and skeleton markers using the corresponding asset definitions.
- Manual Label: Label individual markers manually using the Labeling pane, assigning labels defined in the MarkerSet, rigid body, or skeleton assets.
See Also: Labeling page.
Motive exports reconstructed 3D tracking data in various file formats, and exported files can be imported into other pipelines to further utilize capture data. Supported formats include CSV and C3D for Motive: Tracker and, additionally, FBX, BVH, and TRC for Motive: Body. To export tracking data, select a Take to export and open the export dialog window, which can be accessed from File → Export Tracking Data or by right-clicking a Take → Export Tracking Data in the Project pane. Multiple Takes can be selected and exported from the Project pane or by using the Motive Batch Processor. From the export dialog window, the frame rate, measurement scale, and frame range of the exported data can be configured. Frame ranges can also be specified by selecting a frame range in the Timeline pane before exporting a file. In the export dialog window, corresponding export options are available for each file format.
See Also: Data Export page.
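As a starting point for post-export analysis, the snippet below parses a deliberately simplified CSV with one marker. The column layout here is hypothetical; real Motive CSV exports begin with several metadata header rows (format version, frame rate, marker names), so adapt the reader to your actual file.

```python
import csv
import io

# Hypothetical, simplified export layout: frame, time, then X/Y/Z columns
# per marker. Real Motive CSV exports carry additional metadata header
# rows -- inspect your own file before parsing.
sample = """frame,time,marker1_x,marker1_y,marker1_z
0,0.000,0.10,1.00,0.20
1,0.008,0.11,1.01,0.21
"""

def read_positions(text):
    """Return a list of (frame, (x, y, z)) tuples from the CSV text."""
    rows = []
    for row in csv.DictReader(io.StringIO(text)):
        pos = tuple(float(row["marker1_" + axis]) for axis in "xyz")
        rows.append((int(row["frame"]), pos))
    return rows
```

From tuples like these you can compute per-frame displacements, speeds, or feed the data into your own analysis pipeline.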
Motive offers multiple options for streaming tracking data to external applications in real time. Tracking data can be streamed in both Live mode and Edit mode. Streaming plugins are available for Autodesk MotionBuilder, Visual3D, Unreal Engine 4, 3ds Max, Maya (VCS), VRPN, and trackd, and they can be downloaded from the OptiTrack website. For other streaming options, the NatNet SDK enables users to build custom client and server applications to stream capture data. Common motion capture applications rely on real-time tracking, and the OptiTrack system is designed to deliver data at extremely low latency even when streaming to third-party pipelines. Detailed instructions on specific streaming protocols are included in the PDF documentation that ships with the respective plugins or SDKs.
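NatNet data is delivered over UDP, with the official client libraries handling packet parsing. As a rough illustration of just the transport layer a client sits on, the sketch below receives raw datagrams on localhost; the function names are hypothetical, and none of the actual NatNet packet format is decoded here.

```python
import socket

def open_receiver(host="127.0.0.1", port=0):
    """Bind a UDP socket for incoming tracking datagrams.

    Only the transport layer beneath a NatNet-style client; decoding real
    NatNet frames requires the packet format documented in the NatNet SDK.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    sock.settimeout(1.0)
    return sock

def next_datagram(sock, bufsize=65535):
    """Block (up to the timeout) for one datagram and return its payload."""
    data, _addr = sock.recvfrom(bufsize)
    return data
```

A real client would hand each payload to the depacketization code shipped with the NatNet SDK rather than interpret the bytes itself.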