
11 Dec 2010. This page is under construction...


This is a brief summary of the progress of the Enactor MATLAB toolbox, which can be used to ‘enact’ an experiment in a virtual environment mapped onto an actual experimental motion capture space. By connecting to an online data stream via MatRiver functions, functions using Enactor objects can be used to control 3-D Mobile Brain/Body Imaging (MoBI) experiments.

The Enactor toolbox is object oriented. It is composed of a number of classes and many low-level atomic MATLAB functions called from class methods. Enactor classes internally use MATLAB ‘Simulink 3-D’ (formerly Matlab VRML) nodes and wrap a hierarchy of them (Transform, Shape, Material, …) inside their transformNode field. In addition to holding MATLAB virtual reality nodes (objects), these classes provide functionality based on their underlying geometry that can be used to support interactive MoBI experiments.

Figure 1. Class structure of the Enactor toolbox.

Figure 1 above shows the currently implemented class structure of the toolbox. Arrows indicate parent-child relationships. For each class, its properties (values) are listed below the class title. At the bottom of each class icon, its methods (functions) are shown. For example, since the vrBox class is a child of vrPrism, it inherits its fields (prism and isFilled) and its distanceTo() method.
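As a hedged sketch of this inheritance relationship (field and method names taken from Figure 1; the actual class definitions in the toolbox may differ, and the distance computation here is only a placeholder):

<pre>
% File vrPrism.m -- sketch of the parent class (names from Figure 1).
classdef vrPrism
    properties
        prism       % underlying prism geometry (assumed representation)
        isFilled    % whether the interior counts as inside the object
    end
    methods
        function d = distanceTo(obj, points)
            % Placeholder: one distance value per row of an N-by-3
            % matrix of points; the real method uses the prism geometry.
            d = zeros(size(points, 1), 1);
        end
    end
end
</pre>

<pre>
% File vrBox.m -- child class; inherits prism, isFilled, and
% distanceTo() from vrPrism without redeclaring them.
classdef vrBox < vrPrism
end
</pre>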

The vrObjectCollection (‘virtual object collection’) class holds an array of vrObject instances and calculates the minimum distance from a point (or an array of points) to each of them using its distanceTo() function. vrObjectCollection may be used to define a composite object in the virtual environment. For example, an instance of this class can form a connected maze composed of many vrVerticalWall objects.
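A hypothetical usage sketch: only the distanceTo() method is documented above, so the constructor form and the way objects enter the collection are assumptions for illustration.

<pre>
% Assumed: collect wall objects into a composite maze object.
maze = vrObjectCollection({wall1, wall2, wall3});

% Minimum distance from each tracked marker position (rows of an
% N-by-3 matrix) to each object in the collection.
markerPositions = [0 0 0; 1 2 0.5];
d = maze.distanceTo(markerPositions);
</pre>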

vrBox and vrSphere objects can be calibrated to represent the location and geometry of actual physical objects. An instance of vrBox can represent an actual box, for example the LCD screen or table surface at which a subject is sitting. Similarly, an instance of vrSphere can represent the location and extent of an actual ball, or else a virtual ‘hotspot’ around a point in the experimental space. To achieve this, both classes have a create_from_points() method that, given a series of calibration point locations (recorded in sequence during calibration), constructs a corresponding virtual object location and shape. Figure 2 illustrates the calibration of these objects to points in the motion capture space.
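A hypothetical calling sketch: the method name create_from_points() comes from the text above, but the constructor and the input format (one calibration point per row) are assumptions.

<pre>
% Calibrate a virtual sphere to a physical ball: record two opposite
% points on the ball's surface (rows of a 2-by-3 matrix, in meters),
% then build the object from them.
points = [0.10 0.25 1.00; 0.30 0.25 1.00];
ball = vrSphere();                       % assumed default constructor
ball = ball.create_from_points(points);  % assumed input format
</pre>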

Figure 2. Calibration points for instances of vrSphere and vrBox.

To calibrate the position of a vrSphere, two diametrically opposite points on the sphere are used. Similarly, vrBox uses five points: four clockwise on one face and the fifth on an opposite corner, as shown schematically in Figure 3.
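The sphere case reduces to simple geometry: the center is the midpoint of the two calibration points, and the radius is half the distance between them. A minimal sketch in plain MATLAB, independent of the toolbox:

<pre>
% Two diametrically opposite points recorded on the ball's surface
% (e.g. from the motion capture system), in meters.
p1 = [0.10, 0.25, 1.00];
p2 = [0.30, 0.25, 1.00];

center = (p1 + p2) / 2;       % midpoint of the diameter
radius = norm(p2 - p1) / 2;   % half the point-to-point distance

% center = [0.20 0.25 1.00], radius = 0.10
</pre>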

Figure 3. Order of calibration points for vrBox.

Function Reference