WO2019215472A2 - Passive marker systems and methods for motion tracking - Google Patents


Info

Publication number
WO2019215472A2
Authority
WO
WIPO (PCT)
Prior art keywords
pixels
image
pixel
chord
light
Prior art date
Application number
PCT/IB2018/001638
Other languages
French (fr)
Other versions
WO2019215472A3 (en)
Inventor
Chanchai Poonpol
Shinhaeng Lee
Original Assignee
Olympus Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Olympus Corporation filed Critical Olympus Corporation
Priority to PCT/IB2018/001638
Publication of WO2019215472A2
Publication of WO2019215472A3

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/248 Analysis of motion using feature-based methods involving reference images or patches
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/66 Analysis of geometric attributes of image moments or centre of gravity
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10048 Infrared image
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20036 Morphological image processing
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30204 Marker

Abstract

A passive marker for motion tracking using an optical camera system includes an outer surface that reflects light from a light source in a direction toward the light source; and a plurality of nonreflective spots on the outer surface of the passive marker.

Description

PASSIVE MARKER SYSTEMS AND METHODS FOR MOTION TRACKING
Technical Field
[0001] The disclosed technology relates generally to motion tracking, and more particularly, some embodiments relate to markers for motion tracking, systems and methods for detecting marker centers, and systems and methods for determining a distance to a marker.
Description of the Related Art
[0002] Conventional motion capture systems enjoy widespread usage in a number of applications including applications for video game development, motion picture filming, animation and others. These systems use various techniques to capture the actual movements of an actor, animal or other moving object so that they can be reproduced, for example, in an animated form. An early form of motion capture used to create animation is known as rotoscoping. Rotoscoping involves artists tracing over video footage, frame by frame, to reproduce the movement of an actor or actors captured by the footage. A more contemporary form of motion capture involves the use of multiple light sources attached to the body of the moving object and a camera or cameras to detect movement of these markers. When the object moves, the markers move with the object, and this motion is captured by the cameras. For example, consider a scenario in which the objective is to capture the motion of an actress dancing on a stage. The markers can be attached to the head, torso and limbs of the actress such that they move as the actress moves. The movement of the markers can be captured by the camera or cameras and recorded. The markers may be implemented as any of a variety of implements that can be detected by a sensor such as, for example, light sources, acoustic markers, inertial markers, magnetic markers and so on.
[0003] Light sources can include passive sources such as, for example, reflectors that reflect light toward the camera. Passive sources are often implemented as retroreflectors to reflect light that is generated near the camera lens. To determine the position of a reflective light source for motion tracking purposes, the centers of the markers are determined as position estimates within the 2-dimensional captured image. Light sources can also include active light sources such as, for example, LEDs or other light sources. Conventional solutions provide a sufficient number of markers to achieve a desired spatial resolution of the system, and desired temporal resolution is achieved by assuring a sufficiently fast scanning rate.
[0004] In these motion tracking operations, infrared cameras with infrared light sources have gained increased popularity. Multiple IR cameras can be used to monitor multiple passive retroreflector markers because it is generally easy to identify the markers on a captured image using thresholding image processing. Multiple cameras can be used to triangulate to determine three-dimensional marker positions.
[0005] Infrared ranging products are also known and can be used to measure the distance to objects based on a time-of-flight measurement. Infrared signals are transmitted toward an object, reflected back, and the time of flight measured to determine the distance to the object. However, using such systems to range to reflective markers has proven difficult because the depth information is not valid for infrared light reflected by these reflective markers.
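The time-of-flight principle described above can be sketched in a few lines of Python: the measured round-trip time of an infrared pulse is halved and scaled by the speed of light. This is an illustrative sketch only; the function name and example timing value are assumptions, not taken from the disclosure.

```python
# Illustrative time-of-flight distance estimate: an IR pulse travels to the
# object and back, so the one-way distance is (speed of light) * (time) / 2.

SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def tof_distance_m(round_trip_time_s: float) -> float:
    """Estimate the distance to an object from a round-trip time of flight."""
    return SPEED_OF_LIGHT_M_PER_S * round_trip_time_s / 2.0

# A 20 ns round trip corresponds to roughly 3 meters.
print(tof_distance_m(20e-9))  # -> 2.99792458
```

As the paragraph above notes, this calculation assumes the depth reading is valid; returns from retroreflective markers can saturate or distort the measurement, which motivates the nonreflective spots introduced later.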
Brief Summary of Embodiments
[0006] Embodiments of the technology disclosed herein are directed toward devices and methods for motion tracking. In one embodiment, a passive marker for motion tracking using an optical camera system includes an outer surface that reflects light from a light source in a direction toward the light source; and a plurality of nonreflective spots on the outer surface of the passive marker. In some embodiments, each of the plurality of nonreflective spots may be completely surrounded by reflective material such that the nonreflective spots comprise discrete spots of non-reflectivity.
[0007] The reflective surface may comprise a reflective paint or other coating and the nonreflective spots may be disposed on the reflective paint or other coating. In other embodiments, the reflective surface may include a reflective coating applied to a non-reflective surface in a pattern that yields the plurality of non-reflective spots on the outer surface of the reflective marker. In various embodiments, the nonreflective spots may include at least one of circles, rectangles, triangles, and stars.
[0008] In a further embodiment, a process for determining the center of an image object of an image is provided. The image may include light and dark pixels and the image object may represent a marker in a motion tracking system. The process may include: 1) for a line of pixels in the image, examine a pixel in the line of pixels in the image; 2) determine if the examined pixel in the line of pixels is a light pixel or a dark pixel, wherein: (a) if the examined pixel is a dark pixel, examine a next pixel in the line of pixels and repeat step 2); and (b) if the examined pixel is a light pixel: (i) increment a length value of a chord of the object defined by light pixels on the line of pixels; (ii) examine a next pixel in the line of pixels in the image; and (iii) if the next pixel in the line of pixels is a light pixel, repeat operations 2)(i) - 2)(iii), but if the next pixel in the line of pixels is a dark pixel, proceed to operation 3); and 3) determine a final length of the chord of the object for the line of pixels and compare the final length of the chord to a previously determined maximum chord length for a previously scanned line or lines of pixels, wherein: (a) if the determined final length of the chord is greater than the maximum chord length for any previously scanned line of pixels, set the final length of the chord as the maximum chord length and return to step 1) for a next line of pixels in the image; and (b) if the determined final length of the chord is less than the maximum chord length for any previously scanned line of pixels, compute the center of the object as the center pixel of the maximum-chord-length chord.
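The chord-scanning process above can be sketched in Python under some simplifying assumptions: a single object per image, at most one chord of light pixels per line, and a convex (circular) object so that chord lengths only decrease past the widest line. The function and variable names are illustrative, not part of the disclosure.

```python
# Sketch of the chord-scanning center detection described above. Each row is
# scanned for a contiguous run of light (1) pixels on a dark (0) background;
# the center is taken as the center pixel of the longest chord found.

def find_object_center(image):
    """Return (x, y) of the center pixel of the longest horizontal chord.

    Scanning stops at the first row whose chord is shorter than the running
    maximum, since for a convex object chords shrink past the widest row.
    """
    max_len = 0
    center = None
    for y, row in enumerate(image):
        length = 0
        start = None
        for x, pixel in enumerate(row):
            if pixel:                      # light pixel: extend the chord
                if start is None:
                    start = x
                length += 1
        if length > max_len:               # longest chord so far
            max_len = length
            center = (start + max_len // 2, y)
        elif max_len and length < max_len: # chord shrinking: center found
            break
    return center

image = [
    [0, 0, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 1, 1, 1, 0],
    [0, 0, 1, 0, 0],
]
print(find_object_center(image))  # -> (2, 2)
```

Note the early exit: once a row's chord is shorter than the maximum seen so far, the widest row has already been passed, so the remaining rows of the object need not be scanned.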
[0009] The process may further include determining a distance from a center point of the chord to an edge of the object in a direction perpendicular to the chord. The process may further include receiving a grayscale image of the object and converting the grayscale image to the image comprising light and dark pixels.
[0010] The image having light and dark pixels may be a binary image and the process may further include performing a thresholding operation to convert the grayscale image of the object into a binary image comprising light and dark pixels. The image may also include a plurality of image objects.
[0011] In still further embodiments, a system for motion capture includes: a plurality of light sources; a plurality of cameras, each camera including an image sensor to receive light reflected from a light source to the image sensor from a reflective marker positioned on a subject; and a motion capture circuit that includes a non-transitory machine-readable storage medium storing executable program instructions which, when executed, cause the motion capture circuit to perform a process of determining the center of an image object of an image from an image sensor, the image comprising light and dark pixels and the image object representing the reflective marker. The process may include: 1) for a line of pixels in the image, examine a pixel in the line of pixels in the image; 2) determine if the examined pixel in the line of pixels is a light pixel or a dark pixel, wherein: (a) if the examined pixel is a dark pixel, examine a next pixel in the line of pixels and repeat step 2); and (b) if the examined pixel is a light pixel: (i) increment a length value of a chord of the object defined by light pixels on the line of pixels; (ii) examine a next pixel in the line of pixels in the image; and (iii) if the next pixel in the line of pixels is a light pixel, repeat operations 2)(i) - 2)(iii), but if the next pixel in the line of pixels is a dark pixel, proceed to operation 3); and 3) determine a final length of the chord of the object for the line of pixels and compare the final length of the chord to a previously determined maximum chord length for a previously scanned line or lines of pixels, wherein: (a) if the determined final length of the chord is greater than the maximum chord length for any previously scanned line of pixels, set the final length of the chord as the maximum chord length and return to step 1) for a next line of pixels in the image; and (b) if the determined final length of the chord is less than the maximum chord length for any previously scanned line of pixels, compute the center of the object as the center pixel of the maximum-chord-length chord. The process may further include determining a distance from a center point of the chords to an edge of the object in a direction perpendicular to the chord.
[0012] The process may also include receiving a grayscale image of the object and converting the grayscale image to the image comprising light and dark pixels.
[0013] The image having light and dark pixels may be a binary image and the process may further include performing a thresholding operation to convert the grayscale image of the object into a binary image comprising light and dark pixels. The image may include a plurality of image objects, each representing a respective marker.
[0014] In yet another embodiment, a computer program product embodied in a non-transitory computer-readable medium may be provided and may include instructions that, when executed, cause one or more processors to perform operations to determine the center of an image object of an image from an image sensor, the image comprising light and dark pixels and the image object representing the reflective marker, the process operations including: 1) for a line of pixels in the image, examine a pixel in the line of pixels in the image; 2) determine if the examined pixel in the line of pixels is a light pixel or a dark pixel, wherein: (a) if the examined pixel is a dark pixel, examine a next pixel in the line of pixels and repeat step 2); and (b) if the examined pixel is a light pixel: (i) increment a length value of a chord of the object defined by light pixels on the line of pixels; (ii) examine a next pixel in the line of pixels in the image; and (iii) if the next pixel in the line of pixels is a light pixel, repeat operations 2)(i) - 2)(iii), but if the next pixel in the line of pixels is a dark pixel, proceed to operation 3); and 3) determine a final length of the chord of the object for the line of pixels and compare the final length of the chord to a previously determined maximum chord length for a previously scanned line or lines of pixels, wherein: (a) if the determined final length of the chord is greater than the maximum chord length for any previously scanned line of pixels, set the final length of the chord as the maximum chord length and return to step 1) for a next line of pixels in the image; and (b) if the determined final length of the chord is less than the maximum chord length for any previously scanned line of pixels, compute the center of the object as the center pixel of the maximum-chord-length chord.
The process may also include determining a distance from a center point of the chords to an edge of the object in a direction perpendicular to the chord.
[0015] The process may include receiving a grayscale image of the object and converting the grayscale image to the image having light and dark pixels.
[0016] The image including light and dark pixels may be a binary image and the process further comprises performing a thresholding operation to convert the grayscale image of the object into a binary image including light and dark pixels. The image may include a plurality of image objects representing a plurality of markers.
[0017] Other features and aspects of the disclosed technology will become apparent from the following detailed description, taken in conjunction with the accompanying drawings, which illustrate, by way of example, the features in accordance with embodiments of the disclosed technology. The summary is not intended to limit the scope of any inventions described herein, which are defined solely by the claims attached hereto.
Brief Description of the Drawings
[0018] The technology disclosed herein, in accordance with one or more various embodiments, is described in detail with reference to the following figures. The drawings are provided for purposes of illustration only and merely depict typical or example embodiments of the disclosed technology. These drawings are provided to facilitate the reader's understanding of the disclosed technology and shall not be considered limiting of the breadth, scope, or applicability thereof. It should be noted that for clarity and ease of illustration these drawings are not necessarily made to scale.
[0019] Figure 1 illustrates an example of a subject (e.g., a dancer, actress, animal, etc.) to be tracked, including a plurality of markers, in accordance with one embodiment.
[0020] Figure 2 illustrates an example system of an image capture system with which embodiments of the systems and methods disclosed herein may be implemented.
[0021] Figure 3 is a diagram illustrating an example process for marker detection and determining the center of the markers in accordance with one embodiment.
[0022] Figure 4 is a flow diagram illustrating an example process for detecting the center of a circle in accordance with the embodiment illustrated in Figure 3.
[0023] Figure 5 is a diagram illustrating an example of a marker that includes one or more non-reflective areas on the surface of the marker in accordance with one embodiment of the systems and methods described herein.
[0024] Figure 6 illustrates an example system for image processing using a hybrid marker in accordance with one embodiment of the systems and methods described herein.
[0025] Figure 7 illustrates an example process for image processing using a hybrid marker in accordance with one embodiment of the systems and methods described herein.
[0026] Figure 8 is a diagram illustrating two examples of a binary image having three light areas representing detected markers.
[0027] Figure 9 is a diagram illustrating an example of pixel scanning to detect a center point in accordance with one embodiment of the systems and methods described herein.
[0028] Figure 10 is a flow diagram illustrating an example process for center detection in accordance with the example of Figure 9.
[0029] Figure 11 is a diagram illustrating an example of calculating the distance between the image sensor and the center of the marker based on a distance measurement made to a point off center on the marker.
[0030] Figure 12 is a diagram illustrating an example computing system that can be used as one way to implement circuitry described herein.
[0031] The figures are not intended to be exhaustive or to limit the invention to the precise form disclosed. It should be understood that the invention can be practiced with modification and alteration, and that the disclosed technology be limited only by the claims and the equivalents thereof.
Detailed Description of the Embodiments
[0032] Embodiments of the technology disclosed herein are directed toward systems and methods for motion tracking. In some embodiments, a marker for motion tracking is provided that includes a reflective surface as well as areas of little or no reflectivity on the surface. These nonreflective spots on the surface can be used, for example, to aid in determining a distance to the marker. Further embodiments include systems and methods for determining a center point of a marker using binary image data and calculating a distance to a marker based on a distance measured at a point that is off center on the marker.
[0033] Embodiments of the technology disclosed herein can be utilized with any of a number of different capture systems utilizing markers. Such systems can be used for object detection, motion tracking and other purposes. Motion tracking applications may include, for example, animation, robotics, object tracking, video gaming, biomechanics, virtual reality, and so on.
[0034] Figures 1 and 2 illustrate examples of systems with which embodiments disclosed herein may be implemented. Figure 1 illustrates an example of a subject 140 (e.g., a dancer, actress, animal, etc.) to be tracked, including a plurality of markers in accordance with one embodiment. In this example, the subject 140 is an actress with a plurality of markers 142 attached to her head, arms, legs, hands, feet and torso. Although twenty-one markers are shown in this example, other applications can include a lesser or greater number of markers depending on the resolution desired or the type of movement to be captured. Indeed, in many applications, a large number of markers may be used to provide data in all directions for capture by multiple cameras and to provide high resolution information for motion capture. Note: although twenty-one markers 142 are illustrated in this example, only three are labeled with a reference number to avoid unnecessary clutter in the drawings. Also, although the markers are illustrated as small circles representing spherical markers, other sizes, shapes or geometries can be provided.
[0035] Figure 2 illustrates an example system of an image capture system with which embodiments of the systems and methods disclosed herein may be implemented. With reference now to Figure 2, this example includes a plurality of subjects 238 (e.g., subjects 140) moving on a stage 240. In some applications, the stage can be an open space with flooring or other ground covering suitable for the activity being performed and lighting that can be adjusted to enhance the motion capture. The flooring can include appropriate markings such as boxes, grids, lines, spots, for example, so that the subject or subjects can determine their position as they perform on the stage. In some examples the stage can include a green screen or other like background to allow video overlay, and further examples may include acoustic absorbing materials to allow synchronous motion capture and audio recording. The stage 240 may also include mats, pads, stunt beams and other props to facilitate desired motion of the subjects 238.
[0036] The example illustrated in Figure 2 also includes a plurality of cameras 231. In some applications, multiple cameras can be positioned around the perimeter of the stage and in various elevations to capture movement of the subjects 238. Cameras 231 can include image sensors 232 to capture movement of the subjects 238. Particularly, image sensors 232 are configured to detect and capture light (e.g., reflections) from markers on the subjects 238. Although not illustrated in the example of Figure 2, cameras 231 can also include light sources located at the camera (e.g., near or surrounding the lens) so that the retroreflective markers will reflect light back to the camera lens. The lighting used to illuminate the passive markers can be of a narrow wavelength so that filters, narrow-band sensors, or other techniques can be used to improve the signal-to-noise ratio of the image capture. For example, some applications may use IR illumination in the wavelength range of 700 nm to 1 mm (e.g., approximately 850 nm wavelength) as the light source. Other examples may use light at other wavelengths or broad-band illumination. Infrared illumination, however, may be desirable as it is not visible to the human eye and therefore will not distract the performers.
[0037] Cameras 231 in this example also include a communication circuit 235 to transmit the captured images to a processing circuit 244. Communication circuit 235 can include a wired or wireless communications interface to provide the data transfer. Although not illustrated, some preprocessing can be included in cameras 231 as well. Processing circuit 244 includes the circuitry (e.g., hardware, software or a combination thereof) to detect the positions of and track the motion of the markers from the series of images (e.g., video stream) captured by cameras 231.
[0038] As noted above, accurate motion tracking with passive sensors typically includes a process step that determines the center of the markers. Figure 3 is a diagram illustrating an example process for marker detection and determining the center of the markers in accordance with one embodiment. Figure 4 is a flow diagram illustrating an example process for detecting the center of a circle in accordance with the embodiment illustrated in Figure 3. Referring now to Figures 3 and 4, in this example the system includes a plurality of passive markers 342. At operation 442, these markers are illuminated with infrared light sources at or near camera lenses (not illustrated in Figure 3). The infrared light is reflected back to the camera lenses and focused onto infrared image sensor 344. At operation 444, infrared sensor 344 detects the received light and outputs a grayscale image 346. For example, grayscale image 346 may appear as a solid background with a plurality of dots or circles indicating positions of the markers in the field of view.
[0039] At operation 448, threshold detection circuitry 348 is used to convert the grayscale image 346 to a binary image. For example, this can be done by applying upper and lower thresholding processing for each pixel. One example of this thresholding is given by:

pixel_out(x, y) = MAX, if pixel_in(x, y) > THR; 0, otherwise
Where THR is the threshold value and MAX is the value to be given if pixel value is greater than the threshold value. Accordingly, threshold detection circuitry 348 outputs binary image 350. In one example, the pixels of the binary image include a first value where sufficient light is detected by the image sensor, and a second value where no light, or insufficient light (below threshold) is detected. For example, where sufficient light is detected, the data structure representing the pixel value at that pixel location (e.g., a bit, byte, pixel or other data set) can be a 1, or a 0, or other value representing an on state or a light pixel; and where insufficient light is detected, the data structure representing that pixel value at that pixel location can be different, such as a 0 or a 1 or other value representing an off state or a dark pixel. Generally speaking, light and dark pixels can be any value provided that the processing system can distinguish a light pixel from a dark pixel (e.g., based on intensity or color). That is, if the processing circuit can distinguish light and dark pixels, the light pixels are sufficiently light and the dark pixels are likewise sufficiently dark.
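As an illustration, the thresholding described above can be sketched in a few lines of Python. This is only a sketch under assumed conventions (an 8-bit grayscale input, with THR and MAX values chosen arbitrarily); the function name to_binary is not part of the described system.

```python
import numpy as np

THR = 128  # threshold value (assumed; tuned per application)
MAX = 255  # value assigned where the pixel exceeds the threshold

def to_binary(gray, thr=THR, max_val=MAX):
    """Map each grayscale pixel to max_val if it exceeds thr, else 0."""
    gray = np.asarray(gray)
    return np.where(gray > thr, max_val, 0).astype(np.uint8)

# One row of grayscale pixels: dark, mid, bright, bright.
row = np.array([[10, 120, 130, 250]], dtype=np.uint8)
binary = to_binary(row)  # [[0, 0, 255, 255]]
```

In practice the threshold value would be tuned so that marker reflections reliably exceed it while background illumination does not.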
[0040] At operation 452, circle detection circuitry detects the centers of the markers in the image, which appear as circles within the image. The center coordinate of each marker can be defined with 2-D position data (x, y). These steps are repeated for multiple frames of images captured over time. At operation 454 the motion of each marker can be tracked from frame to frame.
[0041] Because the markers are in reality spherical (or otherwise of a three- dimensional geometry), it may be desirable in some applications to determine the center of the marker in 3 dimensions. This can allow for more accurate position determination and therefore, more accurate motion tracking.
[0042] In some embodiments, a new configuration for a marker is provided. Particularly, in some embodiments a hybrid marker can be provided that includes one or more non-reflective areas on the surface of the marker. Examples of this are shown in Figure 5. In the example at 510, the marker includes a plurality of non-reflective spots 514 spaced about its surface. The spots in this example are generally circular or elliptical, although other spot shapes can be used. A few alternative spot shapes are shown at 520. These examples include star shaped spots at 523, rounded spots (circular or elliptical) at 525 and square or rectangular at 527. In other applications, other quantities, shapes and sizes of nonreflective areas can be used. The aggregate amount of surface area covered by the nonreflective spots affects the amount of light reflected by the marker toward the cameras. The larger the area covered by nonreflective spots, the less light is reflected from the marker toward the cameras.
[0043] In some embodiments, the spots can be distributed uniformly about the surface of the marker, and the spot size can be uniform such that the ratio of reflective area to nonreflective area is uniform around the surface of the marker. In other embodiments the concentration of spots, or the spot size, the spot shape, or any combination of the foregoing, can vary at different locations on the surface of the marker. Depending on these variations, this can lead to variations in the ratio of reflective area to nonreflective area about the surface of the marker.
[0044] Any of a number of different techniques can be used to create nonreflective spots on a reflective marker. For example, the marker can be sprayed or otherwise coated with a nonreflective coating, the spot areas masked, and the marker sprayed or otherwise coated with a reflective coating to provide the desired pattern. Likewise, the opposite approach can be followed, in which the marker is first sprayed or otherwise coated with a reflective material, masked, and then sprayed or otherwise coated with a nonreflective coating. In another example, a wrap can be prepared with the reflective and nonreflective coatings patterned thereon, and the marker wrapped with this wrap. In still another example, the marker can be sprayed or coated with a reflective or nonreflective coating and stickers or other appliques applied to the surface to create the desired pattern.
[0045] In some embodiments, reflectivity and non-reflectivity can be determined based on performance of the system. For example, a surface may be considered to be reflective if it reflects a sufficient amount of light to enable the cameras to effectively detect motion for motion tracking operations in the system. Similarly, a surface may be considered to be nonreflective if it returns a low enough level of light from the motion tracking light sources such that the distance measurements can be made to the marker with sufficient accuracy for system operations. That is, reflectance of the nonreflective spots should be low enough such that light reflected from the motion tracking light sources off of the nonreflective spots will not inhibit the distance measurements. In some embodiments, a surface may be deemed to be reflective if its reflectance is greater than 80% at the operational wavelength of the motion tracking light sources. In other embodiments, a surface may be deemed to be reflective if its reflectance is greater than 70% at the operational wavelength of the motion tracking light sources. In still other embodiments, a surface may be deemed to be reflective if its reflectance is greater than 60% at the operational wavelength of the motion tracking light sources. Other reflectance levels below 60% may still be effective for motion tracking operations in various applications. In some embodiments, a surface may be deemed to be nonreflective if its reflectance is less than 20% at the operational wavelength of the motion tracking light sources. In other embodiments, a surface may be deemed to be nonreflective if its reflectance is less than 30% at the operational wavelength of the motion tracking light sources. In still other embodiments, a surface may be deemed to be nonreflective if its reflectance is less than 40% at the operational wavelength of the motion tracking light sources. 
Other reflectance levels above 40% may still be effective for the nonreflective surfaces provided the distance measurement apparatus is able to measure a distance to the nonreflective surface.
[0046] New processing techniques can be used to perform image processing with hybrid markers. Figures 6 and 7 illustrate an example of a new processing technique. Particularly, Figure 6 illustrates an example system for image processing using a hybrid marker in accordance with one embodiment of the systems and methods described herein; and Figure 7 illustrates an example process for image processing using a hybrid marker in accordance with one embodiment of the systems and methods described herein. Referring now to Figures 6 and 7, at operation 732, the markers are illuminated with a light source and cameras detect light reflected from the markers. The process of illuminating markers and capturing reflections can be performed using conventional techniques. However, in some embodiments, hybrid markers with nonreflective spot areas may be used such as, for example, those markers described above with reference to Figure 5. This results in a grayscale image 646 that can be sent from the detection circuitry in the cameras to threshold circuitry 648, which may be internal to the cameras or external to the cameras.
[0047] At operation 734, thresholding circuitry examines pixels of the grayscale image 646 to determine, for a given pixel, whether that pixel has exceeded a threshold level of brightness (or whiteness). Similar to the process described above with reference to Figure 4, this results in one pixel value (e.g., a light value or an on value) where the marker is detected, and a second pixel value (e.g., a dark value or an off value) where the marker is not detected. However, because a hybrid marker is used, the marker will not be detected where the nonreflective spots occur. Accordingly, rather than a light or white circle on a dark or black background, the binary image will also include the dark spots on the light background of the marker.
[0048] The resultant binary image 650 is transmitted to image processing circuitry 660. In this example, image processing circuitry 660 includes dilation circuitry 664 and erosion circuitry 668, which can be used to perform morphological dilation and erosion operations, respectively, on the binary image. At operation 736, dilation circuitry 664 performs a morphological dilation on the pixels. In one embodiment, this operation adds pixels to the boundaries of the objects in the image. The operation can be implemented to examine the value of a given input pixel and its surrounding neighbors and output the maximum value (e.g., the value of the light pixels) of all pixels in the input pixel's neighborhood. Accordingly, if a dark pixel is next to or near a light pixel, the dilation operation will effectively replace the dark pixel value with a light pixel value.
This can have the effect of expanding the light area of the circle to fill in the dark spots with light values. This operation can be implemented to act based on a kernel matrix 658, and the size of the matrix can be chosen to define how much expansion is provided. Kernel matrix 658 can be a user-defined matrix that can be adjusted based on the spot size and density. For dilation, the kernel matrix 658 can be adjusted to be larger than the gaps between the dark spots on the image, such that most if not all of the dark spots are completely filled in.
[0049] While dilation can have the effect of removing or 'filling in' the dark spots in the image, it can similarly expand the outer edges of the marker's representation, sometimes affecting the roundness of its representation in the image. Figure 8 is a diagram illustrating two examples of a binary image having three light areas representing detected markers. Figure 8 at 810 illustrates the light areas after dilation in which the dark spots are removed by the dilation operation. However, as this also shows, the circles representing the markers are enlarged and are no longer circular. In some embodiments, an erosion operation can be performed to correct this. Accordingly, at operation 738 erosion circuitry performs an erosion operation to at least partially correct for the distortion caused by the dilation operation. In some embodiments, the erosion operation is implemented such that the value of a pixel operated on is the minimum value of all pixels in that pixel's neighborhood. Accordingly, if a light pixel has at least one dark pixel (or other threshold number of dark pixels) in its neighborhood, that light pixel is output as a dark pixel. Figure 8 at 820 illustrates an example of the light areas after erosion. As can be seen in this example, this operation returns the light shapes to more of a circular geometry. With the circles re-created, at operation 740, circle detection circuitry 672 determines the center of the circles in the image. The refined binary images 675 and the data regarding their center points can be used by motion tracking circuitry (e.g., processing circuit 244) to track the motion of the subjects based on the output images and center data.
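A minimal NumPy sketch of the dilate-then-erode sequence (sometimes called morphological closing) may help illustrate operations 736 and 738. A production system would more likely use a library such as OpenCV (cv2.dilate / cv2.erode); the function names and the 3 x 3 kernel here are illustrative assumptions, not the described circuitry.

```python
import numpy as np

def dilate(binary, k=3):
    """Each output pixel becomes the maximum of its k x k neighborhood."""
    pad = k // 2
    padded = np.pad(binary, pad, constant_values=0)
    h, w = binary.shape
    return np.array([[padded[y:y + k, x:x + k].max() for x in range(w)]
                     for y in range(h)])

def erode(binary, k=3):
    """Each output pixel becomes the minimum of its k x k neighborhood."""
    pad = k // 2
    padded = np.pad(binary, pad, constant_values=0)
    h, w = binary.shape
    return np.array([[padded[y:y + k, x:x + k].min() for x in range(w)]
                     for y in range(h)])

# A light ring with a dark "spot" at its center, as a hybrid marker
# might appear after thresholding.
marker = np.array([[0, 0, 0, 0, 0],
                   [0, 1, 1, 1, 0],
                   [0, 1, 0, 1, 0],
                   [0, 1, 1, 1, 0],
                   [0, 0, 0, 0, 0]])

# Dilation fills the dark spot but also expands the outline; the
# subsequent erosion shrinks the outline back to its original footprint.
closed = erode(dilate(marker))
```

After closing, the center pixel is light and the shape occupies the same 3 x 3 footprint as the original ring, consistent with the behavior described for Figure 8.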
[0050] Various embodiments may use different techniques for determining the centers of the circles representing markers detected by the image sensors. Example embodiments of such a process are described with reference to Figures 9 and 10. Figure
9 is a diagram illustrating an example of pixel scanning to detect a center point in accordance with one embodiment of the systems and methods described herein. Figure
10 is a flow diagram illustrating an example process for center detection in accordance with the example of Figure 9. Referring now to Figure 9, in this example process, pixels of the binary image are scanned, and each pixel is read to determine whether it is a dark pixel or a light pixel. When a light pixel is detected as illustrated at 996, this indicates that the scanning has reached the outer edge of the circle 993. Scanning continues until a dark pixel is detected as illustrated at 997. The resultant scanline between light pixel detection 996 and dark pixel detection 997 describes a chord 917 at the scanline. This chord 917 has a length, XLEN, 911 and a center point 914. Center point 914 of chord 917 is a distance, YLEN, 912 from the topmost point of circle 993. This process is repeated for multiple scanlines until the maximum length chord is detected. The center point of the chord with the maximum length relative to the other chords can be defined as the center point of the circle 993. The center point's coordinates can be determined by the midpoint of the chord on that scanline and the distance, YLEN, of that midpoint from the circumference of circle 993. Once the length of the detected chords begins decreasing from an identified maximum length, the system can determine that it is now scanning beyond (in this example, below) the diameter of the circle and no further scanning is required for this circle. In the example illustrated in Figure 9, pixels are scanned from left to right as illustrated by pixel scan direction 968. In other examples, other scan directions can be utilized.
[0051] An example process for determining the center point in accordance with the foregoing is now described. Referring now to Figure 10, at operation 922, a first pixel is examined to determine whether it is light or dark. If it is dark, as illustrated at operation 924, the operation continues scanning as illustrated by flow line 925. In accordance with the example of Figure 9, this may indicate that the system is scanning and detecting pixels to the left of circle 993. If at operation 924 it is determined that the current pixel being examined is light, the system increments the length (e.g., XLEN 911) for that scanline. For example, for the first light pixel detected on that scanline, the length can be incremented from 0 to 1. This operation continues at steps 928 and 930 where subsequent pixels on that scanline are read and it is determined whether they are light or dark pixels. As subsequent light pixels are detected, the length of the chord for that scanline (e.g., XLEN 911) is incremented. This is illustrated by flow line 931. When the process encounters a dark pixel at operation 930, the system can determine the final length of the chord on that scanline as illustrated by operation 932. In the example of Figure 9, this can be analogized to a detect dark pixel event 997, signifying the end of chord 917, and computing the length, XLEN 911, of that chord.
[0052] At operation 934, the system determines whether the length of that chord is greater than the maximum length of any previously measured chords. If so, it sets this current length as the maximum length, determines the center of the chord, and then determines the distance YLEN 912 of the chord center to the top circumference of circle 993. This is illustrated at operation 936. At operation 940, the system defines this point as the current center point for the circle. If at operation 934 the system determines that the length of the current chord is not greater than the maximum length of any previously measured chords, in some embodiments the system can stop processing because this indicates that scanning has passed the diameter of the circle. In other embodiments, the system can continue processing until chords on all scanlines have been measured, or until a sufficient number of measurements resulting in XLEN < LMAX have been taken to ensure that this is not an anomaly. As indicated by operation 938, once a chord is measured, processing can proceed to the next scanline and the operation repeats to detect the circle edge and measure the length of the chord on that scanline. In other embodiments, the system need not wait until steps 932, 934 and 936 are completed before continuing to scan the next line, and scanning can proceed to the next scanline as soon as the system determines that it has reached the end of the circle.
[0053] Although not illustrated in Figure 10, in other embodiments when the system detects a dark pixel event, the system can determine whether or not it is already scanning inside the circle. This determination can be made, for example, by examining the results for previous pixels scanned on that scanline. If the system is scanning inside of the circle and encounters a dark pixel, this indicates that the system has reached the outer edge of the circle (e.g., a dark pixel event 997) and it can then determine whether the current length is greater than the maximum length, and if so update the length and determine the center. However, if the system is scanning outside of the circle and encounters a dark pixel, it simply continues scanning until it reaches a light pixel.
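The scanline procedure of Figures 9 and 10 can be sketched as follows. This is a simplified, hypothetical rendering: it scans every row rather than stopping once chord lengths begin to decrease, and it reports the row index of the longest chord directly rather than as a YLEN offset from the top of the circle.

```python
def find_circle_center(binary):
    """Scan each row; the longest run of light (1) pixels is taken as the
    diameter chord, and its midpoint as the circle center (x, y)."""
    best_len, best_center = 0, None
    for y, row in enumerate(binary):
        x = 0
        while x < len(row):
            if row[x] == 1:            # light pixel: start of a chord
                start = x
                while x < len(row) and row[x] == 1:
                    x += 1             # continue until a dark pixel
                length = x - start     # XLEN for this scanline
                if length > best_len:  # longest chord so far -> diameter
                    best_len = length
                    best_center = ((start + x - 1) / 2.0, y)
            else:
                x += 1                 # dark pixel outside the circle
    return best_center

# A 7x7 binary image containing a filled circle of radius 2 at (3, 3).
img = [[1 if (x - 3) ** 2 + (y - 3) ** 2 <= 4 else 0 for x in range(7)]
       for y in range(7)]
center = find_circle_center(img)  # (3.0, 3)
```

The early-termination and YLEN bookkeeping described above can be layered onto this skeleton without changing its structure.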
[0054] For applications in which there may be multiple circles on a captured image, the foregoing algorithms and other like embodiments may be extended by adding additional registers to capture the information for each circle. This can be accomplished, in some embodiments, by adding additional registers and updating only the related information. The current pixel can be categorized to the related circle object data set using, for example, a search window size. Using this information for multiple circles can allow multiple markers to be tracked at the same time with the same imaging systems.
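One way to extend the algorithm to multiple circles, as described above, is to keep one record ("register") per circle and assign each new detection to a circle whose last known center falls within a search window. The window size and the first-match policy below are illustrative assumptions, not details from the described system.

```python
def assign_to_circle(point, centers, window=20):
    """Return the index of the tracked circle whose last known center is
    within the search window of `point`, or None if no circle matches
    (in which case a new circle record would be created)."""
    px, py = point
    for i, (cx, cy) in enumerate(centers):
        if abs(px - cx) <= window and abs(py - cy) <= window:
            return i
    return None

# Last known centers of two tracked markers.
centers = [(10.0, 10.0), (100.0, 50.0)]
near_first = assign_to_circle((12.0, 9.0), centers)    # 0
unmatched = assign_to_circle((300.0, 300.0), centers)  # None
```

With one record per circle updated each frame, multiple markers can be tracked simultaneously from the same image stream.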
[0055] In addition to determining the center of the circle (e.g., for x, y position determination), it may also be important to determine the distance of the marker from the cameras. Because the marker is generally spherical (or otherwise not flat), it may be important to measure distance from the same relative point on each marker such as, for example, from the center point. Figure 11 is a diagram illustrating an example of calculating the distance between the image sensor and the center of the marker based on a distance measurement made to a point off center on the marker. In this example, it is assumed that the system intends to measure the distance, da, from IR sensor 1138 to the center of marker 1130. It is also assumed in this example that the marker 1130 is too reflective at the center point to get an accurate distance measurement, and therefore the distance must be measured from a nonreflective spot. In the illustrated example, that spot is nonreflective spot 1134. Accordingly, the system is able to measure the distance, dm, from image sensor 1138 to nonreflective spot 1134. The system can then be configured to compute the desired actual distance, da, based on the measured distance, dm. This can be computed by subtracting the difference, b, from the measured distance, dm. Accordingly, da = dm - b.

[0056] Because b is equal to the difference between R and a, da can be determined as: da = dm - b = dm - (R - a) = dm - R + a, where b = R - a.
[0057] By applying the Pythagorean theorem to the right triangle Rac, this can be written as: da = dm - R + √(R² - c²), where R² = a² + c².
[0058] Because the depth image provides the coordinate information, the value of c can be calculated from the center of the circle.
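The correction above reduces to a one-line computation. The function below is a sketch of that arithmetic; the name distance_to_center is illustrative, with dm, R, and c taken from the measured distance, the known marker radius, and the depth-image coordinates, respectively.

```python
import math

def distance_to_center(dm, R, c):
    """Actual distance da to the marker center, given the measured
    distance dm to a nonreflective spot, the marker radius R, and the
    lateral offset c of the spot (from the depth image coordinates),
    using da = dm - R + sqrt(R^2 - c^2)."""
    return dm - R + math.sqrt(R * R - c * c)

# With R = 10 and c = 6: a = sqrt(100 - 36) = 8, so b = R - a = 2,
# and a measured distance of 100 corrects to 98.
da = distance_to_center(100.0, 10.0, 6.0)  # 98.0
```

As a sanity check, when the spot's lateral offset c is zero, a = R and b = 0, so the corrected distance equals the measured distance, matching the derivation above.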
[0059] As used herein, a circuit might be implemented utilizing any form of hardware, software, or a combination thereof. For example, one or more processors, controllers, ASICs, PLAs, PALs, CPLDs, FPGAs, logical components, software routines or other mechanisms might be implemented to make up a circuit. In implementation, the various circuits described herein might be implemented as discrete circuits or the functions and features described can be shared in part or in total among one or more circuits. In other words, as would be apparent to one of ordinary skill in the art after reading this description, the various features and functionality described herein may be implemented in any given application and can be implemented in one or more separate or shared circuits in various combinations and permutations. Even though various features or elements of functionality may be individually described or claimed as separate circuits, one of ordinary skill in the art will understand that these features and functionality can be shared among one or more common circuits, and such description shall not require or imply that separate circuits are required to implement such features or functionality.
[0060] Where circuits are implemented in whole or in part using software, in one embodiment, these software elements can be implemented to operate with a computing or processing system capable of carrying out the functionality described with respect thereto. One such example computing system is shown in Figure 12. Various embodiments are described in terms of this example computing system 1200. After reading this description, it will become apparent to a person skilled in the relevant art how to implement the technology using other computing systems or architectures.
[0061] Referring now to Figure 12, computing system 1200 may represent, for example, computing or processing capabilities found within desktop, laptop and notebook computers; hand-held computing devices (smart phones, cell phones, palmtops, tablets, etc.); mainframes, supercomputers, workstations or servers; or any other type of special-purpose or general-purpose computing devices as may be desirable or appropriate for a given application or environment. Computing system 1200 might also represent computing capabilities embedded within or otherwise available to a given device. For example, a computing system might be found in other electronic devices such as, for example, digital cameras, navigation systems, cellular telephones, portable computing devices, modems, routers, WAPs, terminals and other electronic devices that might include some form of processing capability.
[0062] Computing system 1200 might include, for example, one or more processors, controllers, control modules, or other processing devices, such as a processor 1204. Processor 1204 might be implemented using a general-purpose or special-purpose processing engine such as, for example, a microprocessor (whether single-, dual- or multi-core processor), signal processor, graphics processor (e.g., GPU) controller, or other control logic. In the illustrated example, processor 1204 is connected to a bus 1202, although any communication medium can be used to facilitate interaction with other components of computing system 1200 or to communicate externally.
[0063] Computing system 1200 might also include one or more memory modules, simply referred to herein as main memory 1208. For example, in some embodiments random access memory (RAM) or other dynamic memory, might be used for storing information and instructions to be executed by processor 1204. Main memory 1208 might also be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 1204. Computing system 1200 might likewise include a read only memory ("ROM") or other static storage device coupled to bus 1202 for storing static information and instructions for processor 1204.
[0064] The computing system 1200 might also include one or more various forms of information storage mechanism 1210, which might include, for example, a media drive 1212 and a storage unit interface 1220. The media drive 1212 might include a drive or other mechanism to support fixed or removable storage media 1214. For example, a hard disk drive, a floppy disk drive, a magnetic tape drive, an optical disk drive, a CD or DVD drive (R or RW), a flash drive, or other removable or fixed media drive might be provided. Accordingly, storage media 1214 might include, for example, a hard disk, a floppy disk, magnetic tape, cartridge, optical disk, a CD or DVD, or other fixed or removable medium that is read by, written to or accessed by media drive 1212. As these examples illustrate, the storage media 1214 can include a computer usable storage medium having stored therein computer software or data.
[0065] In alternative embodiments, information storage mechanism 1210 might include other similar instrumentalities for allowing computer programs or other instructions or data to be loaded into computing system 1200. Such instrumentalities might include, for example, a fixed or removable storage unit 1222 and an interface
1220. Examples of such storage units 1222 and interfaces 1220 can include a program cartridge and cartridge interface, a removable memory (for example, a flash memory or other removable memory module) and memory slot, a flash drive and associated slot (for example, a USB drive), a PCMCIA slot and card, and other fixed or removable storage units 1222 and interfaces 1220 that allow software and data to be transferred from the storage unit 1222 to computing system 1200.
[0066] Computing system 1200 might also include a communications interface 1224. Communications interface 1224 might be used to allow software and data to be transferred between computing system 1200 and external devices. Examples of communications interface 1224 might include a modem or softmodem, a network interface (such as an Ethernet, network interface card, WiMedia, IEEE 802.XX, Bluetooth® or other interface), a communications port (such as for example, a USB port, IR port, RS232 port, or other port), or other communications interface. Software and data transferred via communications interface 1224 might typically be carried on signals, which can be electronic, electromagnetic (which includes optical) or other signals capable of being exchanged by a given communications interface 1224. These signals might be provided to communications interface 1224 via a channel 1228. This channel 1228 might carry signals and might be implemented using a wired or wireless communication medium. Some examples of a channel might include a phone line, a cellular link, an RF link, an optical link, a network interface, a local or wide area network, and other wired or wireless communications channels.
[0067] In this document, the terms "computer program medium" and "computer usable medium" are used to generally refer to media such as, for example, memory 1208, storage unit 1220, media 1214, and channel 1228. These and other various forms of computer program media or computer usable media may be involved in carrying one or more sequences of one or more instructions to a processing device for execution. Such instructions embodied on the medium, are generally referred to as "computer program code" or a "computer program product" (which may be grouped in the form of computer programs or other groupings). When executed, such instructions might enable the computing system 1200 to perform features or functions of the disclosed technology as discussed herein.
[0068] While various embodiments of the disclosed technology have been described above, it should be understood that they have been presented by way of example only, and not of limitation. Likewise, the various diagrams may depict an example architectural or other configuration for the disclosed technology, which is done to aid in understanding the features and functionality that can be included in the disclosed technology. The disclosed technology is not restricted to the illustrated example architectures or configurations, but the desired features can be implemented using a variety of alternative architectures and configurations. Indeed, it will be apparent to one of skill in the art how alternative functional, logical or physical partitioning and configurations can be implemented to implement the desired features of the technology disclosed herein. Also, a multitude of different constituent module names other than those depicted herein can be applied to the various partitions. Additionally, with regard to flow diagrams, operational descriptions and method claims, the order in which the steps are presented herein shall not mandate that various embodiments be implemented to perform the recited functionality in the same order unless the context dictates otherwise.
[0069] Although the disclosed technology is described above in terms of various exemplary embodiments and implementations, it should be understood that the various features, aspects and functionality described in one or more of the individual embodiments are not limited in their applicability to the particular embodiment with which they are described, but instead can be applied, alone or in various combinations, to one or more of the other embodiments of the disclosed technology, whether or not such embodiments are described and whether or not such features are presented as being a part of a described embodiment. Thus, the breadth and scope of the technology disclosed herein should not be limited by any of the above-described exemplary embodiments.
[0070] Terms and phrases used in this document, and variations thereof, unless otherwise expressly stated, should be construed as open ended as opposed to limiting.
As examples of the foregoing: the term "including" should be read as meaning
"including, without limitation" or the like; the term "example" is used to provide exemplary instances of the item in discussion, not an exhaustive or limiting list thereof; the terms "a" or "an" should be read as meaning "at least one," "one or more" or the like; and adjectives such as "conventional," "traditional," "normal," "standard," "known" and terms of similar meaning should not be construed as limiting the item described to a given time period or to an item available as of a given time, but instead should be read to encompass conventional, traditional, normal, or standard technologies that may be available or known now or at any time in the future. Likewise, where this document refers to technologies that would be apparent or known to one of ordinary skill in the art, such technologies encompass those apparent or known to the skilled artisan now or at any time in the future.
[0071] The presence of broadening words and phrases such as "one or more," "at least," "but not limited to" or other like phrases in some instances shall not be read to mean that the narrower case is intended or required in instances where such broadening phrases may be absent. The use of the term "module" does not imply that the components or functionality described or claimed as part of the module are all configured in a common package. Indeed, any or all of the various components of a module, whether control logic or other components, can be combined in a single package or separately maintained and can further be distributed in multiple groupings or packages or across multiple locations.
[0072] Additionally, the various embodiments set forth herein are described in terms of exemplary block diagrams, flow charts and other illustrations. As will become apparent to one of ordinary skill in the art after reading this document, the illustrated embodiments and their various alternatives can be implemented without confinement to the illustrated examples. For example, block diagrams and their accompanying description should not be construed as mandating a particular architecture or configuration.

Claims

What is claimed is:
1. A passive marker for motion tracking using an optical camera system, comprising:
an outer surface that reflects light from a light source in a direction toward the light source; and
a plurality of nonreflective spots on the outer surface of the passive marker.
2. The passive marker of claim 1, wherein each of the plurality of nonreflective spots is completely surrounded by reflective material.
3. The passive marker of claim 1, wherein the outer surface comprises a reflective paint or other coating and the nonreflective spots are disposed on the reflective paint or other coating.
4. The passive marker of claim 1, wherein the outer surface comprises a reflective coating applied to a nonreflective surface in a pattern that yields the plurality of nonreflective spots on the outer surface of the passive marker.
5. The passive marker of claim 1, wherein the nonreflective spots comprise at least one of circles, rectangles, triangles, and stars.
6. A process for determining the center of an image object of an image, the image comprising light and dark pixels and the image object representing a marker in a motion tracking system, the process comprising:
1) for a line of pixels in the image, examine a pixel in the line of pixels in the image;
2) determine if the examined pixel in the line of pixels is a light pixel or a dark pixel wherein:
(a) if the examined pixel is a dark pixel, examine a next pixel in the line of pixels and repeat step 2); and
(b) if the examined pixel is a light pixel: (i) increment a length value of a chord of the object defined by light pixels on the line of pixels; (ii) examine a next pixel in the line of pixels in the image; and (iii) if the next pixel in the line of pixels is a light pixel, repeat operations 2)(i)-2)(iii), but if the next pixel in the line of pixels is a dark pixel, proceed to operation 3); and
3) determine a final length of the chord of the object for the line of pixels and compare the final length of the chord to a previously determined maximum chord length for a previously scanned line or lines of pixels;
(a) if the determined final length of the chord is greater than a maximum chord length for any previously scanned line of pixels, set the final length of the chord as the maximum chord length and return to step 1) for a next line of pixels in the image; and
(b) if the determined final length of the chord is less than a maximum chord length for any previously scanned line of pixels, compute the center of the object as the center of the center pixel of the maximum-chord-length chord.
7. The process of claim 6, further comprising determining a distance from a center point of the chord to an edge of the object in a direction perpendicular to the chord.
8. The process of claim 6, further comprising receiving a grayscale image of the object and converting the grayscale image to the image comprising light and dark pixels.
9. The process of claim 8, wherein the image comprising light and dark pixels is a binary image and the process further comprises performing a thresholding operation to convert the grayscale image of the object into a binary image comprising light and dark pixels.
10. The process of claim 6, wherein the image comprising light and dark pixels is a binary image of the object.
11. The process of claim 6, wherein the image comprises a plurality of image objects.
12. A system for motion capture, comprising:
a plurality of light sources;
a plurality of cameras, each camera including an image sensor to receive light from a light source reflected to the image sensor by a reflective marker positioned on a subject; and
a motion capture circuit comprising a non-transitory machine-readable storage medium storing executable program instructions which, when executed, cause the motion capture circuit to perform a process of determining the center of an image object of an image from an image sensor, the image comprising light and dark pixels and the image object representing the reflective marker, the process comprising:
1) for a line of pixels in the image, examine a pixel in the line of pixels in the image;
2) determine if the examined pixel in the line of pixels is a light pixel or a dark pixel wherein:
(a) if the examined pixel is a dark pixel, examine a next pixel in the line of pixels and repeat step 2); and
(b) if the examined pixel is a light pixel: (i) increment a length value of a chord of the object defined by light pixels on the line of pixels; (ii) examine a next pixel in the line of pixels in the image; and (iii) if the next pixel in the line of pixels is a light pixel, repeat operations 2)(i)-2)(iii), but if the next pixel in the line of pixels is a dark pixel, proceed to operation 3); and
3) determine a final length of the chord of the object for the line of pixels and compare the final length of the chord to a previously determined maximum chord length for a previously scanned line or lines of pixels;
(a) if the determined final length of the chord is greater than a maximum chord length for any previously scanned line of pixels, set the final length of the chord as the maximum chord length and return to step 1) for a next line of pixels in the image; and
(b) if the determined final length of the chord is less than a maximum chord length for any previously scanned line of pixels, compute the center of the object as the center of the center pixel of the maximum-chord-length chord.
13. The system of claim 12, wherein the process further comprises determining a distance from a center point of the chords to an edge of the object in a direction perpendicular to the chord.
14. The system of claim 12, wherein the process further comprises receiving a grayscale image of the object and converting the grayscale image to the image comprising light and dark pixels.
15. The system of claim 14, wherein the image comprising light and dark pixels is a binary image and the process further comprises performing a thresholding operation to convert the grayscale image of the object into a binary image comprising light and dark pixels.
16. The system of claim 12, wherein the image comprising light and dark pixels is a binary image of the object.
17. The system of claim 12, wherein the image comprises a plurality of image objects each representing a marker.
18. A computer program product embodied in a non-transitory computer-readable medium including instructions that, when executed, cause one or more processors to perform operations to determine the center of an image object of an image from an image sensor, the image comprising light and dark pixels and the image object representing a reflective marker, the operations comprising:
1) for a line of pixels in the image, examine a pixel in the line of pixels in the image;
2) determine if the examined pixel in the line of pixels is a light pixel or a dark pixel wherein:
(a) if the examined pixel is a dark pixel, examine a next pixel in the line of pixels and repeat step 2); and
(b) if the examined pixel is a light pixel: (i) increment a length value of a chord of the object defined by light pixels on the line of pixels; (ii) examine a next pixel in the line of pixels in the image; and (iii) if the next pixel in the line of pixels is a light pixel, repeat operations 2)(i)-2)(iii), but if the next pixel in the line of pixels is a dark pixel, proceed to operation 3); and
3) determine a final length of the chord of the object for the line of pixels and compare the final length of the chord to a previously determined maximum chord length for a previously scanned line or lines of pixels;
(a) if the determined final length of the chord is greater than a maximum chord length for any previously scanned line of pixels, set the final length of the chord as the maximum chord length and return to step 1) for a next line of pixels in the image; and
(b) if the determined final length of the chord is less than a maximum chord length for any previously scanned line of pixels, compute the center of the object as the center of the center pixel of the maximum-chord-length chord.
19. The computer program product of claim 18, wherein the process further comprises determining a distance from a center point of the chords to an edge of the object in a direction perpendicular to the chord.
20. The computer program product of claim 18, wherein the process further comprises receiving a grayscale image of the object and converting the grayscale image to the image comprising light and dark pixels.
21. The computer program product of claim 20, wherein the image comprising light and dark pixels is a binary image and the process further comprises performing a thresholding operation to convert the grayscale image of the object into a binary image comprising light and dark pixels.
22. The computer program product of claim 18, wherein the image comprising light and dark pixels is a binary image of the object.
23. The computer program product of claim 18, wherein the image comprises a plurality of image objects each representing a marker.
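As an illustrative aid only, and not part of the claims, the chord-scan center-finding procedure recited in claims 6, 12, and 18 (with the thresholding operation of claims 9, 15, and 21) might be sketched roughly as follows. The function and variable names are hypothetical, and the sketch assumes a binary image supplied as rows of 0 (dark) and 1 (light) values:

```python
def binarize(gray, threshold=128):
    """Convert a grayscale image (rows of 0-255 values) into a binary
    image of dark (0) and light (1) pixels via a simple thresholding
    operation, as in claims 9, 15, and 21."""
    return [[1 if p >= threshold else 0 for p in row] for row in gray]


def find_object_center(image):
    """Scan each line (row) of a binary image, measure each chord of
    consecutive light pixels, and take the center of the object as the
    center pixel of the longest chord found."""
    max_len = 0
    center = None
    for y, row in enumerate(image):
        x = 0
        while x < len(row):
            if row[x] == 0:              # dark pixel: examine the next pixel
                x += 1
                continue
            start = x                    # light pixel: measure the chord
            while x < len(row) and row[x] == 1:
                x += 1
            length = x - start           # final length of this chord
            if length > max_len:         # longest chord seen so far
                max_len = length
                center = (start + length // 2, y)
    return center
```

For a roughly circular blob, the widest row of light pixels passes through the middle of the object, so the center pixel of the longest chord approximates the object's center; a real implementation would also handle multiple image objects per frame, as claims 11, 17, and 23 contemplate.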
PCT/IB2018/001638 2018-05-10 2018-05-10 Passive marker systems and methods for motion tracking WO2019215472A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/IB2018/001638 WO2019215472A2 (en) 2018-05-10 2018-05-10 Passive marker systems and methods for motion tracking

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/IB2018/001638 WO2019215472A2 (en) 2018-05-10 2018-05-10 Passive marker systems and methods for motion tracking

Publications (2)

Publication Number Publication Date
WO2019215472A2 true WO2019215472A2 (en) 2019-11-14
WO2019215472A3 WO2019215472A3 (en) 2019-12-12

Family

ID=67847748

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2018/001638 WO2019215472A2 (en) 2018-05-10 2018-05-10 Passive marker systems and methods for motion tracking

Country Status (1)

Country Link
WO (1) WO2019215472A2 (en)


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB0622451D0 (en) * 2006-11-10 2006-12-20 Intelligent Earth Ltd Object position and orientation detection device
CN105678817B (en) * 2016-01-05 2017-05-31 北京度量科技有限公司 A kind of method that high speed extracts circular image central point

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112526535A (en) * 2020-11-03 2021-03-19 上海炬佑智能科技有限公司 ToF sensing device and distance detection method thereof
CN112526535B (en) * 2020-11-03 2024-03-08 上海炬佑智能科技有限公司 ToF sensing device and distance detection method thereof

Also Published As

Publication number Publication date
WO2019215472A3 (en) 2019-12-12

Similar Documents

Publication Publication Date Title
US20200400428A1 (en) Systems and Methods of Locating a Control Object Appendage in Three Dimensional (3D) Space
US20200349765A1 (en) Object modeling and movement method and apparatus, and device
CN109076145B (en) Automatic range control for active illumination depth camera
US9741136B2 (en) Systems and methods of object shape and position determination in three-dimensional (3D) space
Rocchini et al. A low cost 3D scanner based on structured light
US20140307920A1 (en) Systems and methods for tracking occluded objects in three-dimensional space
US7711182B2 (en) Method and system for sensing 3D shapes of objects with specular and hybrid specular-diffuse surfaces
JP2010231780A (en) Method for estimating 3d pose of specular object
CN112669362B (en) Depth information acquisition method, device and system based on speckles
CN110807833B (en) Mesh topology obtaining method and device, electronic equipment and storage medium
CN113689578B (en) Human body data set generation method and device
CN109711246A (en) A kind of dynamic object recognition methods, computer installation and readable storage medium storing program for executing
US11830156B2 (en) Augmented reality 3D reconstruction
Xu et al. An adaptive correspondence algorithm for modeling scenes with strong interreflections
Liao et al. Indoor scene reconstruction using near-light photometric stereo
WO2019215472A2 (en) Passive marker systems and methods for motion tracking
US20220180545A1 (en) Image processing apparatus, image processing method, and program
JP5441752B2 (en) Method and apparatus for estimating a 3D pose of a 3D object in an environment
CN112348956B (en) Method, device, computer equipment and storage medium for reconstructing grid of transparent object
CN117128892A (en) Three-dimensional information measuring device, measuring method and electronic equipment
US20230368457A1 (en) Method and system for three-dimensional scanning of arbitrary scenes
CN111462309B (en) Modeling method and device for three-dimensional head, terminal equipment and storage medium
CN116681690A (en) Eyeball tracking method, device, computer equipment and storage medium
Jiddi Photometric registration of indoor real scenes using an RGB-D camera with application to mixed reality
Song et al. Light Pose Calibration for Camera-light Vision Systems

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18908288

Country of ref document: EP

Kind code of ref document: A2