US20030076980A1 - Coded visual markers for tracking and camera calibration in mobile computing systems - Google Patents

Coded visual markers for tracking and camera calibration in mobile computing systems

Info

Publication number
US20030076980A1
US20030076980A1
Authority
US
Grant status
Application
Patent type
Prior art keywords
marker
coded
method
user
code
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10262693
Inventor
Xiang Zhang
Nassir Navab
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Siemens Corporate Research Inc
Original Assignee
Siemens Corporate Research Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06K: RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 9/00: Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K 9/36: Image preprocessing, i.e. processing the image information without deciding about the identity of the image
    • G06K 9/46: Extraction of features or characteristics of the image
    • G06K 9/4604: Detecting partial patterns, e.g. edges or contours, or configurations, e.g. loops, corners, strokes, intersections
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/74: Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Abstract

A method for determining a pose of a user is provided, including the steps of capturing a video image sequence of an environment including at least one coded marker; detecting whether the coded marker is present in the video images; if the marker is present, extracting feature correspondences of the coded marker; determining a code of the coded marker using the feature correspondences; and comparing the determined code with a database of predetermined codes to determine the pose of the user. According to one embodiment, the coded marker includes four color blocks arranged in a square formation, and determining the code of the marker includes determining the color of each of the four blocks. According to another embodiment, the marker includes a coding matrix, and the code of the marker is determined by which numbered squares of the coding matrix are covered by a circle.

Description

    PRIORITY
  • This application claims priority to an application entitled “DESIGN CODED VISUAL MARKERS FOR TRACKING AND CAMERA CALIBRATION IN MOBILE COMPUTING SYSTEMS” filed in the United States Patent and Trademark Office on Oct. 4, 2001 and assigned Serial No. 60/326,960, the contents of which are hereby incorporated by reference.[0001]
  • BACKGROUND OF THE INVENTION
  • 1. Field of the Invention [0002]
  • The present invention relates generally to computer vision systems, and more particularly, to a system and method for tracking and camera calibration in a mobile computing system using coded visual markers. [0003]
  • 2. Description of the Related Art [0004]
  • In certain real-time mobile computing applications, it is crucial to precisely track the motion and obtain the pose (i.e., position and orientation) of a user in real-time, a task also known as localization. Several methods are currently available to carry out localization. For example, in augmented reality (AR) applications, magnetic and/or inertia trackers have been employed. However, the performance of magnetic and inertia trackers is often limited by their own characteristics. For example, magnetic trackers are affected by interference from nearby metal structures, and currently available inertia trackers can only be used to obtain information on orientation and are usually not very accurate in tracking very slow rotations. Additionally, infrared trackers have been employed, but these devices usually require the whole working area or environment to be densely covered with infrared sources or reflectors, making them unsuitable for a very large working environment. [0005]
  • Vision-based tracking methods have been used with limited success in many applications for motion tracking and camera calibration. Ideally, it should be possible to track motion or locate an object of interest based only on the natural features of captured (i.e., viewed) scenes of the environment. Despite the dramatic progress of computer hardware in the last decade and a large effort to develop adequate tracking methods, there is still no versatile vision-based tracking method available. Therefore, in controlled environments, such as large industrial sites, marker-based tracking is the method of choice. [0006]
  • Current developments of computer vision-based applications are making use of the latest advances in computer hardware and information technology (IT). One such development is to combine mobile computing and augmented reality technology to develop systems for localization and navigation guidance, data navigation, maintenance assistance, and system reconstruction in an industrial site. In these applications, a user is equipped with a mobile computer. In order to guide the user to navigate through the complex industrial site, a camera is attached to the mobile computer to track and locate the user in real-time via a marker-based tracking system. The localization information then can be used for database access and to produce immersive AR views. [0007]
  • To be used for real-time motion tracking and camera calibration in the applications described above, the markers of a marker-based tracking system need to have the following characteristics: (1) sufficient number of codes available for identification of distinct markers; (2) methods available for marker detection and decoding in real-time; and (3) robust detection and decoding under varying illumination conditions, which ensures the applicability of the marker in various environments. [0008]
  • SUMMARY OF THE INVENTION
  • According to one aspect of the present invention, a method for determining a pose of a user is provided including the steps of capturing a video image sequence of an environment including at least one coded marker; detecting if the at least one coded marker is present in the video images; if the at least one marker is present, extracting feature correspondences of the at least one coded marker; determining a code of the at least one coded marker using the feature correspondences; and comparing the determined code with a database of predetermined codes to determine the pose of the user. [0009]
  • According to another aspect of the present invention, the at least one coded marker includes four color blocks arranged in a square formation and the determining a code of the at least one marker further includes determining a color of each of the four blocks. [0010]
  • According to a further aspect of the present invention, the detecting step further includes applying a watershed transformation to the at least one coded marker to extract a plurality of closed-edge strings that form a contour of the at least one marker. [0011]
  • According to another aspect of the present invention, the at least one marker includes a coding matrix including a plurality of columns and rows with a numbered square at intersections of the columns and rows, the coding matrix being surrounded by a rectangular frame and a code of the at least one marker being determined by the numbered squares being covered by a circle. The coding matrix includes m columns and n rows, where m and n are whole numbers, resulting in 3×2^(m×n-4) codes. [0012]
  • According to a further aspect of the present invention, a system is provided including a plurality of coded markers located throughout an environment, each of the plurality of coded markers relating to a location in the environment, codes of the plurality of coded markers being stored in a database; a camera for capturing a video image sequence of the environment, the camera coupled to a processor; and the processor adapted for detecting if at least one coded marker is present in the video images, if the at least one marker is present, extracting feature correspondences of the at least one coded marker, determining a code of the at least one coded marker using the feature correspondences, and comparing the determined code with the database to determine the pose of the user. In one embodiment, the at least one coded marker includes four color blocks arranged in a square formation and a code of the at least one marker being determined by a color sequence of the blocks. In another embodiment, the at least one marker includes a coding matrix including a plurality of columns and rows with a numbered square at intersections of the columns and rows, the coding matrix being surrounded by a rectangular frame and a code of the at least one marker being determined by the numbered squares being covered by a circle. [0013]
  • In a further aspect, the camera and processor are mobile devices. [0014]
  • In another aspect, the system further includes a display device, wherein the display device provides to the user information relative to the location of the at least one marker. Additionally, based on a first location of the at least one marker, the display device provides to the user information to direct the user to a second location. [0015]
  • In yet another aspect, the system further includes an external database of information relative to a plurality of items located throughout the environment, wherein when the user is in close proximity to at least one of the plurality of items, the processor provides the user with access to the external database. Furthermore, the system includes a display device for displaying information of the external database to the user and for displaying virtual objects overlaid on the at least one item. [0016]
  • In a further aspect, the system includes a head-mounted display for overlaying information of the at least one item in a view of the user.[0017]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The above and other objects, features, and advantages of the present invention will become more apparent in light of the following detailed description when taken in conjunction with the accompanying drawings in which: [0018]
  • FIG. 1 is a block diagram of a system for tracking a user according to an embodiment of the present invention; [0019]
  • FIGS. 2(A) through 2(C) are several views of color coded visual markers used for tracking a user in an environment according to an embodiment of the present invention; [0020]
  • FIG. 3 is a flowchart illustrating a method for detecting and decoding the color coded visual markers of FIG. 2; [0021]
  • FIG. 4 is an image of a marker showing feature correspondences and lines projected onto the image to determine edges of the four blocks of the color coded visual marker; [0022]
  • FIGS. 5(A) through 5(C) are several views of black/white matrix coded visual markers used for tracking a user in an environment according to another embodiment of the present invention; [0023]
  • FIG. 6 is a flowchart illustrating a method for detecting and decoding the black/white matrix coded visual markers of FIG. 5; [0024]
  • FIG. 7 is an image of a marker depicting the method used to extract a corner point of the matrix coded visual marker according to the method illustrated in FIG. 6; and [0025]
  • FIG. 8 is a diagram illustrating the interpolation of marker points of a black/white matrix coded visual marker in accordance with the present invention.[0026]
  • DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
  • Preferred embodiments of the present invention will be described herein below with reference to the accompanying drawings. In the following description, well-known functions or constructions are not described in detail to avoid obscuring the invention in unnecessary detail. [0027]
  • The present invention is directed to coded visual markers for tracking and camera calibration in mobile computing systems, systems employing the coded visual markers and methods for detecting and decoding the markers when in use. According to one embodiment of the present invention, color coded visual markers are employed in systems for tracking a user and assisting the user in navigating a site or interacting with a piece of equipment. In another embodiment, black and white matrix coded visual markers are utilized. [0028]
  • Generally, the marker-based tracking system of the present invention includes a plurality of markers placed throughout a workspace or environment of a user. Each of the markers is associated with a code or label, and the code is associated with either a location of the marker or an item the marker is attached to. The user directs a camera, coupled to a processor, to one or more of the markers. The camera captures an image of the marker or markers, and the processor determines the codes of the markers. The codes are then used to extract information about the location of the markers or about items in close proximity to the markers. [0029]
  • It is to be understood that the present invention may be implemented in various forms of hardware, software, firmware, special purpose processors, or a combination thereof. In one embodiment, the present invention may be implemented in software as an application program tangibly embodied on a program storage device. The application program may be uploaded to, and executed by, a machine comprising any suitable architecture such as that shown in FIG. 1. Preferably, the machine 100 is implemented on a computer platform having hardware such as one or more central processing units (CPU) 102, a random access memory (RAM) 104, a read only memory (ROM) 106, input/output (I/O) interface(s) such as keyboard 108, cursor control device (e.g., a mouse) 110, display device 112 and camera 116 for capturing video images. The computer platform also includes an operating system and micro instruction code. The various processes and functions described herein may either be part of the micro instruction code or part of the application program (or a combination thereof) which is executed via the operating system. In addition, various other peripheral devices may be connected to the computer platform such as an additional data storage device 114 and a printing device. Preferably, the machine 100 is embodied in a mobile device such as a laptop computer, notebook computer, personal digital assistant (PDA), etc. [0030]
  • It is to be further understood that, because some of the constituent system components and method steps depicted in the accompanying figures may be implemented in software, the actual connections between the system components (or the process steps) may differ depending upon the manner in which the present invention is programmed. Given the teachings of the present invention provided herein, one of ordinary skill in the related art will be able to contemplate these and similar implementations or configurations of the present invention. [0031]
  • FIGS. 2(A) through 2(C) are several views of color coded visual markers used for tracking a user in an environment according to an embodiment of the present invention. The color based markers work well for relatively simple cases under favorable illumination conditions. Each of these markers 202, 204, 206 includes four square blocks that are either black or colored. To simplify marker detection and color classification, the color of the colored blocks is limited to one of the three primitive colors (i.e., red, green, and blue). [0032]
  • Referring to FIG. 2(A), the four blocks 208, 210, 212, 214 are centered at the four corner points of an invisible square 216, shown as a dashed line in FIG. 2(A). To determine the orientation of a marker, at least one and at most three of the four blocks of a marker carry a white patch 218. If there are two white patched blocks in one marker, the two blocks are preferably next to each other (not diagonal) to ensure that there will be no confusion in determining the orientation. [0033]
  • The marker 202 is coded by the colors of the four blocks 208, 210, 212, 214 and the number of white patched blocks. For marker coding, the color coded visual markers use ‘r’ for red, ‘g’ for green, ‘b’ for blue, and ‘d’ for black. The order of the code is clockwise from the first white patched block 208, which is the block at the upper-left, and includes a letter for the color of each block. (Note, the lower-left block is preferably not white patched, and at most the marker will include three white patched blocks.) The number at the end of the code is the number of white patched blocks of the corresponding marker. For example, the marker shown in FIG. 2(A) is coded as drdr1 (block 208 is black, block 210 is red, block 212 is black and block 214 is red), the marker shown in FIG. 2(B) is coded as rgbd2 (block 220 is red, block 222 is green, block 224 is blue and block 226 is black), and the marker shown in FIG. 2(C) is coded as dddd3 (blocks 228, 230, 232 and 234 are all black). Therefore, a color coded marker system according to an embodiment of the present invention can have 3×4^4 = 768 different color markers. [0034]
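    As an illustration of the coding convention above, the following Python sketch builds the code string from the four block colors read clockwise from the upper-left block and from the number of white patched blocks; the function and dictionary names are assumptions made for the example, not part of the marker design.

        # Sketch of the color-marker coding convention described above.
        COLOR_LETTERS = {"black": "d", "red": "r", "green": "g", "blue": "b"}

        def color_marker_code(colors_clockwise, num_white_patches):
            """Build a code such as 'drdr1' from the four block colors (clockwise
            from the upper-left block) and the number of white patched blocks."""
            if len(colors_clockwise) != 4:
                raise ValueError("a marker has exactly four blocks")
            if not 1 <= num_white_patches <= 3:
                raise ValueError("a marker has one to three white patched blocks")
            letters = "".join(COLOR_LETTERS[c] for c in colors_clockwise)
            return letters + str(num_white_patches)

        # The example markers of FIGS. 2(A) through 2(C):
        print(color_marker_code(["black", "red", "black", "red"], 1))      # drdr1
        print(color_marker_code(["red", "green", "blue", "black"], 2))     # rgbd2
        print(color_marker_code(["black", "black", "black", "black"], 3))  # dddd3

        # 3 white-patch counts times 4^4 color sequences:
        print(3 * 4 ** 4)  # 768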
  • With reference to FIG. 3, a method for detecting and decoding a color coded visual marker of an embodiment of the present invention will be described. [0035]
  • Initially, a user equipped with a mobile computer having a camera coupled to the computer enters a workspace or environment that has the color coded markers placed throughout. A video sequence of the environment including at least one marker is captured (step 302) to acquire an image of the marker. A watershed transformation is applied to the image to extract closed-edge strings that form the contours of the marker (step 304). Since the markers use the three primitive colors for marker coding, the watershed transformation need only be applied to the two RGB color components with the lower intensities to extract the color blocks. [0036]
  • In step 306, strings whose length is less than a predetermined value for representing a square block in a marker are eliminated. Then, the closed-edge strings are grouped based on the similarity of their lengths. The four strings that have the least maximum mutual distance are put in one group (step 308). The maximum mutual distance among a group of N closed-edge strings is defined as follows: [0037]
  • d_max := max(S(d_i,j))  (1)
  • where 1 ≤ i ≤ N, 1 ≤ j ≤ N, and i ≠ j; d_i,j is the distance between the weight center of string i and the weight center of string j; and S represents the set of d_i,j for all eligible i and j. The four weight centers of the strings in each group are used as correspondences of the centers of the four blocks of a marker to compute a first estimation of a homography from the marker model plane to the image plane (step 310). The homography is used to project eight straight lines that form the four blocks of the marker, as shown in FIG. 4 (step 312). These back-projected lines are then used as an initialization to fit straight lines on the image plane. The cross points of these straight lines are taken as the first estimation of the correspondences of the corner points of the marker. [0038]
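    A minimal Python sketch of the grouping rule of equation (1), selecting the four weight centers with the least maximum mutual distance; the function names and the sample coordinates are assumptions for illustration only.

        from itertools import combinations
        import math

        def max_mutual_distance(centers):
            # d_max := max over all pairs (i, j), i != j, of the distance
            # between the weight centers of strings i and j.
            return max(math.dist(ci, cj) for ci, cj in combinations(centers, 2))

        def best_group_of_four(weight_centers):
            # Return the group of four weight centers with the least d_max.
            return min(combinations(weight_centers, 4), key=max_mutual_distance)

        # Weight centers of candidate closed-edge strings (pixel coordinates).
        centers = [(102.3, 88.1), (140.7, 87.5), (101.9, 126.4),
                   (141.2, 126.8), (400.0, 60.0)]
        print(best_group_of_four(centers))  # the four clustered centers are chosen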
  • Along the first estimated edges, a 1-D Canny edge detection method, as is known in the art, is used (in the direction perpendicular to the first estimated edges) to accurately locate the edge points of the square blocks (step 314). Then, the eight straight lines fitted from these accurate edge points are used to extract the feature correspondences, i.e., corner points, of the marker with sub-pixel accuracy. Once the corner points of the marker are extracted along with the edge points of the square blocks, the blocks of the marker can be defined and each block can be analyzed for its color. [0039]
  • To determine the color of the blocks of the marker (step 316), the average values of the red, green, and blue components (denoted as R, G, and B) of all the pixels inside the block (the white patch area excluded) are measured. Then, the intensity I, hue H, and saturation S of the averaged block color are computed as follows: [0040]
    I = (R + G + B) / 3
    S = 1.0 - 3.0 * min(R, G, B) / (R + G + B)
    H = cos⁻¹{ 0.5 * [(R - G) + (R - B)] / sqrt[(R - G)^2 + (R - B)(G - B)] }  (2)
  • The color of the corresponding square block is then determined by the values of I, H, and S as follows: if I ≤ I_thr, the color is black; else, if S ≤ S_thr, the color is still black; else, if 0 ≤ H < 2π/3, the color is red; if 2π/3 ≤ H < 4π/3, the color is green; if 4π/3 ≤ H < 2π, the color is blue. Here, I_thr and S_thr are user-adjustable thresholds. [0041]
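    The classification above can be sketched in Python as follows; the threshold values, the clamping of the arccosine argument, and the reflection of the hue for B > G are assumptions of this sketch rather than details given in the text.

        import math

        def classify_block_color(R, G, B, I_thr=40.0, S_thr=0.2):
            I = (R + G + B) / 3.0
            S = 1.0 - 3.0 * min(R, G, B) / (R + G + B)
            num = 0.5 * ((R - G) + (R - B))
            den = math.sqrt((R - G) ** 2 + (R - B) * (G - B)) or 1e-9
            H = math.acos(max(-1.0, min(1.0, num / den)))
            if B > G:          # arccos covers [0, pi]; reflect for hues beyond pi
                H = 2.0 * math.pi - H
            if I <= I_thr or S <= S_thr:
                return "black"
            if H < 2.0 * math.pi / 3.0:
                return "red"
            if H < 4.0 * math.pi / 3.0:
                return "green"
            return "blue"

        print(classify_block_color(180, 40, 35))  # red
        print(classify_block_color(20, 22, 19))   # black (low intensity)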
  • Once the color of each block of a marker is determined, the code for the marker is derived as described above (step 318), for example, drdr1. Once the code has been determined, the code can be matched against a database of codes, where the database will have information related to the code (step 320), and the pose of the marker can be determined. For example, the information may include a location of the marker, a type of a piece of equipment the marker is attached to, etc. [0042]
  • By applying these color coded visual markers to real-time tracking and pose estimation, fast marker detection and extraction of correspondences can be achieved. The color coded visual markers provide up to 16 accurate correspondences available for calibration. Additionally, by taking the cross points of the color blocks, the correspondences of the four center points of the blocks can be located with higher accuracy; four points are the minimum number of correspondences for computing the homography, resulting in faster processing. [0043]
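    As a sketch of this step, OpenCV can estimate the marker-to-image homography from exactly the four block-center correspondences and then project further model points into the image; the coordinate values below are illustrative, and the use of cv2.getPerspectiveTransform is an implementation choice, not prescribed by the text.

        import numpy as np
        import cv2

        # Block centers in the marker model plane (arbitrary model units).
        model_pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=np.float32)
        # Corresponding weight centers measured in the image (pixels).
        image_pts = np.array([[210.4, 155.1], [289.7, 158.3],
                              [292.2, 236.0], [207.9, 233.5]], dtype=np.float32)

        # Exact homography from the minimum of four point correspondences.
        H = cv2.getPerspectiveTransform(model_pts, image_pts)

        # Project further model points (e.g. block corner positions) with H.
        corner = np.array([[[0.25, 0.25]]], dtype=np.float32)
        print(cv2.perspectiveTransform(corner, H))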
  • FIGS. 5(A) through 5(C) are several views of matrix coded visual markers used for tracking a user in an environment according to another embodiment of the present invention. Using the black/white matrix coded markers avoids the problems caused by instability of color classification under unfavorable lighting conditions. [0044]
  • Referring to FIG. 5(A), a black/white matrix coded marker 502 is formed by a thick rectangular frame 504 and a coding matrix 506 formed by a pattern of small black circles 508 distributed inside the inner rectangle of the marker. For example, the markers shown in FIGS. 5(A)-(C) are coded with a 4×4 coding matrix. [0045]
  • The marker 502 with a 4×4 coding matrix is coded using a 12-bit binary number, with each bit corresponding to a numbered position in the coding matrix as shown in FIG. 5(A). The 4 corner positions labeled ‘a’, ‘b’, ‘c’, and ‘d’ in the coding matrix are reserved for determination of the marker orientation. If the corresponding numbered position is covered by a small black circle, then the corresponding numbered bit of the 12-bit binary number is 1; otherwise it is 0. The marker is thus labeled by the decimal value of the 12-bit binary number. [0046]
  • To uniquely indicate the orientation of marker 502, the position labeled a is always white, i.e., a=0, while the position labeled d is always covered by a black circle, d=1. In addition, in the case that b is black, then c also has to be black. A letter is added to the end of the marker label to indicate one of the three combinations: a for (a=0, b=1, c=1, d=1), b for (a=0, b=0, c=1, d=1), and c for (a=0, b=0, c=0, d=1). Therefore, for a 4×4 coding matrix, there can be up to 3×2^12 = 12,288 distinct markers. Using a 5×5 coding matrix, there can be up to 3×2^21 = 6,291,456 distinct markers. Generally, using an m×n coding matrix, a black/white matrix coded visual marker system of an embodiment of the present invention can have 3×2^(m×n-4) markers. For applications that need a much smaller number of markers than the coding capacity, the redundant positions in the coding matrix can be used to implement automatic error-bit correction to improve the robustness of the marker decoding. Following the coding convention stated above, the marker shown in FIG. 5(B) is coded as 4095 b (wherein the 12-bit number is 111111111111) and the marker shown in FIG. 5(C) is coded as 1365 a (i.e., 010101010101). [0047]
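    The labelling convention can be sketched as follows; the function name, the bit ordering (bit 1 first), and the orientation-letter lookup are assumptions consistent with the description above.

        def matrix_marker_label(coding_bits, corner_abcd):
            """coding_bits: the 12 coding bits (bit 1 first), 1 = covered by a black circle.
            corner_abcd: values of the reserved orientation positions (a, b, c, d)."""
            a, b, c, d = corner_abcd
            if a != 0 or d != 1 or (b == 1 and c == 0):
                raise ValueError("orientation pattern violates the coding rules")
            letter = {(1, 1): "a", (0, 1): "b", (0, 0): "c"}[(b, c)]
            value = int("".join(str(bit) for bit in coding_bits), 2)
            return f"{value} {letter}"

        print(matrix_marker_label([1] * 12, (0, 0, 1, 1)))    # '4095 b', as in FIG. 5(B)
        print(matrix_marker_label([0, 1] * 6, (0, 1, 1, 1)))  # '1365 a', as in FIG. 5(C)
        print(3 * 2 ** (4 * 4 - 4))                           # 12288 distinct markers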
  • With reference to FIG. 6, a method for detecting and decoding a matrix coded visual marker of an embodiment of the present invention will be described. [0048]
  • Initially, a user equipped with a mobile computer having a camera coupled to the computer enters a workspace or environment that has the black/white matrix coded markers placed throughout. A video sequence of the environment including at least one marker is captured (step 602) to acquire an image of the marker. A watershed transformation is applied to the image to extract isolated low intensity areas and store their edges as closed-edge strings (step 604). Two closed-edge strings that have very close weight centers are found to form a contour of the marker, i.e., [0049]
  • d_i,j ≤ d_thr,
  • where d_i,j is the distance between the weight centers of the closed-edge strings i and j, and d_thr is an adjustable threshold (step 606). An additional condition for the two closed-edge strings to be a candidate of a marker contour is [0050]
  • if l_i < l_j, then c_lower*l_j ≤ l_i ≤ c_upper*l_j;
  • else c_lower*l_i ≤ l_j ≤ c_upper*l_i
  • where l_i and l_j are the lengths (in number of edge points) of the edge strings, and c_lower and c_upper are the coefficients for the lower and upper limits of the string length. For example, when the width of the inner square is 0.65 times the width of the outer square, c_lower = 0.5 and c_upper = 0.8 can be chosen. In addition, another condition check can be applied to see whether the bounding box of the shorter edge string is totally inside the bounding box of the longer edge string. FIG. 7 shows an example of such candidate edge strings. [0051]
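    A sketch of this candidate test in Python follows; the helper names and the default threshold d_thr are assumptions, while c_lower and c_upper follow the example values given above.

        import math

        def weight_center(points):
            xs, ys = zip(*points)
            return (sum(xs) / len(xs), sum(ys) / len(ys))

        def bounding_box(points):
            xs, ys = zip(*points)
            return (min(xs), min(ys), max(xs), max(ys))

        def bbox_inside(inner, outer):
            return (outer[0] <= inner[0] and outer[1] <= inner[1]
                    and inner[2] <= outer[2] and inner[3] <= outer[3])

        def is_marker_contour_candidate(string_i, string_j,
                                        d_thr=5.0, c_lower=0.5, c_upper=0.8):
            # Weight centers of the two closed-edge strings must be very close.
            if math.dist(weight_center(string_i), weight_center(string_j)) > d_thr:
                return False
            # String lengths (in number of edge points) must be compatible.
            li, lj = len(string_i), len(string_j)
            if li < lj:
                if not (c_lower * lj <= li <= c_upper * lj):
                    return False
                inner, outer = string_i, string_j
            else:
                if not (c_lower * li <= lj <= c_upper * li):
                    return False
                inner, outer = string_j, string_i
            # The shorter string's bounding box must lie inside the longer one's.
            return bbox_inside(bounding_box(inner), bounding_box(outer))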
  • For most conditions, there is no extreme projective distortion in the images of the markers. Therefore, the method can extract image points of the outer corners of a marker from the candidate edge strings (step 608). First, the points in the longer edge string are sorted into an order in which all the edge points are sequentially connected. Then, a predetermined number, e.g., twenty, of evenly distributed edge points are selected that evenly divide the sorted edge string into segments. With no extreme projective distortion, there should be 4 to 6 selected points on each side of the marker. For the case shown in FIG. 7, the cross point of the straight lines fitted using points 1 to 4 and points 5 to 8 will be the first estimation of the image correspondence of a corner point of the marker. The other corner points can be found similarly (step 610). [0052]
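    A numpy-based sketch of this corner estimation step, fitting a straight line to each run of selected edge points and intersecting the two lines; the fitting method (total least squares via SVD) and the sample points are assumptions of the sketch.

        import numpy as np

        def fit_line(points):
            # Fit a line a*x + b*y + c = 0 through the points (total least squares).
            pts = np.asarray(points, dtype=float)
            centroid = pts.mean(axis=0)
            _, _, vt = np.linalg.svd(pts - centroid)
            direction = vt[0]                         # dominant direction of the points
            normal = np.array([-direction[1], direction[0]])
            return normal[0], normal[1], -normal.dot(centroid)

        def cross_point(line1, line2):
            a1, b1, c1 = line1
            a2, b2, c2 = line2
            A = np.array([[a1, b1], [a2, b2]])
            return np.linalg.solve(A, -np.array([c1, c2]))  # line intersection

        side_a = [(10, 10.2), (20, 10.1), (30, 9.8), (40, 10.0)]   # points 1 to 4
        side_b = [(42, 12.0), (41.8, 22), (42.1, 32), (42.0, 42)]  # points 5 to 8
        print(cross_point(fit_line(side_a), fit_line(side_b)))     # approx. (42, 10)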
  • Based on the corner points obtained from the previous step, the estimation of the image correspondences of the marker corners can be improved by using all the edge points of the edge string to fit the lines and find the cross points (step 612). The 1-D Canny edge detection method is then applied to find the edges of the marker (step 614), and the final correspondences of the marker corners are computed. Once the marker has been detected, the image correspondences of the circles in the coding matrix need to be identified to determine the code of the marker. [0053]
  • There are two ways to extract the image correspondences of the circles of the matrix for decoding (step 616): (1) Project the marker to the image with the first estimation of a homography obtained from the correspondences of corner points c1, c2, c3 and c4. To get an accurate back projection, a non-linear optimization is needed in the estimation of the homography. (2) To avoid the non-linear optimization, the feature points can be approximated using linear interpolation. For this purpose, the interpolation functions of the 4-node, 2-dimensional linear serendipity element from the finite element method, as is known in the art, can be used. As shown in FIG. 8, the approximate image correspondence (u, v) of point (X, Y) can be obtained from: [0054]
    u(X, Y) = Σ_{i=1..4} N_i(X, Y) * u_i
    v(X, Y) = Σ_{i=1..4} N_i(X, Y) * v_i  (3)
  • where the interpolation function N_i(X, Y) is expressed as [0055]
    N_i(X, Y) = (1/4) * (1 + X*X_i) * (1 + Y*Y_i),  (4)
  • for i = 1, 2, 3, and 4. [0056]
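    A sketch of equations (3) and (4) in Python, where the model point (X, Y) is assumed to be expressed in the element's natural coordinates, i.e. the four corner nodes sit at (±1, ±1); the corner image coordinates below are illustrative values.

        def shape_functions(X, Y, nodes=((-1, -1), (1, -1), (1, 1), (-1, 1))):
            # N_i(X, Y) = 1/4 * (1 + X*X_i) * (1 + Y*Y_i) for the four corner nodes.
            return [0.25 * (1 + X * Xi) * (1 + Y * Yi) for Xi, Yi in nodes]

        def interpolate_image_point(X, Y, corner_uv):
            # corner_uv: image coordinates (u_i, v_i) of the four marker corners,
            # ordered to match the nodes (-1,-1), (1,-1), (1,1), (-1,1).
            N = shape_functions(X, Y)
            u = sum(Ni * ui for Ni, (ui, _) in zip(N, corner_uv))
            v = sum(Ni * vi for Ni, (_, vi) in zip(N, corner_uv))
            return u, v

        corners = [(120.0, 80.0), (220.0, 85.0), (225.0, 190.0), (118.0, 185.0)]
        print(interpolate_image_point(0.0, 0.0, corners))   # center of the marker
        print(interpolate_image_point(-0.5, 0.5, corners))  # a coding-matrix position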
  • Then, the 1-D Canny edge detection is also applied to accurately locate the correspondences of the corners of the inner square. [0057]
  • Once the circles of the matrix of a marker are determined, the code for the marker is derived as described above (step 618), for example, 4095 b as shown in FIG. 5(B). Once the code has been determined, the code can be matched against a database of codes, where the database will have information related to the code (step 620), and the pose of the marker can be determined. Additionally, the centers of the black circles can be used as additional correspondences for camera calibration. For a marker using a 4×4 coding matrix, there can be up to 23 correspondences (i.e., the marker coded 4095 a). [0058]
  • By using the black/white matrix coded markers as described above, marker detection and decoding are based on image intensity only. Therefore, the detection and decoding are not affected by color classification problems, and stable decoding results can be obtained in various environments. For the purposes of detecting markers and finding correspondences, only an 8-bit gray level image is needed, resulting in processing a smaller amount of data and achieving better system performance. Additionally, the black/white matrix coded markers provide a larger number of different coded markers, resulting in increased coding flexibility. [0059]
  • In some applications, it is not necessary to have a large number (e.g., tens of thousands) of distinctly coded markers, and marker decoding robustness is more important. To increase the decoding robustness, error-correcting coding can be applied to the decoding of markers. For example, when using the 4×4 coding matrix, up to 12 bits are available for marker coding. Without considering automatic error correction, up to 12,288 different markers are available. According to the Hamming bound theorem, as is known in the art, a 12-bit binary signal can have 2^5 = 32 codes with a least Hamming distance of 5 (to which a 2-bit automatic error correction can be applied). If only 1-bit automatic error correction coding is needed (the least Hamming distance is 3), up to 2^8 = 256 codes with 12-bit coding are available. [0060]
  • For example, assume the codes ‘000000001001’ and ‘000000000111’ are eligible codes from a set of codes that have at least a Hamming distance of 3 between any two eligible codes. Then, by marker detection and decoding, a resulting code r=‘000000000011’ that is not in the set of eligible codes is obtained. There is at least a 1-bit error in r. Comparing with all the eligible codes, the Hamming distance between r and the second code, ‘000000000111’, is 1, and the Hamming distance between r and the first code, ‘000000001001’, is 2. The Hamming distance between r and any other eligible code is larger than or equal to 3. Therefore, by choosing the eligible code that has the least Hamming distance to r, the 1-bit error can be automatically corrected, and the final decoding result is then set to ‘000000000111’, which is the second code. [0061]
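    The nearest-code correction in the example can be sketched as follows; the function names are assumptions, and the two-code set is just the example above.

        def hamming_distance(a, b):
            return sum(ch1 != ch2 for ch1, ch2 in zip(a, b))

        def correct_code(observed, eligible_codes):
            # Choose the eligible code with the least Hamming distance to the
            # observed (possibly erroneous) decoding result.
            return min(eligible_codes, key=lambda code: hamming_distance(observed, code))

        eligible = ["000000001001", "000000000111"]
        r = "000000000011"                       # decoding result with a 1-bit error
        print(hamming_distance(r, eligible[0]))  # 2
        print(hamming_distance(r, eligible[1]))  # 1
        print(correct_code(r, eligible))         # '000000000111'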
  • The marker systems of the present invention can obtain accurate (sub-pixel) correspondences of more than 4 co-planar points using one marker or a set of markers in the same plane. Since the metric information of the feature points on the markers is known, there are two cases in which the information can be used to carry out camera calibration: (i) to obtain both intrinsic and extrinsic camera parameters; and (ii) pose estimation, i.e., when the intrinsic camera parameters are known, to obtain the extrinsic parameters. In the first case, a homography-based calibration algorithm can be applied. For the second case, either the homography-based algorithm or a conventional 3-point algorithm can be applied. In many cases, the camera's intrinsic parameters can be obtained using Tsai's algorithm, as is known in the art, or the homography-based algorithm. [0062]
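    As a sketch of case (ii), pose estimation with known intrinsics from four or more coplanar marker points, OpenCV's planar PnP solver can be used in place of the homography-based or 3-point algorithms named above; the marker size, image points, and intrinsic values are illustrative assumptions.

        import numpy as np
        import cv2

        # Model coordinates (in meters) of the four block centers of a 20 cm marker.
        object_points = np.array([[0.0, 0.0, 0.0], [0.2, 0.0, 0.0],
                                  [0.2, 0.2, 0.0], [0.0, 0.2, 0.0]])
        # Their detected image correspondences (pixels).
        image_points = np.array([[320.5, 240.2], [402.1, 238.9],
                                 [405.4, 321.7], [318.8, 323.0]])
        # Known intrinsic parameters (focal lengths and principal point).
        K = np.array([[800.0, 0.0, 320.0],
                      [0.0, 800.0, 240.0],
                      [0.0, 0.0, 1.0]])
        dist_coeffs = np.zeros(5)

        ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist_coeffs)
        R, _ = cv2.Rodrigues(rvec)   # rotation of the marker frame in the camera frame
        print(ok, R, tvec)           # extrinsic parameters (pose)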
  • The coded visual markers of the present invention can be used in many applications, for example, for localization and data navigation. In this application, a user is equipped with a mobile computer that has a (wireless) network connection with a main system, e.g., a server, so the user can access a related database. A camera is attached to the mobile computer, for example, a SONY VAIO™ with a built-in USB camera and a built-in microphone, or a Xybernaut™ mobile computer with a plug-in USB camera and microphone. The system can help the user locate their coordinates in large industrial environments and present information obtained from the database and the real-time systems. The user can interact with the system using a keyboard, touch pad, or even voice. In this application, the markers' coordinates and orientations in the global system are predetermined; the camera captures the marker, and the system computes the pose of the camera relative to the captured marker, thus obtaining the position and orientation of the camera in the global system. Such localization information is then used for accessing related external databases, for example, to obtain the closest view of an on-site image with a 3-D reconstructed virtual structure overlay, or to present the internal design parameters of a piece of equipment of interest. Additionally, the localization information can also be used to navigate the user through the site. [0063]
  • Furthermore, the coded visual markers of the present invention can be employed in Augmented Reality (AR) systems. A head-mounted display (HMD) is a key component in creating an immersive AR environment for the user, i.e., an environment where virtual objects are combined with real objects. There are usually two kinds of HMDs: the optical-see-through HMD and the video-see-through HMD. The optical-see-through HMD directly uses a scene of the real world with the superimposition of virtual objects projected to the eye using a projector attached to eyeglasses. Since the real world is directly captured by the eye, this usually requires calibration of the HMD with the user's eyes to obtain good registration between the virtual objects and the real world. In addition, it also requires better motion tracking performance to reduce the discrepancies between the real and virtual world objects. The video-see-through HMD uses a pair of cameras to capture scenes of the real world, which are presented to the user. The superimposition of virtual objects is performed on the captured images. Therefore, only the camera needs to be calibrated for such AR processes. With the real-time detection and decoding features of the present invention, the coded markers described above are suitable for motion tracking and calibration in HMD applications, in both industrial and medical settings. [0064]
  • While the invention has been shown and described with reference to certain preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. [0065]

Claims (34)

    What is claimed is:
  1. A method for determining a pose of a user comprising the steps of:
    capturing a video image sequence of an environment including at least one coded marker;
    detecting if the at least one coded marker is present in the video images;
    if the at least one marker is present, extracting feature correspondences of the at least one coded marker;
    determining a code of the at least one coded marker using the feature correspondences; and
    comparing the determined code with a database of predetermined codes to determine the pose of the user.
  2. The method as in claim 1, wherein the at least one coded marker comprises four color blocks arranged in a square formation.
  3. The method as in claim 2, wherein the detecting step further comprises applying a watershed transformation to the at least one coded marker to extract a plurality of closed-edge strings that form a contour of the at least one marker.
  4. The method as in claim 3, wherein the detecting step further comprises grouping at least four closed-edge strings with a least maximum mutual distance.
  5. The method as in claim 4, wherein the extracting step further comprises:
    locating a weight center for each of the at least four closed-edge strings; and
    using the weight centers as a correspondence of each of the four blocks to compute a homography from the at least one coded marker to an image of the marker.
  6. The method as in claim 5, wherein the extracting step further comprises projecting eight lines onto the marker image using the homography to locate the four blocks of the at least one marker.
  7. The method as in claim 6, wherein the extracting step further comprises applying a 1-D Canny edge detection to locate the edge points of the four blocks.
  8. The method as in claim 2, wherein the determining a code of the at least one marker further comprises determining a color of each of the four blocks.
  9. The method as in claim 8, wherein at least one of the four blocks of the at least one marker includes a white patch.
  10. The method as in claim 1, wherein the at least one marker comprises a coding matrix including a plurality of columns and rows with a numbered square at intersections of the columns and rows, the coding matrix being surrounded by a rectangular frame and a code of the at least one marker being determined by the numbered squares being covered by a circle.
  11. The method as in claim 10, wherein the coding matrix includes m columns and n rows, where m and n are whole numbers, resulting in 3×2^(m×n-4) codes.
  12. The method as in claim 10, wherein the detecting step further comprises applying a watershed transformation to the at least one coded marker to extract a plurality of closed-edge strings that form a contour of the at least one marker.
  13. The method as in claim 12, wherein the detecting step further comprises locating at least two closed-edge strings that have close weight centers.
  14. The method as in claim 13, wherein the detecting step further comprises locating a corner of the rectangular frame of the at least one marker by determining a cross-point of the at least two closed-edge strings.
  15. The method as in claim 12, wherein the detecting step comprises locating corners of the rectangular frame of the at least one marker by locating cross-points of the plurality of closed-edge strings.
  16. The method as in claim 15, wherein the extracting step further comprises applying a 1-D Canny edge detection to locate the edge points of the rectangular frame.
  17. The method as in claim 16, wherein the extracting step further comprises:
    computing a homography from the corners and edge points;
    extracting image feature correspondences of the at least one marker; and
    determining locations of the circles in the coding matrix by the image correspondences.
  18. The method as in claim 17, wherein the extracting image feature correspondences is performed by linear interpolation.
  19. The method as in claim 17, further comprising the step of calibrating a camera used to capture the video image sequence with the image correspondences of the at least one marker.
  20. The method as in claim 19, further comprising the step of determining a position and orientation of the camera relative to the at least one marker.
  21. A program storage device readable by machine, tangibly embodying a program of instructions executable by the machine to perform method steps for determining a pose of a user, the method steps comprising:
    capturing a video image sequence of an environment including at least one coded marker;
    detecting if the at least one coded marker is present in the video images;
    if the at least one marker is present, extracting feature correspondences of the at least one coded marker;
    determining a code of the at least one coded marker using the feature correspondences; and
    comparing the determined code with a database of predetermined codes to determine the pose of the user.
  22. The program storage device as in claim 21, further comprising the step of determining a location of the user based on the pose of the user and a position of the at least one marker.
  23. A system comprising:
    a plurality of coded markers located throughout an environment, each of the plurality of coded markers relating to a location in the environment, codes of the plurality of coded markers being stored in a database;
    a camera for capturing a video image sequence of the environment, the camera coupled to a processor; and
    the processor adapted for detecting if at least one coded marker is present in the video images, if the at least one marker is present, extracting feature correspondences of the at least one coded marker, determining a code of the at least one coded marker using the feature correspondences, and comparing the determined code with the database to determine the pose of the user.
  24. The system as in claim 23, wherein the at least one coded marker comprises four color blocks arranged in a square formation and a code of the at least one marker being determined by a color sequence of the blocks.
  25. The system as in claim 23, wherein the at least one marker comprises a coding matrix including a plurality of columns and rows with a numbered square at intersections of the columns and rows, the coding matrix being surrounded by a rectangular frame and a code of the at least one marker being determined by the numbered squares being covered by a circle.
  26. The system as in claim 23, wherein the camera and processor are mobile devices.
  27. The system as in claim 23, wherein the camera and processor are housed in an integral mobile device.
  28. The system as in claim 23, wherein based on a first location of the at least one marker, the processor being adapted to direct the user to a second location.
  29. The system as in claim 23, further comprising a display device, wherein the display device will provide to the user information relative to the location of the at least one marker.
  30. The system as in claim 23, further comprising a display device, wherein based on a first location of the at least one marker, the display device will provide to the user information to direct the user to a second location.
  31. The system as in claim 23, further comprising an external database of information relative to a plurality of items located throughout the environment, wherein when the user is in close proximity to at least one of the plurality of items, the processor provides the user with access to the external database.
  32. The system as in claim 31, further comprising a display device for displaying information of the external database to the user.
  33. The system as in claim 31, further comprising a display device for displaying virtual objects overlaid on the at least one item.
  34. The system as in claim 31, further comprising a head-mounted display for overlaying information of the at least one item in a view of the user.
US10262693 2001-10-04 2002-10-02 Coded visual markers for tracking and camera calibration in mobile computing systems Abandoned US20030076980A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US32696001 true 2001-10-04 2001-10-04
US10262693 US20030076980A1 (en) 2001-10-04 2002-10-02 Coded visual markers for tracking and camera calibration in mobile computing systems

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US10262693 US20030076980A1 (en) 2001-10-04 2002-10-02 Coded visual markers for tracking and camera calibration in mobile computing systems
US11704137 US7809194B2 (en) 2001-10-04 2007-02-08 Coded visual markers for tracking and camera calibration in mobile computing systems

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US11704137 Division US7809194B2 (en) 2001-10-04 2007-02-08 Coded visual markers for tracking and camera calibration in mobile computing systems

Publications (1)

Publication Number Publication Date
US20030076980A1 true true US20030076980A1 (en) 2003-04-24

Family

ID=26949399

Family Applications (2)

Application Number Title Priority Date Filing Date
US10262693 Abandoned US20030076980A1 (en) 2001-10-04 2002-10-02 Coded visual markers for tracking and camera calibration in mobile computing systems
US11704137 Active 2025-03-30 US7809194B2 (en) 2001-10-04 2007-02-08 Coded visual markers for tracking and camera calibration in mobile computing systems

Family Applications After (1)

Application Number Title Priority Date Filing Date
US11704137 Active 2025-03-30 US7809194B2 (en) 2001-10-04 2007-02-08 Coded visual markers for tracking and camera calibration in mobile computing systems

Country Status (1)

Country Link
US (2) US20030076980A1 (en)

Cited By (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030068074A1 (en) * 2001-10-05 2003-04-10 Horst Hahn Computer system and a method for segmentation of a digital image
US20030218638A1 (en) * 2002-02-06 2003-11-27 Stuart Goose Mobile multimodal user interface combining 3D graphics, location-sensitive speech interaction and tracking technologies
US20040136567A1 (en) * 2002-10-22 2004-07-15 Billinghurst Mark N. Tracking a surface in a 3-dimensional scene using natural visual features of the surface
DE102005005242A1 (en) * 2005-02-01 2006-08-10 Volkswagen Ag Camera offset determining method for motor vehicle`s augmented reality system, involves determining offset of camera position and orientation of camera marker in framework from camera table-position and orientation in framework
US20070263923A1 (en) * 2004-04-27 2007-11-15 Gienko Gennady A Method for Stereoscopic Measuring Image Points and Device for Carrying Out Said Method
US20080062124A1 (en) * 2006-09-13 2008-03-13 Electronics And Telecommunications Research Institute Mouse interface apparatus using camera, system and method using the same, and computer recordable medium for implementing the same
WO2008073563A1 (en) * 2006-12-08 2008-06-19 Nbc Universal, Inc. Method and system for gaze estimation
US20080170750A1 (en) * 2006-11-01 2008-07-17 Demian Gordon Segment tracking in motion picture
US20080291272A1 (en) * 2007-05-22 2008-11-27 Nils Oliver Krahnstoever Method and system for remote estimation of motion parameters
US20090174595A1 (en) * 2005-09-22 2009-07-09 Nader Khatib SAR ATR treeline extended operating condition
US20090290753A1 (en) * 2007-10-11 2009-11-26 General Electric Company Method and system for gaze estimation
US7850067B1 (en) * 2007-11-27 2010-12-14 Sprint Communications Company L.P. Color bar codes
US8055296B1 (en) 2007-11-06 2011-11-08 Sprint Communications Company L.P. Head-up display communication system and method
US20110305368A1 (en) * 2010-06-11 2011-12-15 Nintendo Co., Ltd. Storage medium having image recognition program stored therein, image recognition apparatus, image recognition system, and image recognition method
CN101650828B (en) 2009-09-07 2012-03-07 东南大学 Method for reducing random error of round object location in camera calibration
US20120133780A1 (en) * 2010-11-29 2012-05-31 Microsoft Corporation Camera calibration with lens distortion from low-rank textures
US8264422B1 (en) 2007-11-08 2012-09-11 Sprint Communications Company L.P. Safe head-up display of information
EP2172873A3 (en) * 2008-10-06 2012-11-21 Mobileye Vision Technologies Bundling of driver assistance systems
US8355961B1 (en) 2007-08-03 2013-01-15 Sprint Communications Company L.P. Distribution center head-up display
US20130050499A1 (en) * 2011-08-30 2013-02-28 Qualcomm Incorporated Indirect tracking
CN103218820A (en) * 2013-04-22 2013-07-24 苏州科技学院 Camera calibration error compensation method based on multi-dimensional characteristics
GB2499498A (en) * 2011-12-23 2013-08-21 Zappar Ltd Locating identifier in image data and augmented reality view
US8558893B1 (en) 2007-08-03 2013-10-15 Sprint Communications Company L.P. Head-up security display
CN103477348A (en) * 2011-04-21 2013-12-25 微软公司 Color channels and optical markers
US8625854B2 (en) 2005-09-09 2014-01-07 Industrial Research Limited 3D scene scanner and a position and orientation system
EP2492845A3 (en) * 2011-02-24 2014-08-20 Nintendo Co., Ltd. Image recognition program, image recognition apparatus, image recognition system, and image recognition method
US8882591B2 (en) 2010-05-14 2014-11-11 Nintendo Co., Ltd. Storage medium having image display program stored therein, image display apparatus, image display system, and image display method
CN104680535A (en) * 2015-03-06 2015-06-03 南京大学 Calibration target, calibration system and calibration method for binocular direct-vision camera
US20150193935A1 (en) * 2010-09-09 2015-07-09 Qualcomm Incorporated Online reference generation and tracking for multi-user augmented reality
US9158777B2 (en) 2010-03-30 2015-10-13 Gravity Jack, Inc. Augmented reality methods and apparatus
US9189856B1 (en) * 2013-03-13 2015-11-17 Electronic Scripting Products, Inc. Reduced homography for recovery of pose parameters of an optical apparatus producing image data with structural uncertainty
US20160104323A1 (en) * 2014-10-10 2016-04-14 B-Core Inc. Image display device and image display method
US9338447B1 (en) * 2012-03-14 2016-05-10 Amazon Technologies, Inc. Calibrating devices by selecting images having a target having fiducial features
WO2017105964A1 (en) * 2015-12-16 2017-06-22 Lucasfilm Entertainment Company Ltd. Multi-channel tracking pattern
US9852512B2 (en) 2013-03-13 2017-12-26 Electronic Scripting Products, Inc. Reduced homography based on structural redundancy of conditioned motion

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2857131A1 (en) * 2003-07-01 2005-01-07 Thomson Licensing Sa Method of automatic registration of a geometric model of a scene on an image of the scene, implementation device and programming support.
US7848564B2 (en) * 2005-03-16 2010-12-07 Lucasfilm Entertainment Company Ltd. Three-dimensional motion capture
CA2566260C (en) * 2005-10-31 2013-10-01 National Research Council Of Canada Marker and method for detecting said marker
JP5084167B2 (en) * 2006-03-31 2012-11-28 キヤノン株式会社 Position and orientation measuring method and apparatus
US8542236B2 (en) * 2007-01-16 2013-09-24 Lucasfilm Entertainment Company Ltd. Generating animation libraries
US8130225B2 (en) 2007-01-16 2012-03-06 Lucasfilm Entertainment Company Ltd. Using animation libraries for object identification
US8199152B2 (en) * 2007-01-16 2012-06-12 Lucasfilm Entertainment Company Ltd. Combining multiple session content for animation libraries
CN101589408B (en) * 2007-01-23 2014-03-26 日本电气株式会社 Marker generating and marker detecting system, method and program
US8144153B1 (en) 2007-11-20 2012-03-27 Lucasfilm Entertainment Company Ltd. Model production for animation libraries
EP2157545A1 (en) * 2008-08-19 2010-02-24 Sony Computer Entertainment Europe Limited Entertainment device, system and method
US9142024B2 (en) 2008-12-31 2015-09-22 Lucasfilm Entertainment Company Ltd. Visual and physical motion sensing for three-dimensional motion capture
US8948447B2 (en) 2011-07-12 2015-02-03 Lucasfilm Entertainment Companyy, Ltd. Scale independent tracking pattern
US9508176B2 (en) 2011-11-18 2016-11-29 Lucasfilm Entertainment Company Ltd. Path and speed based character control
CN104271046B (en) 2012-03-07 2018-01-16 齐特奥股份有限公司 A method for tracking and guidance sensors and instruments and systems
US9519968B2 (en) 2012-12-13 2016-12-13 Hewlett-Packard Development Company, L.P. Calibrating visual sensors using homography operators
EP2767232A1 (en) 2013-02-15 2014-08-20 Koninklijke Philips N.V. System and method for determining a vital sign of a subject
US9317770B2 (en) * 2013-04-28 2016-04-19 Tencent Technology (Shenzhen) Co., Ltd. Method, apparatus and terminal for detecting image stability
US9579573B2 (en) 2013-06-10 2017-02-28 Pixel Press Technology, LLC Systems and methods for creating a playable video game from a three-dimensional model
KR20150116260A (en) 2014-04-07 2015-10-15 삼성전자주식회사 Method for marker tracking and an electronic device thereof
US9707595B2 (en) * 2015-12-16 2017-07-18 Waste Repurposing International, Inc. Household hazardous waste recovery

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5856844A (en) * 1995-09-21 1999-01-05 Omniplanar, Inc. Method and apparatus for determining position and orientation

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6782119B1 (en) * 2000-06-29 2004-08-24 Ernest Ross Barlett Space planning system
US7526122B2 (en) * 2001-07-12 2009-04-28 Sony Corporation Information inputting/specifying method and information inputting/specifying device

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5856844A (en) * 1995-09-21 1999-01-05 Omniplanar, Inc. Method and apparatus for determining position and orientation

Cited By (48)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030068074A1 (en) * 2001-10-05 2003-04-10 Horst Hahn Computer system and a method for segmentation of a digital image
US6985612B2 (en) * 2001-10-05 2006-01-10 Mevis - Centrum Fur Medizinische Diagnosesysteme Und Visualisierung Gmbh Computer system and a method for segmentation of a digital image
US20030218638A1 (en) * 2002-02-06 2003-11-27 Stuart Goose Mobile multimodal user interface combining 3D graphics, location-sensitive speech interaction and tracking technologies
US20040136567A1 (en) * 2002-10-22 2004-07-15 Billinghurst Mark N. Tracking a surface in a 3-dimensional scene using natural visual features of the surface
US20080232645A1 (en) * 2002-10-22 2008-09-25 Billinghurst Mark N Tracking a surface in a 3-dimensional scene using natural visual features of the surface
US7343278B2 (en) * 2002-10-22 2008-03-11 Artoolworks, Inc. Tracking a surface in a 3-dimensional scene using natural visual features of the surface
US7987079B2 (en) 2002-10-22 2011-07-26 Artoolworks, Inc. Tracking a surface in a 3-dimensional scene using natural visual features of the surface
US20070263923A1 (en) * 2004-04-27 2007-11-15 Gienko Gennady A Method for Stereoscopic Measuring Image Points and Device for Carrying Out Said Method
DE102005005242A1 (en) * 2005-02-01 2006-08-10 Volkswagen Ag Method for determining camera offset for a motor vehicle's augmented reality system, involving determining the offset between the camera marker's position and orientation in the reference frame and the camera's tabulated position and orientation in the reference frame
US8625854B2 (en) 2005-09-09 2014-01-07 Industrial Research Limited 3D scene scanner and a position and orientation system
US7787657B2 (en) * 2005-09-22 2010-08-31 Raytheon Company SAR ATR treeline extended operating condition
US20090174595A1 (en) * 2005-09-22 2009-07-09 Nader Khatib SAR ATR treeline extended operating condition
US20080062124A1 (en) * 2006-09-13 2008-03-13 Electronics And Telecommunications Research Institute Mouse interface apparatus using camera, system and method using the same, and computer recordable medium for implementing the same
US20080170750A1 (en) * 2006-11-01 2008-07-17 Demian Gordon Segment tracking in motion picture
WO2008073563A1 (en) * 2006-12-08 2008-06-19 Nbc Universal, Inc. Method and system for gaze estimation
US20080291272A1 (en) * 2007-05-22 2008-11-27 Nils Oliver Krahnstoever Method and system for remote estimation of motion parameters
US8355961B1 (en) 2007-08-03 2013-01-15 Sprint Communications Company L.P. Distribution center head-up display
US8558893B1 (en) 2007-08-03 2013-10-15 Sprint Communications Company L.P. Head-up security display
US20090290753A1 (en) * 2007-10-11 2009-11-26 General Electric Company Method and system for gaze estimation
US8055296B1 (en) 2007-11-06 2011-11-08 Sprint Communications Company L.P. Head-up display communication system and method
US8264422B1 (en) 2007-11-08 2012-09-11 Sprint Communications Company L.P. Safe head-up display of information
US7850067B1 (en) * 2007-11-27 2010-12-14 Sprint Communications Company L.P. Color bar codes
EP2172873A3 (en) * 2008-10-06 2012-11-21 Mobileye Vision Technologies Bundling of driver assistance systems
CN101650828B (en) 2009-09-07 2012-03-07 Southeast University Method for reducing random error of round object location in camera calibration
US9158777B2 (en) 2010-03-30 2015-10-13 Gravity Jack, Inc. Augmented reality methods and apparatus
US8882591B2 (en) 2010-05-14 2014-11-11 Nintendo Co., Ltd. Storage medium having image display program stored therein, image display apparatus, image display system, and image display method
US9256797B2 (en) 2010-06-11 2016-02-09 Nintendo Co., Ltd. Storage medium having image recognition program stored therein, image recognition apparatus, image recognition system, and image recognition method
US20110305368A1 (en) * 2010-06-11 2011-12-15 Nintendo Co., Ltd. Storage medium having image recognition program stored therein, image recognition apparatus, image recognition system, and image recognition method
US8731332B2 (en) * 2010-06-11 2014-05-20 Nintendo Co., Ltd. Storage medium having image recognition program stored therein, image recognition apparatus, image recognition system, and image recognition method
US9558557B2 (en) * 2010-09-09 2017-01-31 Qualcomm Incorporated Online reference generation and tracking for multi-user augmented reality
US20150193935A1 (en) * 2010-09-09 2015-07-09 Qualcomm Incorporated Online reference generation and tracking for multi-user augmented reality
US8818132B2 (en) * 2010-11-29 2014-08-26 Microsoft Corporation Camera calibration with lens distortion from low-rank textures
US20120133780A1 (en) * 2010-11-29 2012-05-31 Microsoft Corporation Camera calibration with lens distortion from low-rank textures
EP2492845A3 (en) * 2011-02-24 2014-08-20 Nintendo Co., Ltd. Image recognition program, image recognition apparatus, image recognition system, and image recognition method
CN103477348A (en) * 2011-04-21 2013-12-25 Microsoft Corporation Color channels and optical markers
EP2700040A2 (en) * 2011-04-21 2014-02-26 Microsoft Corporation Color channels and optical markers
EP2700040A4 (en) * 2011-04-21 2014-10-15 Microsoft Corp Color channels and optical markers
US20130050499A1 (en) * 2011-08-30 2013-02-28 Qualcomm Incorporated Indirect tracking
GB2499498B (en) * 2011-12-23 2018-01-03 Zappar Ltd Content identification and distribution
US8814048B2 (en) 2011-12-23 2014-08-26 Zappar Limited Content identification and distribution
GB2499498A (en) * 2011-12-23 2013-08-21 Zappar Ltd Locating identifier in image data and augmented reality view
US9338447B1 (en) * 2012-03-14 2016-05-10 Amazon Technologies, Inc. Calibrating devices by selecting images having a target having fiducial features
US9189856B1 (en) * 2013-03-13 2015-11-17 Electronic Scripting Products, Inc. Reduced homography for recovery of pose parameters of an optical apparatus producing image data with structural uncertainty
US9852512B2 (en) 2013-03-13 2017-12-26 Electronic Scripting Products, Inc. Reduced homography based on structural redundancy of conditioned motion
CN103218820A (en) * 2013-04-22 2013-07-24 Suzhou University of Science and Technology Camera calibration error compensation method based on multi-dimensional characteristics
US20160104323A1 (en) * 2014-10-10 2016-04-14 B-Core Inc. Image display device and image display method
CN104680535A (en) * 2015-03-06 2015-06-03 Nanjing University Calibration target, calibration system and calibration method for binocular direct-vision camera
WO2017105964A1 (en) * 2015-12-16 2017-06-22 Lucasfilm Entertainment Company Ltd. Multi-channel tracking pattern

Also Published As

Publication number Publication date Type
US20070133841A1 (en) 2007-06-14 application
US7809194B2 (en) 2010-10-05 grant

Similar Documents

Publication Publication Date Title
Gordon et al. What and where: 3D object recognition with accurate pose
Shashua et al. Relative affine structure: Canonical model for 3D from 2D geometry and applications
Mohring et al. Video see-through AR on consumer cell-phones
US7103212B2 (en) Acquisition of three-dimensional images by an active stereo technique using locally unique patterns
US6975755B1 (en) Image processing method and apparatus
US6844871B1 (en) Method and apparatus for computer input using six degrees of freedom
Naimark et al. Circular data matrix fiducial system and robust image processing for a wearable vision-inertial self-tracker
US6597818B2 (en) Method and apparatus for performing geo-spatial registration of imagery
US6587601B1 (en) Method and apparatus for performing geo-spatial registration using a Euclidean representation
US6765569B2 (en) Augmented-reality tool employing scene-feature autocalibration during camera motion
US20050104849A1 (en) Device and method for calculating a location on a display
Vuylsteke et al. Range image acquisition with a single binary-encoded light pattern
You et al. Fusion of vision and gyro tracking for robust augmented reality registration
US6624833B1 (en) Gesture-based input interface system with shadow detection
US20120092329A1 (en) Text-based 3d augmented reality
Rekimoto Matrix: A realtime object identification and registration method for augmented reality
US6911995B2 (en) Computer vision depth segmentation using virtual surface
US7780084B2 (en) 2-D barcode recognition
US20070001950A1 (en) Embedding a pattern design onto a liquid crystal display
US20060204035A1 (en) Method and apparatus for tracking a movable object
US6512857B1 (en) Method and apparatus for performing geo-spatial registration
US7023472B1 (en) Camera calibration using off-axis illumination and vignetting effects
Harville Stereo person tracking with adaptive plan-view templates of height and occupancy statistics
Wagner et al. Robust and unobtrusive marker tracking on mobile phones
US6011558A (en) Intelligent stitcher for panoramic image-based virtual worlds

Legal Events

Date Code Title Description
AS Assignment

Owner name: SIEMENS CORPORATE RESEARCH, INC., NEW JERSEY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ZHANG, XIANG;NAVAB, NASSIR;REEL/FRAME:013632/0564

Effective date: 20021202