WO2019093959A1 - A method of generating a three-dimensional mapping of an object - Google Patents

A method of generating a three-dimensional mapping of an object Download PDF

Info

Publication number
WO2019093959A1
Authority
WO
WIPO (PCT)
Prior art keywords
light pattern
symbol
patch
symbols
values
Prior art date
Application number
PCT/SE2018/051148
Other languages
French (fr)
Inventor
Tomas Christiansson
Original Assignee
Flatfrog Laboratories Ab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Flatfrog Laboratories Ab filed Critical Flatfrog Laboratories Ab
Publication of WO2019093959A1 publication Critical patent/WO2019093959A1/en

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B11/2513Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object with several lines being projected in more than one direction, e.g. grids, patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/521Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light

Definitions

  • the present invention relates to depth cameras and the real-time tracking of objects using depth cameras. More particularly, the invention relates to a method of generating a three-dimensional mapping of an object and a related system.
  • a real-time depth camera can determine the distance to an object in its field of view and update that distance for every frame of the camera. Numerous applications exist, including military, automotive, gaming and medical purposes.
  • One example of a real-time depth camera is a structured-light 3D scanner.
  • a structured-light 3D scanner works by illuminating the scene with a specially designed light pattern. Depth can then be determined using only a single image of the reflected light.
  • the structured light can be in the form of horizontal and/or vertical lines, points or checker board patterns.
  • WO2007043036 An example of a depth camera using structured light is shown in WO2007043036.
  • This disclosure describes a method of depth mapping by projecting, onto an object, a pattern of multiple spots having respective positions and shapes, such that the positions of the spots in the pattern are uncorrelated, while the shapes share a common characteristic.
  • An image of the spots on the object is captured and processed so as to derive a three-dimensional (3D) map of the object.
  • WO2007043036 describes using a moving window to scan the captured image and correlating it to the reference image. This process is slow and processor intensive.
  • "HyperDepth: Learning Depth from Structured Light Without Matching" by Fanello et al.
  • Fanello et al. describe the use of machine-learning techniques to determine patch shifts instead of pattern searching, reducing the computational cost.
  • the processing required to perform patch matching remains high.
  • One object is to provide a method of generating a three-dimensional mapping of an object requiring less computational resources.
  • a method of generating a three-dimensional mapping of an object is provided, comprising directing a transmitted light pattern onto the object, the transmitted light pattern comprising a plurality of symbols, the symbols encoding at least two possible values.
  • the method comprises receiving, at an imaging device, light comprising a reflected light pattern being reflected by the object from the transmitted light pattern, generating an output image in dependence on the light received at the imaging device, and processing the output image to generate a three-dimensional mapping of the object, by identifying a patch in the reflected light pattern, wherein a patch is a plurality of said symbols, being spatially interrelated, decoding the values of the symbols of the patch to determine a position of at least one symbol of the patch in the transmitted light pattern, determining a position of the at least one symbol of the patch in the reflected light pattern, generating a displacement value corresponding to the displacement of said position of the at least one symbol of the patch in the reflected light pattern relative to said position of the at least one symbol of the patch in the transmitted light pattern, and generating the three-dimensional mapping of the object in dependence on the displacement value.
  • a system for generating a three-dimensional mapping of an object is provided, comprising a light source configured to direct a transmitted light pattern onto the object, the transmitted light pattern comprising a plurality of symbols, the symbols encoding at least two possible values.
  • the system comprises an imaging device configured to receive light comprising a reflected light pattern being reflected by the object from the transmitted light pattern, and a processing unit configured to generate an output image in dependence on the light received at the imaging device and process the output image to generate a three-dimensional mapping of the object, by being configured to identify a patch in the reflected light pattern, wherein a patch is a plurality of said symbols, being spatially interrelated, decode the values of the symbols of the patch to determine a position of at least one symbol of the patch in the transmitted light pattern, determine a position of the at least one symbol of the patch in the reflected light pattern, generate a displacement value corresponding to the displacement of said position of the at least one symbol of the patch in the reflected light pattern relative to said position of the at least one symbol of the patch in the transmitted light pattern, and generate the three-dimensional mapping of the object in dependence on the displacement value.
  • a computer program product comprising instructions which, when the program is executed by a computer, cause the computer to carry out the steps of the method according to the first aspect.
  • In a fourth aspect, use of the method according to the first aspect or the system according to the second aspect is provided, for real-time tracking of a touch input device, or of a user providing touch input, in a touch-based interaction system.
  • In a fifth aspect, use of the method according to the first aspect or the system according to the second aspect is provided, for determining a three-dimensional mapping of a warped glass surface.
  • Some examples of the disclosure provide for three-dimensional mapping of an object requiring less computational resources.
  • Some examples of the disclosure provide for quicker three-dimensional mapping of an object.
  • Some examples of the disclosure provide for a more robust three-dimensional mapping of an object.
  • Some examples of the disclosure provide for facilitated tracking of a three-dimensional object.
  • Fig. 1 is a schematic illustration of a system for generating a three-dimensional mapping of an object, according to examples of the disclosure;
  • Fig. 2 is a schematic illustration of a transmitted light pattern in a system for generating a three-dimensional mapping of an object, according to examples of the disclosure;
  • Fig. 3 is a schematic illustration of a set of patches in the transmitted light pattern in a system for generating a three-dimensional mapping of an object, according to examples of the disclosure;
  • Fig. 4 is a magnified portion of a patch in the transmitted light pattern of Fig. 3, in a system for generating a three-dimensional mapping of an object, according to examples of the disclosure;
  • Fig. 5 is a schematic illustration of a transmitted light pattern, and a reflected light pattern, being reflected onto an object, for generating a three-dimensional mapping of the object, according to examples of the disclosure;
  • Fig. 6 is a schematic illustration of a reflected light pattern, having displaced symbols in relation to a transmitted light pattern, according to examples of the disclosure;
  • Fig. 7a is a flowchart of a method of generating a three-dimensional mapping of an object, according to examples of the disclosure;
  • Fig. 7b is another flowchart of a method of generating a three-dimensional mapping of an object, according to examples of the disclosure;
  • Fig. 8a is a flowchart of a method for locating a symbol of a patch within the entire projected pattern;
  • Fig. 8b is a flowchart of another method for locating a symbol of a patch within the entire projected pattern;
  • Fig. 9a is a flowchart with example numbers of a method for locating a symbol of a patch within the entire projected pattern;
  • Fig. 9b is a flowchart with example numbers of another method for locating a symbol of a patch within the entire projected pattern.
  • Fig. 1 is a schematic illustration of a system 200 for generating a three-dimensional mapping of an object 301.
  • a related method 100 is provided, as described further with reference to the flow-chart of Figs. 7a-b.
  • the system 200 comprises a light source 208 configured to direct a transmitted light pattern 201 onto an object 301.
  • the transmitted light pattern 201 comprises a plurality of symbols 203, 203', 203'', 203'''.
  • Fig. 2 is a schematic illustration of such a transmitted light pattern 201 of symbols 203, 203', 203'', 203''' (in short referred to as 'symbols 203' below).
  • the symbols 203 may assume various shapes and have different orientations in the transmitted light pattern 201, beyond the shapes illustrated in the example of Fig. 2.
  • the symbols 203 encode at least two possible values. That is, by having at least two different symbols 203, e.g. as exemplified with symbols 203 and 203' in the magnified portion in Fig. 4, it is possible to associate each of the two symbols 203, 203', with a unique value, such as a binary value.
  • the two symbols 203, 203' are represented by diagonally arranged dots or squares, rotated 90 degrees relative to each other to represent two different binary values 0 (00) and 1 (10).
  • the system 200 comprises an imaging device 204 configured to receive light comprising a reflected light pattern 202 being reflected by the object 301 from the transmitted light pattern 201.
  • Fig. 5 is a schematic illustration of the transmitted light pattern 201 being directed onto the object 301, and the resulting reflected light pattern 202 being reflected by the object 301, which is received by the imaging device 204.
  • the system 200 comprises a processing unit 209 configured to generate 103 an output image 205 in dependence on the light received at the imaging device 204.
  • the processing unit 209 is configured to process 104 the output image 205 to generate a three-dimensional mapping of the object 301, by being configured to identify 105 a patch 206 in the reflected light pattern 202.
  • a patch 206 is a collection of symbols 203, being spatially interrelated, in the reflected light pattern 202, as illustrated in Fig. 5.
  • the transmitted light pattern 201 has corresponding patches 207 of symbols 203, as further illustrated in the enlarged view of Fig. 6.
  • the plurality of symbols 203 uniquely identifies each patch 206, 207.
  • the processing unit 209 is configured to decode 106 the values of the symbols 203 of the patch 206 to determine a position (x1, y1) of at least one symbol (e.g. symbol 203' in Fig. 6) of the patch 206 in the transmitted light pattern 201.
  • the position (x1, y1) of the symbol 203' in the transmitted light pattern 201 can be identified by decoding the values (e.g. a series of unique binary values as exemplified above) associated with the symbols 203 of the reflected light pattern 202.
  • the position (x1, y1) of the at least one symbol 203' in the transmitted light pattern 201 thus serves as the reference, i.e. the non-displaced position.
  • the processing unit 209 is further configured to determine 107 the position (x2, y2) of the at least one symbol 203' of the patch 206 in the reflected light pattern 202, i.e. by determining the location of the symbol 203' in the retrieved image data.
  • the symbols 203 in the patch 206 of the reflected light pattern 202 will be displaced depending on the geometry of the object 301, where the light is reflected, as further exemplified in Fig. 6.
  • the processing unit 209 is configured to generate 108 a displacement value corresponding to the displacement of the position (x2, y2) of the at least one symbol 203' of the patch 206 in the reflected light pattern 202 relative to the position (x1, y1) of the at least one symbol 203' of the patch 207 in the transmitted light pattern 201.
  • a first symbol 203' in the patches 206, 207, of Fig. 6 may be displaced a distance 211 as illustrated in the figure.
  • the displacement of the symbols 203 may include any change of position or geometry thereof.
  • the amount of displacement may vary between the symbols 203 of the patch 206.
  • the processing unit 209 is further configured to generate 109 the three- dimensional mapping of the object 301 in dependence on the displacement value.
  • the positions (x1, y1) of the symbols 203 in the non-displaced light pattern, i.e. in the transmitted light pattern 201, may be directly encoded into the symbols 203.
  • the displacement of the symbols 203 may thus be determined by decoding the values to retrieve the respective reference positions (x1, y1) and comparing them to the current positions (x2, y2) seen in the reflected light pattern 202.
  • the computational resources needed for the three-dimensional mapping of the object 301 can thus be reduced significantly. This provides for a faster generation of the three-dimensional mapping of the object 301, which allows e.g. for improved tracking of a three-dimensional object in space in real time. The 3D mapping is also more readily implementable and can be realized in less complex systems. A more robust 3D mapping may also be provided, since errors that would otherwise occur through the image-recognition processes of multiple images can be avoided.
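The disclosure does not fix a displacement-to-depth formula; under the usual pinhole triangulation model for structured light (an assumption here, not a relation stated in the patent), depth is inversely proportional to the symbol displacement along the baseline:

```python
def depth_from_displacement(displacement_px: float,
                            focal_length_px: float,
                            baseline_mm: float) -> float:
    """Classic structured-light triangulation: a symbol displaced by
    displacement_px pixels between the transmitted and reflected
    patterns lies at depth f * b / d. Parameter names are illustrative;
    the patent does not prescribe this model."""
    return focal_length_px * baseline_mm / displacement_px
```

With a focal length of 500 px and a 50 mm baseline, a 10 px displacement corresponds to a depth of 2500 mm.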
  • the light source 208 may emit light in near-infrared (NIR) wavelengths, and the imaging device 204 may be configured to detect such wavelengths.
  • the imaging device 204 may be configured to only detect NIR wavelengths.
  • the light source 208 may be LCD based.
  • the symbols 203 may be generated by a fixed interference filter.
  • the LCD pixels in an LCD light source 208 may be selectively lit to generate the symbols 203 of various shapes.
  • Fig. 7a illustrates a flow chart of a method 100 of generating a three-dimensional mapping of an object 301.
  • the order in which the steps of the method 100 are described and illustrated should not be construed as limiting and it is conceivable that the steps can be performed in varying order.
  • the method 100 comprises directing 101 a transmitted light pattern 201 onto the object 301.
  • the transmitted light pattern 201 is designed to comprise a plurality of symbols 203, 203', 203", 203"', encoding at least two possible values.
  • the method 100 comprises receiving 102, at an imaging device 204, light comprising a reflected light pattern 202 being reflected by the object 301 from the transmitted light pattern 201.
  • the method 100 comprises generating 103 an output image 205 in dependence on the light received at the imaging device 204, and processing 104 the output image to generate a three-dimensional mapping of the object 301, by identifying 105 a patch 206 in the reflected light pattern 202, wherein a patch is a plurality of said symbols 203, being spatially interrelated.
  • the method 100 comprises decoding 106 the values of the symbols 203 of the patch 206 to determine a position of at least one symbol 203 of the patch 206 in the transmitted light pattern 201, and determining 107 a position of the at least one symbol 203 of the patch 206 in the reflected light pattern 202.
  • the method 100 comprises generating 108 a displacement value corresponding to the displacement of said position of the at least one symbol 203 of the patch in the reflected light pattern 202 relative to said position of the at least one symbol 203 of the patch in the transmitted light pattern 201.
  • the method 100 comprises generating 109 the three-dimensional mapping of the object 301 in dependence on the displacement values. The method 100 thus provides for the advantageous benefits as described above in relation to the system 200 and Figs. 1 - 6.
  • Fig. 7b illustrates a further flow chart of a method 100 of generating a three-dimensional mapping of an object 301.
  • the order in which the steps of the method 100 are described and illustrated should not be construed as limiting and it is conceivable that the steps can be performed in varying order.
  • the method 100 may comprise decoding 107' address data for the at least one symbol 203 of the patch 206 in the reflected light pattern 202 based on the aforementioned values. That is, each symbol 203 may be associated with a unique value, such as a binary value, amongst a plurality of different values that may be assigned to the symbol 203 in dependence on a shape and/or orientation of the symbol 203.
  • the unique values may thus be utilized as address data, uniquely identifying the position of each symbol 203 in the transmitted light pattern 201 as described above.
  • the address data of a symbol 203 may be defined by the symbol 203 alone, in dependence of the shape and/or orientation thereof, or by a plurality of adjacent or surrounding symbols 203, as explained further below.
  • the method 100 may thus comprise determining the position of the at least one symbol 203 of the patch 206 in the transmitted light pattern 201 based on the address data.
  • the method 100 may comprise determining 107" the address data for a first symbol 203' based on the values of a group of neighbouring symbols adjacent the first symbol 203'.
  • Fig. 4 shows an example where a first symbol 203', positioned in the centre of the patch 207, is surrounded by a plurality of neighbouring symbols, e.g. 203, 203'', 203'''.
  • each neighbouring symbol is associated with a value in dependence on its shape.
  • a symbol 203'' to the left of the first symbol 203' is formed of two adjacent squares or dots arranged as a horizontal line, a symbol 203''' at the upper left is a vertical line, and a symbol 203 at the upper right corner in the figure assumes an intermediate angled orientation, being angled 90 degrees with respect to the first symbol 203'.
  • each of the shapes of the symbols is in this example associated with a corresponding binary value, i.e. 0 (00), 1 (10), 2 (01), and 3 (11) for symbols 203, 203', 203'', and 203''' respectively.
  • a group of neighbouring symbols surrounding the first symbol 203' may define a unique address of the first symbol 203', that can be utilized for defining a reference position (x1, y1) (i.e. a non-displaced position) of the first symbol 203' in the transmitted light pattern 201.
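Decoding a symbol's value from its measured orientation can then be a simple quantization. The sketch below assumes four orientations 45 degrees apart and one particular orientation-to-value assignment, which is illustrative rather than taken from Fig. 4:

```python
def angle_to_value(angle_deg: float) -> int:
    """Quantize a symbol's major-axis angle (0-180 degrees) to the
    nearest of four orientations spaced 45 degrees apart, giving a
    2-bit value 0-3. The assignment of orientation to value is an
    assumption for illustration."""
    return round((angle_deg % 180) / 45) % 4
```

For instance, angles near 0, 45, 90 and 135 degrees map to the values 0, 1, 2 and 3.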
  • the respective addresses may subsequently be retrieved for the symbols 203 in the reflected light pattern 202, by decoding e.g. the binary values of the different shapes, and the reference positions (x1, y1) associated with the different addresses may be retrieved, e.g. from stored look-up tables, and compared to the new positions (x2, y2) for determining the displacement resulting from the reflection by the object 301.
  • the number of neighbouring symbols utilized for defining an address of a particular symbol 203 may be varied depending on the application.
  • the addresses and positions may also be determined for a varying number of symbols 203. For example, it may not be necessary to determine the addresses and positions for all symbols 203 in order to obtain sufficient displacement information to generate the three-dimensional mapping of the object 301.
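The look-up-table comparison described above can be sketched as follows; the addresses and reference positions in this snippet are invented for illustration:

```python
# Reference (non-displaced) positions (x1, y1) keyed by decoded symbol
# address; the keys and coordinates here are invented for illustration.
reference_positions = {
    0b1001: (12.0, 8.0),
    0b0110: (16.0, 8.0),
}

def symbol_displacement(address, observed_xy, table=reference_positions):
    """Displacement (dx, dy) of one symbol: observed position (x2, y2)
    in the reflected pattern minus the stored reference (x1, y1)."""
    x1, y1 = table[address]
    x2, y2 = observed_xy
    return (x2 - x1, y2 - y1)
```

A symbol decoded to address 0b1001 but observed at (13.5, 8.0) has thus been displaced by 1.5 in x.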
  • the method 100 may comprise determining 107''' the position of the at least one symbol 203 in the transmitted light pattern 201 in two dimensions based on the address data. This provides for conveniently defining the position of a particular symbol 203 in two-dimensional image data.
  • the method 100 may comprise decoding 107'''' digital values such as 1- or 2-bit digital values from each symbol 203.
  • the example of Fig. 4 illustrates encoding of 2-bit digital values in the symbols 203, by utilizing the various shapes thereof. It is also conceivable that only two different shapes may be used to encode 1-bit values, as well as more complex variations in the shapes of the symbols for higher-bit values.
  • the method 100 may comprise decoding a 2-bit digital value from each symbol 203, determining a position of the symbol 203 in a first dimension based on the first bit, and determining a position of the symbol 203 in a second dimension based on the second bit.
  • the method 100 may comprise compensating 110 the displacement value over the reflected light pattern 202 based on error correcting information in the transmitted light pattern 201. It is also conceivable that other data such as the encoded addresses of the symbols may be compensated based on the error correcting information. It is thus possible to encode secondary data in the transmitted light pattern 201, in addition to primary address- and position data for the symbols 203 as described above, such as data that may correct for any variances or deviations in the system 200, e.g. in the optical components thereof. This provides for embedded error correction in the transmitted light pattern 201, which may also be updated over time. The y-position may be used for error correction, since the displacement occurs in the same direction as the distance between the imaging device 204 and the light source 208.
  • the method 100 may comprise determining 108' the centre position of the at least one symbol 203 of the patch 206 in the reflected light pattern 202 for determining the displacement value.
  • the displacement of the symbols 203 may result in shifting of the positions of the centre positions thereof, as exemplified in Fig. 6 where symbol 203' is shifted a distance 211.
  • determining of the centre positions provides for quantifying the displacement.
  • Displacements such as rotational movements of the symbols 203 may also be determined.
  • the centre positions may be determined by using sub-pixel algorithms by optimizing a known function (function fitting) or computing the centre-of-gravity of the symbols.
  • An example of a method for determining the centre of the symbol (or centroid) is the use of image moments, i.e. determining a weighted average (moment) of the symbol pixels' intensities and using it to determine the centre of the symbol.
  • The orientation of the symbol can also be derived, for example by using the second-order central moments to construct a covariance matrix and then determining the orientation of the symbol from the eigenvectors of this matrix.
  • the orientation of the imaged symbols is stored as a 2-bit value.
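The centroid-and-moments computation can be sketched in plain Python; `patch` is assumed to be a small 2-D intensity array cropped around one imaged symbol, and the orientation is obtained directly from the second-order central moments (equivalent, in the 2-D case, to the covariance-eigenvector approach described above):

```python
import math

def symbol_centre_and_angle(patch):
    """patch: 2-D list of pixel intensities covering one imaged symbol.
    Centre: intensity-weighted centroid (zeroth/first image moments).
    Orientation: major-axis angle from the second-order central
    moments, returned in degrees in [0, 180)."""
    h, w = len(patch), len(patch[0])
    m = sum(sum(row) for row in patch)  # zeroth moment (total intensity)
    cx = sum(x * patch[y][x] for y in range(h) for x in range(w)) / m
    cy = sum(y * patch[y][x] for y in range(h) for x in range(w)) / m
    # second-order central moments about the centroid
    mu20 = sum((x - cx) ** 2 * patch[y][x] for y in range(h) for x in range(w)) / m
    mu02 = sum((y - cy) ** 2 * patch[y][x] for y in range(h) for x in range(w)) / m
    mu11 = sum((x - cx) * (y - cy) * patch[y][x] for y in range(h) for x in range(w)) / m
    angle = 0.5 * math.atan2(2 * mu11, mu20 - mu02)  # major-axis angle, radians
    return (cx, cy), math.degrees(angle) % 180
```

A horizontal bar of pixels yields an angle of 0 degrees and a vertical bar 90 degrees, which can then be quantized to a 2-bit value.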
  • the method 100 may comprise calculating 109' the coordinates of the three-dimensional mapping of the object 301 based on displaced positions of the symbols 203 in the patch 206 of the reflected light pattern 202 relative to calibrated positions of the symbols 203 in the transmitted light pattern 201.
  • Step 105 comprises identifying a patch 206 in the reflected light pattern 202 to be decoded, wherein the patch 206 is a 3 by 3 matrix of said symbols 203, as shown in figure 9a, for example.
  • Step 106 comprises decoding the values of the symbols 203 of the patch 206. This is achieved by computing a centre position and angle of each symbol, preferably in the manner described above.
  • step 107 comprises a sequence of sub-steps.
  • Step 310 comprises converting the angle of the respective symbol determined in step 106 to a 2-bit binary value.
  • Step 320a comprises extracting bit 0 of the angle of the respective symbol of the patch 206 to form a 'bit 0 patch'.
  • Step 330a comprises converting vertical columns of the 'bit 0 patch' bits from top to bottom to binary numbers.
  • Step 340a comprises determining the positions of the binary numbers within a repeating binary pattern.
  • the repeating pattern is "00010111".
  • the pattern may be "11101000".
  • Step 350a comprises computing circular rotation between column 0 and column 1, and column 1 and column 2, each rotation can have 8 different values.
  • the numbers shown in figure 9a are modulo 8 (i.e. 0-7).
  • Step 360a comprises using the values from step 350a to compute a horizontal position of the top left symbol of patch 206. In this embodiment, this is achieved by determining the position of the values from step 350a in a repeating pattern.
  • the pattern used to determine the position values in figure 9a is "0, 0, 1, 0, 2, 0, 3, 0, 4, 0, 5, 0, 6, 0, 7, 1, 1, 2, 1, 3, 1, 4, 1, 5, 1, 6, 1, 7, 2, 2, 3, 2, 4, 2, 5, 2, 6, 2, 7, 3, 3, 4, 3, 5, 3, 6, 3, 7, 4, 4, 5, 4, 6, 4, 7, 5, 5, 6, 5, 7, 6, 6, 7, 7".
  • for the vertical position, steps 320b-360b are performed:
  • Step 320b comprises extracting bit 1 of the angle of the respective symbol of the patch 206 to form a 'bit 1 patch'.
  • Step 330b comprises converting horizontal rows of the 'bit 1 patch' bits from left to right to binary numbers.
  • Step 340b comprises determining the positions of the binary numbers within a repeating binary pattern.
  • the repeating pattern may be "00010111".
  • Step 350b comprises computing circular rotation between row 0 and row 1, and row 1 and row 2, each rotation can have 8 different values.
  • the numbers shown in figure 9a are modulo 8 (i.e. 0-7).
  • Step 360b comprises using the values from step 350b to compute a vertical position of the top left symbol of patch 206. In this embodiment, this is achieved by determining the position of the values from step 350b in a repeating pattern.
  • the pattern used to determine the position values in figure 9a may be "0, 0, 1, 0, 2, 0, 3, 0, 4, 0, 5, 0, 6, 0, 7, 1, 1, 2, 1, 3, 1, 4, 1, 5, 1, 6, 1, 7, 2, 2, 3, 2, 4, 2, 5, 2, 6, 2, 7, 3, 3, 4, 3, 5, 3, 6, 3, 7, 4, 4, 5, 4, 6, 4, 7, 5, 5, 6, 5, 7, 6, 6, 7, 7".
  • [0, 5] is located at position 9 in this pattern.
  • a lookup table is used to provide the corresponding values. The result is that the values from step 350b provide 64 different vertical positions.
  • Step 370 comprises combining horizontal and vertical positions to determine absolute position.
  • Step 108 comprises determining the displacement of each symbol's absolute position against the projected position to determine the depth of each symbol.
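The column-decoding steps above (330a-360a) can be sketched as follows. The sketch assumes "00010111" as the repeating pattern, in which every 3-bit column value occurs at a unique cyclic position, and the 64-entry sequence of Fig. 9a, in which each ordered pair of rotation values occurs at a unique cyclic position. Function names are illustrative:

```python
DEBRUIJN_8 = "00010111"  # every 3-bit window occurs at a unique cyclic position

# 64-entry sequence of Fig. 9a; the rotation pair (a, b) at cyclic
# position i satisfies a == PAIR_SEQ[i] and b == PAIR_SEQ[(i + 1) % 64].
PAIR_SEQ = [0,0,1,0,2,0,3,0,4,0,5,0,6,0,7,1,1,2,1,3,1,4,1,5,1,6,1,7,
            2,2,3,2,4,2,5,2,6,2,7,3,3,4,3,5,3,6,3,7,4,4,5,4,6,4,7,5,
            5,6,5,7,6,6,7,7]

def pattern_position(bits, pattern=DEBRUIJN_8):
    """Step 340a: position of a 3-bit column value within the repeating
    pattern, reading windows cyclically."""
    doubled = pattern + pattern[:2]
    return doubled.index("".join(str(b) for b in bits))

def horizontal_position(bit0_patch):
    """Steps 330a-360a for a 3x3 'bit 0 patch' (list of rows): columns
    read top to bottom become 3-bit numbers, their pattern positions
    give two circular rotations mod 8, and the rotation pair is located
    in PAIR_SEQ to yield the horizontal position of the top-left symbol."""
    cols = [pattern_position([bit0_patch[r][c] for r in range(3)])
            for c in range(3)]
    r01, r12 = (cols[1] - cols[0]) % 8, (cols[2] - cols[1]) % 8
    for i in range(64):
        if PAIR_SEQ[i] == r01 and PAIR_SEQ[(i + 1) % 64] == r12:
            return i
    raise ValueError("rotation pair not found")
```

For instance, a patch whose columns read "000", "000", "111" has pattern positions 0, 0 and 5, hence rotations (0, 5), which the sequence locates at position 9, matching the example above.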
  • Step 105 comprises identifying a patch 206 in the reflected light pattern 202 to be decoded, wherein the patch 206 is a 3 by 3 matrix of said symbols 203, as shown in figure 8b, for example.
  • Step 106 comprises decoding the values of the symbols 203 of the patch 206. This is achieved by computing a centre position and angle of each symbol, preferably in the manner described above.
  • step 107 comprises a sequence of sub-steps.
  • Step 430a comprises converting vertical columns of the 'bit 0 patch' bits from top to bottom to binary numbers.
  • Step 440a comprises determining the positions of the binary numbers within a repeating binary pattern.
  • the repeating pattern is "00010111".
  • the pattern may be "11101000".
  • Step 450a comprises computing circular rotation between column 0 and column 1, and column 1 and column 2, each rotation can have 8 different values.
  • the numbers shown in figure 9b are modulo 8 (i.e. 0-7).
  • Step 460a comprises using the values from step 450a to compute a local horizontal position of the top left symbol of patch 206. In this embodiment, this is achieved by determining the position of the values from step 450a in a repeating pattern.
  • the pattern used to determine the position values in figure 9b may be "0, 0, 1, 0, 2, 0, 3, 0, 4, 0, 5, 0, 6, 0, 7, 1, 1, 2, 1, 3, 1, 4, 1, 5, 1, 6, 1, 7, 2, 2, 3, 2, 4, 2, 5, 2, 6, 2, 7, 3, 3, 4, 3, 5, 3, 6, 3, 7, 4, 4, 5, 4, 6, 4, 7, 5, 5, 6, 5, 7, 6, 6, 7, 7".
  • [6, 6] is located at position 60 in this pattern.
  • a lookup table is used to provide the corresponding values. The result is that the values from step 450a provide 64 different local horizontal positions.
  • Step 420b comprises extracting bit 1 of the angle of the respective symbol of the patch 206 to form a 'bit 1 patch'.
  • Step 430b comprises converting horizontal rows of the 'bit 1 patch' bits from left to right to binary numbers.
  • Step 440b comprises determining the positions of the binary numbers within a repeating binary pattern of 7 bits (unlike the 8-bit pattern of step 440a).
  • the repeating pattern may be "0001011".
  • Step 450b comprises computing circular rotation between row 0 and row 1, and row 1 and row 2, each rotation can have 7 different values.
  • the numbers shown in figure 9b are modulo 7 (i.e. 0-6).
  • Step 460b comprises using the values from step 450b to compute a local vertical position of the top left symbol of patch 206. In this embodiment, this is achieved by determining the position of the values from step 450b in a repeating pattern.
  • the pattern used to determine the position values may be "0, 0, 1, 0, 2, 0, 3, 0, 4, 0, 5, 0, 6, 1, 1, 2, 1, 3, 1, 4, 1, 5, 1, 6, 2, 2, 3, 2, 4, 2, 5, 2, 6, 3, 3, 4, 3, 5, 3, 6, 4, 4, 5, 4, 6, 5, 5, 6, 6".
  • [0, 4] is located at position 7 in this pattern.
  • a lookup table is used to provide the corresponding values. The result is that the values from step 450b provide 49 different local vertical positions.
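The row code of steps 440b-460b mirrors the column code but over a 7-position cycle: each 3-bit window of "0001011" occurs at a unique cyclic position, and the 49-entry sequence above locates each ordered pair of row rotations. A minimal sketch, with the sequence transcribed from the description:

```python
DEBRUIJN_7 = "0001011"  # seven distinct 3-bit windows, one per cyclic position

# 49-entry sequence from the description; the rotation pair (a, b) at
# cyclic position i is (ROW_PAIR_SEQ[i], ROW_PAIR_SEQ[(i + 1) % 49]).
ROW_PAIR_SEQ = [0,0,1,0,2,0,3,0,4,0,5,0,6,1,1,2,1,3,1,4,1,5,1,6,2,
                2,3,2,4,2,5,2,6,3,3,4,3,5,3,6,4,4,5,4,6,5,5,6,6]

def local_vertical_position(r01, r12):
    """Steps 450b-460b: locate the pair of row rotations (each mod 7)
    in the 49-entry sequence to obtain the local vertical position of
    the top-left symbol."""
    for i in range(49):
        if ROW_PAIR_SEQ[i] == r01 and ROW_PAIR_SEQ[(i + 1) % 49] == r12:
            return i
    raise ValueError("rotation pair not found")
```

As in the example above, the rotation pair (0, 4) is found at position 7.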
  • Step 470a comprises calculating a global horizontal position:
  • Step 470b comprises, in a similar process to step 470a, calculating a global vertical position:
  • Step 480 comprises combining global horizontal and vertical positions of the top left symbol of patch 206 to determine absolute global position of the symbol.
  • Step 108 comprises determining the displacement of each symbol's absolute global position against the projected position to determine the depth of each symbol.
  • the symbols 203 may comprise elongated shapes, and the associated values may be defined by a direction in which the elongated shape extends, as described in relation to Fig. 4.
  • the elongated shape of a symbol 203 may comprise a continuous shape, such as a line or bar (see e.g. symbol 203" in Fig. 4), or a plurality of adjacent shapes, such as a plurality of dots or lines collectively forming a symbol 203.
  • the various shapes of the symbols may be optimized to facilitate distinguishing the shapes from one another, and thereby speed up the identification process.
  • the symbols 203 may comprise encoded address data, and the address of a first symbol 203' may be defined by the values of a group of neighbouring symbols adjacent the first symbol 203'.
  • the group of neighbouring symbols may comprise sub-groups of adjacent duplicate symbols. Such redundancy provides for facilitating identification in case some symbols 203 are incorrectly analysed.
  • the values of the symbols 203 may be cyclically repeated over predefined groups of symbols 203 in the transmitted light pattern 201. For example, any 3x3 symbols 203 may be used as neighbouring groups of symbols.
  • a separation distance between the symbols 203 in the transmitted light pattern 201 may vary to define groups of symbols with reduced separation as the groups of neighbouring symbols.
  • the separation between the symbols 203 may be varied to facilitate the identification, e.g. by having 'islands' of symbols in the transmitted light pattern 201, where each island encodes the address of an associated symbol.
  • a computer program product comprising instructions which, when the program is executed by a computer, cause the computer to carry out the steps of the method 100.
  • the method 100 or system 200 as described above in relation to Figs. 1 - 7 may be used for real-time tracking of a touch input device, or of a user providing touch input, in a touch-based interaction system. This provides for improving the real-time tracking and optimizing the touch performance in such touch-based systems.
  • the method 100 or system 200 as described above in relation to Figs. 1 - 7 may be used for determining a three-dimensional mapping of a warped glass surface, in e.g. touch-based interaction systems.
  • touch performance in such systems may be further enhanced by the improved method 100 and system 200 allowing for facilitated three-dimensional modelling and characterization of glass surfaces in touch systems.

Abstract

A method of generating a 3D mapping of an object is disclosed, comprising directing a transmitted light pattern onto the object, the transmitted light pattern comprising a plurality of symbols encoding at least two possible values, receiving light comprising a reflected light pattern being reflected by the object, identifying a patch of symbols in the reflected light pattern, decoding the values of the symbols of the patch to determine a position of a symbol of the patch in the transmitted light pattern, determining a position of the symbol of the patch in the reflected light pattern, generating a displacement value corresponding to the displacement of said position of the symbol of the patch in the reflected light pattern relative to said position of the symbol of the patch in the transmitted light pattern, and generating the three-dimensional mapping of the object in dependence on the displacement value.

Description

A method of generating a three-dimensional mapping of an object

Technical Field
The present invention relates to depth cameras and the real-time tracking of objects using depth cameras. More particularly, the invention relates to a method of generating a three-dimensional mapping of an object and a related system.
Background Art
A real-time depth camera can determine the distance to an object in a field of view of the camera, and update the distance for every frame of the camera. Numerous applications exist, including military, automotive, gaming and medical purposes. One example of a real-time depth camera is a structured-light 3D scanner. A structured-light 3D scanner works by illuminating the scene with a specially designed light pattern. Depth can then be determined using only a single image of the reflected light. The structured light can be in the form of horizontal and/or vertical lines, points or checkerboard patterns.
An example of a depth camera using structured light is shown in WO2007043036. This disclosure describes a method of depth mapping by projecting, onto an object, a pattern of multiple spots having respective positions and shapes, such that the positions of the spots in the pattern are uncorrelated, while the shapes share a common characteristic. An image of the spots on the object is captured and processed so as to derive a three-dimensional (3D) map of the object. Essentially, this is the process of projecting a pseudo-random dot pattern to illuminate a scene. The illuminated scene is captured with a camera. Patches (e.g. 3x3, 4x4, or 6x6 pixel portions) from the captured image are matched to patches in the structured light pattern. The disparity of the captured patch is then computed using the shift along the x-axis of the captured patch relative to the corresponding patch of the structured light pattern.
A problem identified with this type of depth mapping is the very high processing requirement needed to match patches from the captured image to the structured light pattern. WO2007043036 describes using a moving window to scan the captured image and correlating it to the reference image. This process is slow and processor intensive.
"HyperDepth: Learning Depth from Structured Light Without Matching by Fanello et al" describes the use of machine learning techniques to determine patch shifts instead of pattern searching to reduce computational cost and end result. However, the processing required to perform patch matching remains high.
What is needed is an optimised method of matching patches of the captured image to the structured light pattern in order to speed up depth map generation.

Summary
It is an objective of the invention to at least partly overcome one or more limitations of the prior art.
One object is to provide a method of generating a three-dimensional mapping of an object requiring less computational resources.
One or more of these objectives, and other objectives that may appear from the description below, are at least partly achieved by a method of generating a three-dimensional mapping of an object and a related system according to the independent claims, embodiments thereof being defined by the dependent claims.
According to a first aspect a method of generating a three-dimensional mapping of an object is provided, comprising directing a transmitted light pattern onto the object, the transmitted light pattern comprising a plurality of symbols, the symbols encoding at least two possible values. The method comprises receiving, at an imaging device, light comprising a reflected light pattern being reflected by the object from the transmitted light pattern, generating an output image in dependence on the light received at the imaging device, processing the output image to generate a three-dimensional mapping of the object, by identifying a patch in the reflected light pattern, wherein a patch is a plurality of said symbols, being spatially interrelated, decoding the values of the symbols of the patch to determine a position of at least one symbol of the patch in the transmitted light pattern, determining a position of the at least one symbol of the patch in the reflected light pattern, generating a displacement value corresponding to the displacement of said position of the at least one symbol of the patch in the reflected light pattern relative to said position of the at least one symbol of the patch in the transmitted light pattern, and generating the three-dimensional mapping of the object in dependence on the displacement value.
According to a second aspect a system for generating a three-dimensional mapping of an object is provided, comprising a light source configured to direct a transmitted light pattern onto the object, the transmitted light pattern comprising a plurality of symbols, the symbols encoding at least two possible values. The system comprises an imaging device configured to receive light comprising a reflected light pattern being reflected by the object from the transmitted light pattern, and a processing unit configured to generate an output image in dependence on the light received at the imaging device, and to process the output image to generate a three-dimensional mapping of the object, by being configured to identify a patch in the reflected light pattern, wherein a patch is a plurality of said symbols, being spatially interrelated, decode the values of the symbols of the patch to determine a position of at least one symbol of the patch in the transmitted light pattern, determine a position of the at least one symbol of the patch in the reflected light pattern, generate a displacement value corresponding to the displacement of said position of the at least one symbol of the patch in the reflected light pattern relative to said position of the at least one symbol of the patch in the transmitted light pattern, and generate the three-dimensional mapping of the object in dependence on the displacement value.
According to a third aspect a computer program product is provided comprising instructions which, when the program is executed by a computer, cause the computer to carry out the steps of the method according to the first aspect.
According to a fourth aspect use of the method according to the first aspect or the system according to the second aspect is provided, for real-time tracking of a touch input device, or of a user providing touch input, in a touch-based interaction system.
According to a fifth aspect use of the method according to the first aspect or the system according to the second aspect is provided, for determining a three-dimensional mapping of a warped glass surface.
Further examples of the invention are defined in the dependent claims, wherein features for the second and subsequent aspects of the disclosure are as for the first aspect mutatis mutandis.
Some examples of the disclosure provide for three-dimensional mapping of an object requiring less computational resources.
Some examples of the disclosure provide for quicker three-dimensional mapping of an object.
Some examples of the disclosure provide for a more robust three-dimensional mapping of an object.
Some examples of the disclosure provide for facilitated tracking of a three-dimensional object.
It should be emphasized that the term "comprises/comprising" when used in this specification is taken to specify the presence of stated features, integers, steps or components but does not preclude the presence or addition of one or more other features, integers, steps, components or groups thereof.
Brief Description of Drawings
These and other aspects, features and advantages of which examples of the invention are capable of will be apparent and elucidated from the following description of examples of the present invention, reference being made to the accompanying schematic drawings, in which:
Fig. 1 is a schematic illustration of a system for generating a three-dimensional mapping of an object, according to examples of the disclosure;
Fig. 2 is a schematic illustration of a transmitted light pattern in a system for generating a three-dimensional mapping of an object, according to examples of the disclosure;
Fig. 3 is a schematic illustration of a set of patches in the transmitted light pattern in a system for generating a three-dimensional mapping of an object, according to examples of the disclosure;
Fig. 4 is a magnified portion of a patch in the transmitted light pattern of Fig. 3, in a system for generating a three-dimensional mapping of an object, according to examples of the disclosure;
Fig. 5 is a schematic illustration of a transmitted light pattern, and a reflected light pattern, being reflected onto an object, for generating a three-dimensional mapping of the object, according to examples of the disclosure;
Fig. 6 is a schematic illustration of a reflected light pattern, having displaced symbols in relation to a transmitted light pattern, according to examples of the disclosure;
Fig. 7a is a flowchart of a method of generating a three-dimensional mapping of an object, according to examples of the disclosure; and
Fig. 7b is another flowchart of a method of generating a three-dimensional mapping of an object, according to examples of the disclosure.
Fig. 8a is a flowchart of a method for locating a symbol of a patch within the entire projected pattern.
Fig. 8b is a flowchart of another method for locating a symbol of a patch within the entire projected pattern.
Fig. 9a is a flowchart with example numbers of a method for locating a symbol of a patch within the entire projected pattern.
Fig. 9b is a flowchart with example numbers of another method for locating a symbol of a patch within the entire projected pattern.
Detailed Description of Example Embodiments
Specific examples of the invention will now be described with reference to the accompanying drawings. This invention may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these examples are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art. The terminology used in the detailed description of the examples illustrated in the accompanying drawings is not intended to be limiting of the invention. In the drawings, like numbers refer to like elements.
Fig. 1 is a schematic illustration of a system 200 for generating a three-dimensional mapping of an object 301. A related method 100 is provided, as described further with reference to the flow-chart of Figs. 7a-b. The system 200 comprises a light source 208 configured to direct a transmitted light pattern 201 onto an object 301. The transmitted light pattern 201 comprises a plurality of symbols 203, 203', 203", 203"'. Fig. 2 is a schematic illustration of such a transmitted light pattern 201 of symbols 203, 203', 203", 203"' (in short referred to as 'symbols 203' below). The symbols 203 may assume various shapes and have different orientations in the transmitted light pattern 201, besides the shapes illustrated in the example of Fig. 2. The symbols 203 encode at least two possible values. That is, by having at least two different symbols 203, e.g. as exemplified with symbols 203 and 203' in the magnified portion in Fig. 4, it is possible to associate each of the two symbols 203, 203', with a unique value, such as a binary value. In the example of Fig. 4, which will be described in more detail below, the two symbols 203, 203', are represented by diagonally arranged dots or squares, shifted 90 degrees to represent two different binary values 0 (00) and 1 (10). The system 200 comprises an imaging device 204 configured to receive light comprising a reflected light pattern 202 being reflected by the object 301 from the transmitted light pattern 201. Fig. 5 is a schematic illustration of the transmitted light pattern 201 being directed onto the object 301, and the resulting reflected light pattern 202 being reflected by the object 301, which is received by the imaging device 204. The system 200 comprises a processing unit 209 configured to generate 103 an output image 205 in dependence on the light received at the imaging device 204.
The processing unit 209 is configured to process 104 the output image 205 to generate a three-dimensional mapping of the object 301, by being configured to identify 105 a patch 206 in the reflected light pattern 202. A patch 206 is a collection of symbols 203, being spatially interrelated, in the reflected light pattern 202, as illustrated in Fig. 5. The transmitted light pattern 201 has corresponding patches 207 of symbols 203, as further illustrated in the enlarged view of Fig. 6. The plurality of symbols 203 uniquely identifies each patch 206, 207.
The processing unit 209 is configured to decode 106 the values of the symbols 203 of the patch 206 to determine a position (x1, y1) of at least one symbol (e.g. symbol 203' in Fig. 6) of the patch 206 in the transmitted light pattern 201. Hence, the position (x1, y1) of the symbol 203' in the transmitted light pattern 201 can be identified by decoding the values (e.g. a series of unique binary values as exemplified above) associated with the symbols 203 of the reflected light 202. The position (x1, y1) of the at least one symbol 203' in the transmitted light pattern 201, i.e. in the non-displaced reference pattern, may thus be encoded into the values of the symbols 203, either by the value of the at least one symbol 203' itself and/or by the values of a plurality of neighboring symbols 203 as described further below. Thus, when subsequently retrieving the reflected light pattern 202, it is possible to determine the respective positions (x1, y1) of the symbols 203 in the transmitted light pattern 201, by decoding the aforementioned values, irrespective of the displacement of the symbols 203 in the reflected light pattern 202. The processing unit 209 is further configured to determine 107 the position (x2, y2) of the at least one symbol 203' of the patch 206 in the reflected light pattern 202, i.e. by determining the location of the symbol 203 in the retrieved image data.
The symbols 203 in the patch 206 of the reflected light pattern 202 will be displaced depending on the geometry of the object 301, where the light is reflected, as further exemplified in Fig. 6. The processing unit 209 is configured to generate 108 a displacement value corresponding to the displacement of the position (x2, y2) of the at least one symbol 203' of the patch 206 in the reflected light pattern 202 relative to the position (x1, y1) of the at least one symbol 203' of the patch 207 in the transmitted light pattern 201. For example, a first symbol 203' in the patches 206, 207, of Fig. 6 may be displaced a distance 211 as illustrated in the figure. The displacement of the symbols 203 may include any change of position or geometry thereof. The amount of displacement may vary between the symbols 203 of the patch 206. The processing unit 209 is further configured to generate 109 the three-dimensional mapping of the object 301 in dependence on the displacement value. Hence, by having the symbols 203 encoding at least two possible values, the positions (x1, y1) of the symbols 203 in the non-displaced light pattern, i.e. in the transmitted light pattern 201, may be directly encoded into the symbols 203. The displacement of the symbols 203 may thus be determined by decoding the values to retrieve the respective reference positions (x1, y1) and comparing to the current positions (x2, y2) seen in the reflected light pattern 202. Accordingly, it is not necessary to compare actual image data of the transmitted and reflected light in order to find corresponding symbols to identify how much displacement has occurred, which is the case in prior art systems. The computational resources needed for the three-dimensional mapping of the object 301 can thus be reduced significantly. This provides for a faster generation of the three-dimensional mapping of the object 301, which allows e.g. for an improved tracking of a three-dimensional object in space in real time.
This also provides for a more readily implementable 3D mapping that can be realized in less complex systems. A more robust 3D mapping may also be provided, since errors that would otherwise occur through the image recognition processes of multiple images can be avoided. The light source 208 may emit light in near-infrared (NIR) wavelengths, and the imaging device 204 may be configured to detect such wavelengths. The imaging device 204 may be configured to only detect NIR wavelengths. The light source 208 may be LCD based. The symbols 203 may be generated by a fixed interference filter. The LCD pixels in an LCD light source 208 may be selectively lit to generate the symbols 203 of various shapes.
Fig. 7a illustrates a flow chart of a method 100 of generating a three-dimensional mapping of an object 301. The order in which the steps of the method 100 are described and illustrated should not be construed as limiting and it is conceivable that the steps can be performed in varying order. The method 100 comprises directing 101 a transmitted light pattern 201 onto the object 301. As mentioned, the transmitted light pattern 201 is designed to comprise a plurality of symbols 203, 203', 203", 203"', encoding at least two possible values. The method 100 comprises receiving 102, at an imaging device 204, light comprising a reflected light pattern 202 being reflected by the object 301 from the transmitted light pattern 201. The method 100 comprises generating 103 an output image 205 in dependence on the light received at the imaging device 204, and processing 104 the output image to generate a three-dimensional mapping of the object 301, by identifying 105 a patch 206 in the reflected light pattern 202, wherein a patch is a plurality of said symbols 203, being spatially interrelated. The method 100 comprises decoding 106 the values of the symbols 203 of the patch 206 to determine a position of at least one symbol 203 of the patch 206 in the transmitted light pattern 201, and determining 107 a position of the at least one symbol 203 of the patch 206 in the reflected light pattern 202. The method 100 comprises generating 108 a displacement value corresponding to the displacement of said position of the at least one symbol 203 of the patch in the reflected light pattern 202 relative to said position of the at least one symbol 203 of the patch in the transmitted light pattern 201. The method 100 comprises generating 109 the three-dimensional mapping of the object 301 in dependence on the displacement values. The method 100 thus provides for the advantageous benefits as described above in relation to the system 200 and Figs. 1 - 6.
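The decode-and-compare logic of steps 106-109 can be sketched in code. The sketch below is illustrative only, not the claimed implementation; all function names are invented, and a symbol's horizontal displacement is used as a stand-in for the depth value derived from it:

```python
# Illustrative sketch only (invented names): steps 106-109 reduce to
# comparing a decoded reference position (x1, y1) with the observed
# position (x2, y2) of each symbol.

def displacement(transmitted_pos, reflected_pos):
    """Step 108: displacement of one symbol between the two patterns."""
    (x1, y1), (x2, y2) = transmitted_pos, reflected_pos
    return (x2 - x1, y2 - y1)

def three_dimensional_mapping(decoded_symbols):
    """Step 109 (stand-in): map each observed symbol position to its
    horizontal displacement, from which depth is later derived.

    decoded_symbols: iterable of ((x1, y1), (x2, y2)) pairs.
    """
    return {observed: displacement(reference, observed)[0]
            for reference, observed in decoded_symbols}

symbols = [((10, 4), (13, 4)), ((11, 4), (15, 4))]
print(three_dimensional_mapping(symbols))  # {(13, 4): 3, (15, 4): 4}
```

Note that no patch matching against reference image data occurs here: the reference position is recovered by decoding the symbol values themselves, which is the computational saving the disclosure emphasizes.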
Fig. 7b illustrates a further flow chart of a method 100 of generating a three-dimensional mapping of an object 301. The order in which the steps of the method 100 are described and illustrated should not be construed as limiting and it is conceivable that the steps can be performed in varying order. The method 100 may comprise decoding 107' address data for the at least one symbol 203 of the patch 206 in the reflected light pattern 202 based on the aforementioned values. I.e. each symbol 203 may be associated with a unique value, such as a binary value, amongst a plurality of different values that may be assigned to the symbol 203 in dependence on a shape and/or orientation of the symbol 203. The unique values may thus be utilized as address data, uniquely identifying the position each symbol 203 in the transmitted light pattern 201 as described above. The address data of a symbol 203 may be defined by the symbol 203 alone, in dependence of the shape and/or orientation thereof, or by a plurality of adjacent or surrounding symbols 203, as explained further below. The method 100 may thus comprise determining the position of the least one symbol 203 of the patch 206 in the transmitted light pattern 201 based on the address data.
The method 100 may comprise determining 107" the address data for a first symbol 203' based on the values of a group of neighbouring symbols adjacent the first symbol 203'. Fig. 4 shows an example where a first symbol 203' positioned in the center of the patch 207, is surrounded by a plurality of neighbouring symbols, e.g. 203, 203", 203" ' . Each
neighbouring symbol is associated with a value in dependence of its shape. For example, a symbol 203" to the left of the first symbol 210 is formed of two adjacent squares or dots arranged as a horizontal line, and a symbol 203" ' at the upper left is a vertical line, and a symbol 203 at the upper right corner in the figure assumes an intermediate angled orientation, being angled 90 degrees with respect to the first symbol 203' . Each of the shapes of the symbols are in this example associated with a corresponding binary value, i.e.; 0 (00), 1 (10), 2 (01), and 3 (11), for symbols 203, 203', 203", and 203"' respectively. Thus, a group of neighbouring symbols, e.g. surrounding the first symbol 203', may define a unique address of the first symbol 203', that can be utilized for defining a reference position (xi, yi) (i.e. non- displaced position) of the first symbol 203' in the transmitted light pattern 201. The respective addresses may subsequently be retrieved for the symbols 203 in the reflected light pattern 202, by decoding e.g. the binary values of the different shapes, and the reference positions (xi, yi) associated with the different addresses may be retrieved, e.g. from stored look-up tables, and compared to the new positions (x2, y2) for determining the displacement resulting from the reflection by the object 301. The number of neighbouring symbols utilized for defining an address of a particular symbol 203 may be varied depending on the application. The addresses and positions may also be determined for a varying number of symbols 203. E.g. it may not be necessary to determine the address and positions for all symbols 203, in order to obtain sufficient displacement information to generate the three- dimensional mapping of the object 301.
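As a sketch of this addressing scheme, the snippet below uses the 2-bit values of Fig. 4 (0 (00), 1 (10), 2 (01), 3 (11), read least-significant bit first) and concatenates eight neighbour values into a base-4 address. The orientation angles and the base-4 concatenation are illustrative assumptions, not the disclosed coding:

```python
# Assumed mapping from symbol orientation (degrees) to the 2-bit value;
# the specific angles are invented for the example.
ORIENTATION_TO_VALUE = {0: 0, 45: 1, 90: 2, 135: 3}

def symbol_bits(value):
    """Split a 2-bit symbol value into (bit 0, bit 1), LSB first."""
    return value & 1, (value >> 1) & 1

def neighbour_address(neighbour_values):
    """Concatenate neighbour symbol values into one base-4 address
    (an illustrative stand-in for the address look-up)."""
    address = 0
    for value in neighbour_values:
        address = address * 4 + value
    return address

# Eight neighbours surrounding a first symbol define its address:
print(neighbour_address([0, 3, 1, 2, 0, 0, 3, 1]))  # 13837
print(symbol_bits(2))  # (0, 1): bit 0 = 0, bit 1 = 1
```

The resulting address would then index a stored look-up table holding the reference position (x1, y1), as described above.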
The method 100 may comprise determining 107" ' the position of the at least one symbol 203 in the transmitted light pattern 201 in two dimensions based on the address data. This provides for conveniently defining the position of a particular symbol 203 in two- dimensional image data. As elucidated above, the method 100 may comprise decoding 107"" digital values such as 1- or 2-bit digital values from each symbol 203. The example of Fig. 4 illustrates encoding of 2-bit digital values in the symbols 203, by utilizing the various shapes thereof. It is also conceivable that only two different shapes may be used to encode 1- bit values, as well as more complex variations in the shapes of the symbols for higher-bit values. The method 100 may comprise decoding a 2-bit digital value from each symbol 203, and determine a position of a symbol 203 in a first dimension based on the first bit, and determine a position of a symbol 203 in a second dimension based on the second bit.
The method 100 may comprise compensating 110 the displacement value over the reflected light pattern 202 based on error correcting information in the transmitted light pattern 201. It is also conceivable that other data such as the encoded addresses of the symbols may be compensated based on the error correcting information. It is thus possible to encode secondary data in the transmitted light pattern 201, in addition to primary address- and position data for the symbols 203 as described above, such as data that may correct for any variances or deviations in the system 200, e.g. in the optical components thereof. This provides for embedded error correction in the transmitted light pattern 201, which may also be updated over time. The y-position may be used for error correction, since the displacement occurs in the same direction as the distance between the imaging device 204 and the light source 208.
The method 100 may comprise determining 108' the centre position of the at least one symbol 203 of the patch 206 in the reflected light pattern 202, for determining the displacement value based on the displacement of said centre position. The displacement of the symbols 203 may result in shifting of their centre positions, as exemplified in Fig. 6 where symbol 203' is shifted a distance 211. Hence, determining the centre positions provides for quantifying the displacement. Displacements such as rotational movements of the symbols 203 may also be determined. The centre positions may be determined by using sub-pixel algorithms, by optimizing a known function (function fitting) or computing the centre-of-gravity of the symbols. An example of a method for determining the centre of the symbol (or centroid) is the use of image moments, i.e. determining a weighted average (moment) of the symbol pixels' intensities and using it to determine the centre of the symbol. Orientation of the symbol can also be derived from the second order central moments, for example by using them to construct a covariance matrix and then determining the orientation of the symbol from the eigenvectors of this matrix. Preferably, the orientation of the imaged symbols is stored as a 2-bit value. The method 100 may comprise calculating 109' the coordinates of the three-dimensional mapping of the object 301 based on displaced positions of the symbols 203 in the patch 206 of the reflected light pattern 202 relative to calibrated positions of the symbols 203 in the transmitted light pattern 201.
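The image-moment computation described above can be sketched as follows. NumPy is an implementation choice, not mandated by the disclosure, and the symbol image is a toy two-pixel bar:

```python
# Minimal sketch of centre/orientation extraction via image moments.
import numpy as np

def centroid_and_orientation(img):
    """Return (cx, cy, theta_degrees) for one symbol's intensities."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()
    cx = (xs * img).sum() / m00              # first-order moments -> centre
    cy = (ys * img).sum() / m00
    mu20 = (((xs - cx) ** 2) * img).sum() / m00   # second-order central moments
    mu02 = (((ys - cy) ** 2) * img).sum() / m00
    mu11 = ((xs - cx) * (ys - cy) * img).sum() / m00
    cov = np.array([[mu20, mu11], [mu11, mu02]])  # covariance matrix
    vals, vecs = np.linalg.eigh(cov)
    major = vecs[:, np.argmax(vals)]              # eigenvector of largest eigenvalue
    theta = np.degrees(np.arctan2(major[1], major[0])) % 180.0
    return cx, cy, theta

# A short horizontal bar of two bright pixels:
bar = np.zeros((5, 5))
bar[2, 1] = bar[2, 3] = 1.0
cx, cy, theta = centroid_and_orientation(bar)
print(round(cx, 3), round(cy, 3), round(theta, 3))  # 2.0 2.0 0.0
```

Quantizing `theta` into four bins would yield the 2-bit orientation value mentioned above.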
The following is a more detailed description of an embodiment of steps 105, 106, 107, and 108. All the mathematical steps are described in a manner easily understood by the reader but may be implemented in other mathematically equivalent ways. For the following embodiment, a flow chart of the steps is provided in figure 8a and an example instance of patch 206 and the resulting calculations is shown in figure 9a:
Step 105 comprises identifying a patch 206 in the reflected light pattern 202 to be decoded, wherein the patch 206 is a 3 by 3 matrix of said symbols 203, as shown in figure 9a, for example.
Step 106 comprises decoding the values of the symbols 203 of the patch 206. This is achieved by computing a centre position and angle of each symbol, preferably in the manner described above.
In this embodiment, step 107 comprises a sequence of sub-steps.
Step 310 comprises converting the angle of the respective symbol determined in step 106 to a 2-bit binary value.
Step 320a comprises extracting bit 0 of the angle of the respective symbol of the patch 206 to form a 'bit 0 patch'.
Step 330a comprises converting vertical columns of the 'bit 0 patch' bits from top to bottom to binary numbers.
Step 340a comprises determining the positions of the binary numbers within a repeating binary pattern. In the example embodiment, the repeating pattern is "00010111".
e.g. 000->0
001->1
010->2
011->4
100->7
101->3
110->6
111->5
In another embodiment, the pattern may be "11101000".
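The table above can be reconstructed by sliding a 3-bit window circularly over the repeating pattern "00010111"; each 3-bit value occurs exactly once, so its start index is the position reported by step 340a. A minimal sketch:

```python
# Rebuilding step 340a's table from the repeating pattern "00010111".

PATTERN = "00010111"

def window_positions(pattern, width=3):
    """Map every circular `width`-bit window to its start index."""
    doubled = pattern * 2                      # handles wrap-around windows
    return {doubled[i:i + width]: i for i in range(len(pattern))}

POS = window_positions(PATTERN)
print(POS)  # {'000': 0, '001': 1, '010': 2, '101': 3, '011': 4, '111': 5, '110': 6, '100': 7}
```

The same construction applies to the "11101000" variant mentioned above.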
Step 350a comprises computing the circular rotation between column 0 and column 1, and between column 1 and column 2; each rotation can have 8 different values. The numbers shown in figure 9a are modulo 8 (i.e. 0-7).
Step 360a comprises using the values from step 350a to compute a horizontal position of the top left symbol of patch 206. In this embodiment, this is achieved by determining the position of the values from step 350a in a repeating pattern. The pattern used to determine the position values in figure 9a is "0, 0, 1, 0, 2, 0, 3, 0, 4, 0, 5, 0, 6, 0, 7, 1, 1, 2, 1, 3, 1, 4, 1, 5, 1, 6, 1, 7, 2, 2, 3, 2, 4, 2, 5, 2, 6, 2, 7, 3, 3, 4, 3, 5, 3, 6, 3, 7, 4, 4, 5, 4, 6, 4, 7, 5, 5, 6, 5, 7, 6, 6, 7, 7". In the example of figure 9a, [6, 6] is located at position 60 in this pattern. In another embodiment, a lookup table is used to provide the corresponding values. The result is that the values from step 350a provide 64 different horizontal positions.
Sequentially or in parallel to steps 320a-350a, the following steps 320b-350b are performed:
Step 320b comprises extracting bit 1 of the angle of the respective symbol of the patch 206 to form a 'bit 1 patch'.
Step 330b comprises converting horizontal rows of the 'bit 1 patch' bits from left to right to binary numbers.
Step 340b comprises determining the positions of the binary numbers within a repeating binary pattern. As in step 340a, the repeating pattern may be "00010111".
Step 350b comprises computing the circular rotation between row 0 and row 1, and between row 1 and row 2; each rotation can have 8 different values. The numbers shown in figure 9a are modulo 8 (i.e. 0-7).
Step 360b comprises using the values from step 350b to compute a vertical position of the top left symbol of patch 206. In this embodiment, this is achieved by determining the position of the values from step 350b in a repeating pattern. As in step 360a, the pattern used to determine the position values in figure 9a may be "0, 0, 1, 0, 2, 0, 3, 0, 4, 0, 5, 0, 6, 0, 7, 1, 1, 2, 1, 3, 1, 4, 1, 5, 1, 6, 1, 7, 2, 2, 3, 2, 4, 2, 5, 2, 6, 2, 7, 3, 3, 4, 3, 5, 3, 6, 3, 7, 4, 4, 5, 4, 6, 4, 7, 5, 5, 6, 5, 7, 6, 6, 7, 7". In the example of figure 9a, [0, 5] is located at position 9 in this pattern. In another embodiment, a lookup table is used to provide the corresponding values. The result is that the values from step 350b provide 64 different vertical positions.
Step 370 comprises combining horizontal and vertical positions to determine absolute position.
Step 108 comprises determining the displacement of each symbol's absolute position against the projected position to determine the depth of each symbol.
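The patent text does not state a formula for converting the per-symbol displacement of step 108 into depth. For a calibrated projector-camera pair, the conventional triangulation relation is commonly used; the following sketch assumes that standard relation, with hypothetical calibration values (focal length in pixels, baseline in metres):

```python
# Sketch only: standard pinhole triangulation, z = f * b / d, for converting a
# symbol's displacement (disparity) into depth. The calibration values below
# are illustrative, not taken from the patent.
def depth_from_displacement(disparity_px, focal_px, baseline_m):
    """Depth in metres from pixel disparity for a calibrated projector-camera pair."""
    if disparity_px == 0:
        return float("inf")  # zero disparity corresponds to a point at infinity
    return focal_px * baseline_m / disparity_px

# With f = 1000 px and b = 0.05 m, a 10 px displacement gives a depth of 5 m.
```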
An alternative to the above embodiment is now provided, in conjunction with the flow chart of the steps provided in figure 8b and an example instance of patch 206 with the resulting calculations in figure 9b. Step 105 comprises identifying a patch 206 in the reflected light pattern 202 to be decoded, wherein the patch 206 is a 3 by 3 matrix of said symbols 203, as shown in figure 8b, for example.
Step 106 comprises decoding the values of the symbols 203 of the patch 206. This is achieved by computing a centre position and angle of each symbol, preferably in the manner described above.
In this embodiment, step 107 comprises a sequence of sub-steps.
Step 420a comprises extracting bit 0 of the angle of the respective symbol of the patch 206 to form a 'bit 0 patch'.
Step 430a comprises converting vertical columns of the 'bit 0 patch' bits from top to bottom to binary numbers.
Step 440a comprises determining the positions of the binary numbers within a repeating binary pattern. In the example embodiment, the repeating pattern is "00010111".
e.g.:
000 -> 0
001 -> 1
010 -> 2
011 -> 4
100 -> 7
101 -> 3
110 -> 6
111 -> 5
In another embodiment, the pattern may be "11101000".
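The table of step 440a need not be stored: reading every circular 3-bit window of the repeating pattern "00010111" reproduces exactly the mapping listed above. A sketch (the helper function is illustrative, not from the patent):

```python
def window_positions(pattern, width=3):
    """Map each width-bit window of the circular pattern to its start index."""
    doubled = pattern * 2  # doubling lets windows wrap around the period boundary
    return {doubled[i:i + width]: i for i in range(len(pattern))}

table = window_positions("00010111")
# table == {"000": 0, "001": 1, "010": 2, "101": 3,
#           "011": 4, "111": 5, "110": 6, "100": 7}
```

Because every 3-bit word occurs exactly once per period (a de Bruijn-style property), the mapping is unambiguous; the alternative pattern "11101000" mentioned above is simply the bitwise complement, which yields the complementary table.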
Step 450a comprises computing the circular rotation between column 0 and column 1, and between column 1 and column 2; each rotation can take 8 different values. The numbers shown in figure 9b are modulo 8 (i.e. 0-7).
Step 460a comprises using the values from step 450a to compute a local horizontal position of the top left symbol of patch 206. In this embodiment, this is achieved by determining the position of the values from step 450a in a repeating pattern. As with figure 9a, the pattern used to determine the position values in figure 9b may be "0, 0, 1, 0, 2, 0, 3, 0, 4, 0, 5, 0, 6, 0, 7, 1, 1, 2, 1, 3, 1, 4, 1, 5, 1, 6, 1, 7, 2, 2, 3, 2, 4, 2, 5, 2, 6, 2, 7, 3, 3, 4, 3, 5, 3, 6, 3, 7, 4, 4, 5, 4, 6, 4, 7, 5, 5, 6, 5, 7, 6, 6, 7, 7". In the example of figure 9b, [6, 6] is located at position 60 in this pattern. In another embodiment, a lookup table is used to provide the corresponding values. The result is that the values from step 450a provide 64 different local horizontal positions.
Sequentially with, or in parallel to, steps 420a-460a, the following steps 420b-460b are performed: Step 420b comprises extracting bit 1 of the angle of the respective symbol of the patch 206 to form a 'bit 1 patch'.
Step 430b comprises converting horizontal rows of the 'bit 1 patch' bits from left to right to binary numbers.
Step 440b comprises determining the positions of the binary numbers within a repeating binary pattern of 7 bits (unlike the 8-bit pattern of step 440a). In one embodiment, the repeating pattern may be "0001011".
e.g.:
000 -> 0
001 -> 1
010 -> 2
011 -> 4
100 -> 6
101 -> 3
110 -> 5
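As in step 440a, this 7-bit mapping can be derived by reading the circular 3-bit windows of "0001011"; because the period is only 7, just seven of the eight 3-bit words occur, and 111 never appears. A self-contained sketch (illustrative helper, not from the patent):

```python
def window_positions(pattern, width=3):
    """Map each width-bit circular window of the pattern to its start index."""
    doubled = pattern * 2  # doubling lets windows wrap around the period boundary
    return {doubled[i:i + width]: i for i in range(len(pattern))}

table7 = window_positions("0001011")
# table7 == {"000": 0, "001": 1, "010": 2, "101": 3, "011": 4, "110": 5, "100": 6}
# "111" is absent: a decoded 111 window would indicate a read error.
```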
Step 450b comprises computing the circular rotation between row 0 and row 1, and between row 1 and row 2; each rotation can take 7 different values. The numbers shown in figure 9b are modulo 7 (i.e. 0-6).
Step 460b comprises using the values from step 450b to compute a local vertical position of the top left symbol of patch 206. In this embodiment, this is achieved by determining the position of the values from step 450b in a repeating pattern. In one embodiment, the pattern used to determine the position values may be "0, 0, 1, 0, 2, 0, 3, 0, 4, 0, 5, 0, 6, 1, 1, 2, 1, 3, 1, 4, 1, 5, 1, 6, 2, 2, 3, 2, 4, 2, 5, 2, 6, 3, 3, 4, 3, 5, 3, 6, 4, 4, 5, 4, 6, 5, 5, 6, 6". In the example of figure 9b, [0, 4] is located at position 7 in this pattern. In another embodiment, a lookup table is used to provide the corresponding values. The result is that the values from step 450b provide 49 different local vertical positions.
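The 49-element pair lookup of step 460b works the same way as the 64-element horizontal one: a position is identified by the ordered pair of consecutive elements starting there, read circularly. A sketch (illustrative names, not from the patent):

```python
# Repeating pattern of step 460b (period 49, values 0-6).
PATTERN7 = [0, 0, 1, 0, 2, 0, 3, 0, 4, 0, 5, 0, 6, 1, 1, 2, 1, 3, 1, 4, 1, 5,
            1, 6, 2, 2, 3, 2, 4, 2, 5, 2, 6, 3, 3, 4, 3, 5, 3, 6, 4, 4, 5, 4,
            6, 5, 5, 6, 6]

def pair_position7(a, b, pattern=PATTERN7):
    """Index of the ordered pair (a, b) among circular consecutive pairs."""
    n = len(pattern)
    for i in range(n):
        if pattern[i] == a and pattern[(i + 1) % n] == b:
            return i
    raise ValueError("pair not found in pattern")

# pair_position7(0, 4) -> 7, matching the figure 9b example.
```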
Step 470a comprises calculating a global horizontal position:
- For each patch, computing the cumulative horizontal rotation for row 0 in dependence on the local vertical position determined in step 460b. Cumulative rotation can be determined from the position in the pattern of step 460b; in the example of figure 9b, local vertical position 7 has cumulative rotation value 6.
- Computing the expected local horizontal position of row 0 in the repeating pattern, e.g. (local horizontal position + cumulative rotation) modulo 7 = (60 + 6) modulo 7 = 3. As seen in the example of figure 9b, the actual horizontal position of row 0 of the patch was 3.
- Since the cumulative rotations differ between rows and columns (the underlying patterns being 7-bit and 8-bit respectively), the difference between the expected and actual positions of the row is used to determine in which repetition of the circular rotation pattern the patch is located. A global horizontal position is therefore determined as: local horizontal position + ((actual position - expected position) modulo 7) * 64. In the example of figure 9b, the global horizontal position is calculated as 60 + ((3 - 3) modulo 7) * 64 = 60. In this example, therefore, the row is found in the first repetition (0 repeats) of the circular rotations.
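The arithmetic of step 470a for the figure 9b example can be checked directly. This is a sketch using only the example's numbers; the sign convention (actual minus expected) is inferred from the worked example:

```python
# Worked numbers from the figure 9b example of step 470a.
local_horizontal = 60       # local horizontal position from step 460a
cumulative_rotation = 6     # cumulative rotation for local vertical position 7
expected_row0 = (local_horizontal + cumulative_rotation) % 7   # 66 % 7 = 3
actual_row0 = 3             # decoded from row 0 of the patch in figure 9b
repeats = (actual_row0 - expected_row0) % 7                    # 0: first repetition
global_horizontal = local_horizontal + repeats * 64            # 60
```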
Step 470b comprises, in a similar process to step 470a, calculating a global vertical position:
- Computing the cumulative rotation for the local horizontal position. Cumulative rotation can be determined from the position in the pattern of step 460b. For local horizontal position 60, the cumulative rotation (of columns) of column 0 is also 6. The expected column value (for 0 repeats of the circular rotation) is then (7 + 6) modulo 8 = 5.
- Determining the difference between the actual and expected rotations. In the example of figure 9b, the value of column 0 is 6; the difference is therefore +1 (i.e. (actual rotation - expected rotation) modulo 8).
- Using the difference to determine the global position of the patch. In the example of figure 9b, the patch is found in repetition 1 of the vertical rotations, giving a global vertical position = 7 + 1 x 49 = 56.
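The corresponding arithmetic of step 470b for the figure 9b example, again as a sketch using only the example's numbers:

```python
# Worked numbers from the figure 9b example of step 470b.
local_vertical = 7          # local vertical position from step 460b
cumulative_rotation = 6     # cumulative column rotation at local horizontal position 60
expected_column0 = (local_vertical + cumulative_rotation) % 8  # 13 % 8 = 5
actual_column0 = 6          # decoded value of column 0 in figure 9b
repeats = (actual_column0 - expected_column0) % 8              # 1: second repetition
global_vertical = local_vertical + repeats * 49                # 7 + 49 = 56
```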
Step 480 comprises combining global horizontal and vertical positions of the top left symbol of patch 206 to determine absolute global position of the symbol.
Step 108 comprises determining the displacement of each symbol's absolute global position against the projected position to determine the depth of each symbol.
The symbols 203 may comprise elongated shapes, and the associated values may be defined by a direction in which the elongated shape extends, as described in relation to Fig. 4.
The elongated shape of a symbol 203 may comprise a continuous shape, such as a line or bar (see e.g. symbol 203" in Fig. 4), or a plurality of adjacent shapes, such as a plurality of dots or lines collectively forming a symbol 203. The various shapes of the symbols may be optimized to facilitate distinguishing the shapes from one another, and thereby speed up the identification process.
As mentioned, the symbols 203 may comprise encoded address data, and the address of a first symbol 203' may be defined by the values of a group of neighbouring symbols adjacent the first symbol 203'. The group of neighbouring symbols may comprise sub-groups of adjacent duplicate symbols. Such redundancy provides for facilitating identification in case some symbols 203 are incorrectly analysed. The values of the symbols 203 may be cyclically repeated over predefined groups of symbols 203 in the transmitted light pattern 201. For example, any 3x3 symbols 203 may be used as neighbouring groups of symbols.
A separation distance between the symbols 203 in the transmitted light pattern 201 may vary to define groups of symbols with reduced separation as the groups of neighbouring symbols. Hence the separation between the symbols 203 may be varied to facilitate the identification, e.g. by having 'islands' of symbols in the transmitted light pattern 201, where each island encodes the address of an associated symbol.
A computer program product is provided comprising instructions which, when the program is executed by a computer, cause the computer to carry out the steps of the method 100.
The method 100 or system 200 as described above in relation to Figs. 1 - 7 may be used for real-time tracking of a touch input device, or of a user providing touch input, in a touch-based interaction system. This provides for improving the real-time tracking and optimizing the touch performance in such touch-based systems.
The method 100 or system 200 as described above in relation to Figs. 1 - 7 may be used for determining a three-dimensional mapping of a warped glass surface, in e.g. touch-based interaction systems. Hence, the touch performance in such systems may be further enhanced by the improved method 100 and system 200 allowing for facilitated three-dimensional modelling and characterization of glass surfaces in touch systems.
The present invention has been described above with reference to specific examples. However, other examples than the above described are equally possible within the scope of the invention. The different features and steps of the invention may be combined in other combinations than those described. The scope of the invention is only limited by the appended patent claims.
More generally, those skilled in the art will readily appreciate that all parameters, dimensions, materials, and configurations described herein are meant to be exemplary and that the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the teachings of the present invention is/are used.

Claims

1. A method (100) of generating a three-dimensional mapping of an object (301), comprising:
directing (101) a transmitted light pattern (201) onto the object, the transmitted light pattern comprises a plurality of symbols (203, 203', 203", 203"'), the symbols encoding at least two possible values,
receiving (102), at an imaging device (204), light comprising a reflected light pattern (202) being reflected by the object from the transmitted light pattern,
generating (103) an output image (205) in dependence on the light received at the imaging device,
processing (104) the output image to generate a three-dimensional mapping of the object, by
identifying (105) a patch (206) in the reflected light pattern, wherein a patch is a plurality of said symbols, being spatially interrelated,
decoding (106) the values of the symbols of the patch to determine a position of at least one symbol of the patch in the transmitted light pattern,
determining (107) a position of the at least one symbol of the patch in the reflected light pattern,
generating (108) a displacement value corresponding to the displacement of said position of the at least one symbol of the patch in the reflected light pattern relative to said position of the at least one symbol of the patch in the transmitted light pattern, and
generating (109) the three-dimensional mapping of the object in dependence on the displacement value.
2. Method according to claim 1, comprising
decoding (107') address data for the at least one symbol of the patch in the reflected light pattern based on said values, and
determining the position of the at least one symbol of the patch in the transmitted light pattern based on the address data.
3. Method according to claim 2, comprising determining (107") the address data for a first symbol (210) based on the values of a group of neighbouring symbols adjacent the first symbol.
4. Method according to claim 2 or 3, comprising determining (107"') the position of the at least one symbol in the transmitted light pattern in two dimensions based on the address data.
5. Method according to any of claims 1 - 4, comprising decoding (107"") digital values such as 1- or 2-bit digital values from each symbol.
6. Method according to claim 5, comprising decoding a 2-bit digital value from each symbol, and
determining a position of a symbol in a first dimension based on the first bit, and determining a position of a symbol in a second dimension based on the second bit.
7. Method according to any of claims 1 - 6, comprising compensating (110) the displacement value over the reflected light pattern based on error correcting information in the transmitted light pattern.
8. Method according to any of claims 1 - 7, comprising determining (108') the center position of the at least one symbol of the patch in the reflected light pattern for determining the displacement value based on displacement of said center position.
9. Method according to any of claims 1 - 8, comprising calculating (109') the coordinates of the three-dimensional mapping of the object based on displaced positions of the symbols in the patch of the reflected light pattern relative to calibrated positions of the symbols in the transmitted light pattern.
10. Method according to any of claims 1 - 9, wherein determining (107) a position of the at least one symbol of the patch in the reflected light pattern comprises the steps of:
determining a plurality of combined values, each combined value formed from a set of symbol values within the patch,
determining a position value of each combined value within a repeating value pattern,
determining a set of relative values, each relative value determined as a difference between pairs of position values,
determining a position of the patch along a first axis in dependence on the relative values.
11. System (200) for generating a three-dimensional mapping of an object (301), comprising:
a light source (208) configured to direct a transmitted light pattern (201) onto the object, the transmitted light pattern comprises a plurality of symbols (203, 203', 203", 203"'), the symbols encoding at least two possible values,
an imaging device (204) configured to receive light comprising a reflected light pattern (202) being reflected by the object from the transmitted light pattern,
a processing unit (209) configured to
generate (103) an output image (205) in dependence on the light received at the imaging device,
process (104) the output image to generate a three-dimensional mapping of the object, by being configured to
identify (105) a patch (206) in the reflected light pattern, wherein a patch is a plurality of said symbols, being spatially interrelated,
decode (106) the values of the symbols of the patch to determine a position of at least one symbol of the patch in the transmitted light pattern,
determine (107) a position of the at least one symbol of the patch in the reflected light pattern,
generate (108) a displacement value corresponding to the displacement of said position of the at least one symbol of the patch in the reflected light pattern relative to said position of the at least one symbol of the patch in the transmitted light pattern, and
generate (109) the three-dimensional mapping of the object in dependence on the displacement value.
12. System according to claim 11, wherein the symbols comprise elongated shapes, and wherein said value is defined by a direction in which the elongated shape extends.
13. System according to claim 12, wherein the elongated shape comprises a continuous shape or a plurality of adjacent shapes.
14. System according to any of claims 11 - 13, wherein the symbols comprise encoded address data, wherein the address of a first symbol (203') is defined by the values of a group of neighbouring symbols adjacent the first symbol, wherein the group of neighbouring symbols comprise sub-groups of adjacent duplicate symbols.
15. System according to any of claims 11 - 14, wherein the values of the symbols are cyclically repeated over predefined groups of symbols in the transmitted light pattern.
16. A computer program product comprising instructions which, when the program is executed by a computer, cause the computer to carry out the steps of the method according to any of claims 1 - 10.
17. Use of the method according to any of claims 1 - 10 or the system according to any of claims 11 - 15, for real-time tracking of a touch input device, or of a user providing touch input, in a touch-based interaction system.
18. Use of the method according to any of claims 1 - 10 or the system according to any of claims 11 - 15, for determining a three-dimensional mapping of a warped glass surface.
PCT/SE2018/051148 2017-11-10 2018-11-09 A method of generating a three-dimensional mapping of an object WO2019093959A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
SE1730315-7 2017-11-10
SE1730315 2017-11-10

Publications (1)

Publication Number Publication Date
WO2019093959A1 true WO2019093959A1 (en) 2019-05-16

Family

ID=66437999

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SE2018/051148 WO2019093959A1 (en) 2017-11-10 2018-11-09 A method of generating a three-dimensional mapping of an object

Country Status (1)

Country Link
WO (1) WO2019093959A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112985258A (en) * 2021-01-18 2021-06-18 深圳市菲森科技有限公司 Calibration method and measurement method of three-dimensional measurement system
US20240125594A1 (en) * 2019-10-16 2024-04-18 Virelux Inspection Systems Sàrl Method and system for determining a three-dimensional definition of an object by reflectometry
US12031812B2 (en) * 2019-10-16 2024-07-09 Virelux Inspection Systems Sàrl Method and system for determining a three-dimensional definition of an object by reflectometry

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130314696A1 (en) * 2012-05-24 2013-11-28 Qualcomm Incorporated Transmission of Affine-Invariant Spatial Mask for Active Depth Sensing
US20160178355A1 (en) * 2014-12-23 2016-06-23 RGBDsense Information Technology Ltd. Depth sensing method, device and system based on symbols array plane structured light


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
IMPEDOVO, SEBASTIANO ET AL.: "Pattern codification strategies in structured light systems", PATTERN RECOGNITION, vol. 37, no. 4, 1 April 2004 (2004-04-01), GB, pages 827-849, XP004491495, ISSN: 0031-3203, DOI: 10.1016/j.patcog.2003.10.002 *
MOTLEY, DARRYL: "How to Scan Dark, Shiny, or Clear Surfaces with a 3D Scanner [with Video Demo]", 27 July 2017 (2017-07-27), Retrieved from the Internet <URL:https://gomeasure3d.com/blog/scan-dark-shiny-clear-surfaces-3d-scanner-video-demo> *
SHAHRAM IZADI ET AL.: "KinectFusion: Real-time 3D Reconstruction and Interaction Using a Moving Depth Camera", PROCEEDINGS OF THE 24TH ANNUAL ACM SYMPOSIUM ON USER INTERFACE SOFTWARE AND TECHNOLOGY, 16 October 2011 (2011-10-16), Santa Barbara, CA , USA, pages 559 - 568, XP002717116, ISBN: 978-1-4503-0716-1, Retrieved from the Internet <URL:http://research.microsoft.com/pubs/155416/kinectfusion-uist-comp.pdf> *



Legal Events

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 18875588; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 18875588; Country of ref document: EP; Kind code of ref document: A1)