CN106524909B - Three-dimensional image acquisition method and device - Google Patents


Info

Publication number
CN106524909B
CN106524909B (grant), CN201610917877.6A (application), CN106524909A (publication)
Authority
CN
China
Prior art keywords
image
light sources
dimensional
dimensional image
photometric
Prior art date
Application number
CN201610917877.6A
Other languages
Chinese (zh)
Other versions
CN106524909A (en)
Inventor
范浩强
Original Assignee
北京旷视科技有限公司
北京迈格威科技有限公司
Priority date
Filing date
Publication date
Application filed by 北京旷视科技有限公司 and 北京迈格威科技有限公司
Priority to CN201610917877.6A
Publication of CN106524909A
Application granted
Publication of CN106524909B

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 Measuring arrangements characterised by the use of optical means
    • G01B 11/24 Measuring arrangements characterised by the use of optical means for measuring contours or curvatures
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C 11/04 Interpretation of pictures

Abstract

The invention provides a three-dimensional image acquisition method and device. The method comprises the following steps: receiving images of the same object respectively acquired by a plurality of image sensors under a plurality of differently distributed groups of light sources; performing image matching on the images acquired by different image sensors under the same group of light sources; performing photometric normal calculation, based on the result of the image matching, on the images acquired by the same image sensor under different groups of light sources; and obtaining a three-dimensional image of the object based on the results of the image matching and the photometric normal calculation. According to the three-dimensional image acquisition method and device provided by the embodiments of the invention, by combining photometric stereo vision with multi-view vision on images acquired by two or more image sensors under two or more light source conditions, three-dimensional image acquisition with high speed, high precision, and low cost can be realized.

Description

Three-dimensional image acquisition method and device

Technical Field

The invention relates to the technical field of three-dimensional images, in particular to a three-dimensional image acquisition method and a three-dimensional image acquisition device.

Background

With continuing improvements in image processing hardware and the continuing development of machine vision, image processing, and computer technology, vision measurement technology has advanced correspondingly. Vision measurement uses images as the carrier for detecting and transmitting information: useful information is extracted from two-dimensional images by image processing algorithms, from which the three-dimensional geometry and spatial position of the measured object are obtained.

Photometric stereo uses one camera and several light sources of equal luminous intensity. With the camera and the photographed object held still, a group of images of the object is captured under different illumination conditions by changing the direction of the light source; the surface normal directions of the object are then calculated from these images, and the three-dimensional shape of the object's surface is solved from the calculated normals. However, images obtained with acquisition methods based purely on photometric stereo measurement suffer from overall deformation.

Disclosure of Invention

The present invention has been made in view of the above problems. It provides a three-dimensional image acquisition method and a three-dimensional image acquisition device that combine photometric stereo vision with multi-view vision, overcoming the overall-deviation defect of the former and the limited precision of the latter, and thereby achieving a fast, high-precision three-dimensional acquisition effect.

According to an aspect of the present invention, there is provided a three-dimensional image acquisition method including: receiving images of the same object respectively acquired by a plurality of image sensors under a plurality of differently distributed groups of light sources; performing image matching on the images acquired by different image sensors under the same group of light sources; performing photometric normal calculation, based on the result of the image matching, on the images acquired by the same image sensor under different groups of light sources; and obtaining a three-dimensional image of the object based on the results of the image matching and the photometric normal calculation.

In one embodiment of the invention, the plurality of sets of light sources comprises two sets of light sources and the plurality of image sensors comprises two image sensors.

In one embodiment of the invention, a first of the two sets of light sources comprises white light emitting means distributed between the two image sensors and a second of the two sets of light sources comprises red light emitting means, green light emitting means and blue light emitting means divergently distributed around the two image sensors.

In an embodiment of the present invention, the three-dimensional image acquisition method further includes: computing, from the received images, background-light-removed images for use in the image matching and the photometric normal calculation.

In an embodiment of the present invention, the object is a human face, and the three-dimensional image acquisition method further includes: performing face detection on the received images to obtain a face region image, so that a three-dimensional image of the face is obtained from the results of the image matching and the photometric normal calculation.

In an embodiment of the present invention, the performing of the photometric normal calculation on images acquired by the same image sensor under different groups of light sources based on the result of the image matching includes: calculating a preliminary depth value and preliminary three-dimensional coordinates of a pixel in an image acquired by the same sensor under one group of light sources based on the result of the image matching, and calculating the photometric normal corresponding to the pixel from the preliminary depth value and the preliminary three-dimensional coordinates of the pixel.

According to another aspect of the present invention, there is provided a three-dimensional image capturing apparatus including: a receiving module for receiving images of the same object respectively acquired by a plurality of image sensors under a plurality of differently distributed groups of light sources; an image matching module for performing image matching on the images acquired by different image sensors under the same group of light sources; a photometric normal calculation module for performing photometric normal calculation, based on the result of the image matching, on the images acquired by the same image sensor under different groups of light sources; and a three-dimensional image obtaining module for obtaining a three-dimensional image of the object based on the results of the image matching and the photometric normal calculation.

In one embodiment of the invention, the plurality of sets of light sources comprises two sets of light sources and the plurality of image sensors comprises two image sensors.

In one embodiment of the invention, a first of the two sets of light sources comprises white light emitting means distributed between the two image sensors and a second of the two sets of light sources comprises red light emitting means, green light emitting means and blue light emitting means divergently distributed around the two image sensors.

In one embodiment of the present invention, the three-dimensional image capturing apparatus further includes: a background light removal module for computing, from the images received by the receiving module, background-light-removed images for the image matching and the photometric normal calculation.

In one embodiment of the present invention, the object is a human face, and the three-dimensional image capturing apparatus further includes: a face detection module for performing face detection on the images received by the receiving module to obtain a face region image, so that a three-dimensional image of the face is obtained from the results of the image matching and the photometric normal calculation.

In one embodiment of the present invention, the photometric normal calculation module further calculates a preliminary depth value and preliminary three-dimensional coordinates of a pixel in an image acquired by the same sensor under one group of light sources based on the result of the image matching, and calculates the photometric normal corresponding to the pixel from the preliminary depth value and the preliminary three-dimensional coordinates of the pixel.

According to the three-dimensional image acquisition method and device provided by the embodiments of the present invention, based on images acquired by two or more image sensors under two or more light source conditions, three-dimensional image acquisition with high speed, high precision, and low cost can be realized by combining the photometric stereo vision method with the multi-view vision method.

Drawings

The above and other objects, features and advantages of the present invention will become more apparent by describing in more detail embodiments of the present invention with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings, like reference numbers generally represent like parts or steps.

FIG. 1 is a schematic block diagram of an example electronic device for implementing a three-dimensional image acquisition method and apparatus in accordance with embodiments of the present invention;

FIG. 2 is a schematic flow chart diagram of a three-dimensional image acquisition method according to an embodiment of the invention;

FIG. 3 is an exemplary arrangement of image sensors and light source groups during acquisition of an image upon which a three-dimensional image acquisition method, apparatus, system, and storage medium are based, in accordance with embodiments of the present invention;

FIG. 4 is a schematic block diagram of a three-dimensional image acquisition device according to an embodiment of the present invention; and

FIG. 5 is a schematic block diagram of a three-dimensional image acquisition system according to an embodiment of the present invention.

Detailed Description

In order to make the objects, technical solutions and advantages of the present invention more apparent, exemplary embodiments according to the present invention will be described in detail below with reference to the accompanying drawings. It is to be understood that the described embodiments are merely a subset of embodiments of the invention and not all embodiments of the invention, with the understanding that the invention is not limited to the example embodiments described herein. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the invention described herein without inventive step, shall fall within the scope of protection of the invention.

First, an exemplary electronic device 100 for implementing the three-dimensional image capturing method and apparatus according to the embodiment of the present invention is described with reference to fig. 1.

As shown in FIG. 1, electronic device 100 includes one or more processors 102, one or more memory devices 104, an input device 106, an output device 108, and an image sensor 110, which are interconnected via a bus system 112 and/or other form of connection mechanism (not shown). It should be noted that the components and structure of the electronic device 100 shown in fig. 1 are exemplary only, and not limiting, and the electronic device may have other components and structures as desired.

The processor 102 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 100 to perform desired functions.

The storage device 104 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 102 to implement the client-side functionality (implemented by the processor) and/or other desired functionality of the embodiments of the invention described below. Various applications and various data, such as data used and/or generated by the applications, may also be stored in the computer-readable storage medium.

The input device 106 may be a device used by a user to input instructions and may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like.

The output device 108 may output various information (e.g., images or sounds) to the outside (e.g., to a user), and may include one or more of a display, a speaker, and the like.

The image sensor 110 may take images (e.g., photographs, videos, etc.) desired by the user and store the taken images in the storage device 104 for use by other components.

By way of example, an electronic device for implementing the three-dimensional image capturing method and apparatus according to embodiments of the present invention may be implemented as a smart phone, a tablet computer, or the like.

Next, a three-dimensional image acquisition method 200 according to an embodiment of the present invention will be described with reference to fig. 2.

In step S210, images respectively acquired by a plurality of image sensors under a plurality of sets of light sources distributed differently for the same object are received.

In one embodiment, the received images may come from two or more image sensors (e.g., a binocular or multi-view camera) that respectively capture images of the same object under two or more differently distributed groups of light sources. For example, only two image sensors may be used to capture images under two differently distributed groups of light sources.

In one example, the first group of light sources may include red, green, and blue light emitting devices distributed between two image sensors and relatively close to each other, and the second group of light sources may include red, green, and blue light emitting devices divergently distributed around the two image sensors.

In another example, the first group of light sources may comprise white light emitting devices distributed between the two image sensors, and the second group may comprise red, green, and blue light emitting devices divergently distributed around the two image sensors, as shown for example in fig. 3. In fig. 3, the image sensors are shown as cameras (camera 1 and camera 2), and the light emitting devices are shown as light emitting diodes of different colors (LED_white, LED_red, LED_green, and LED_blue). It should be noted that the arrangement of image sensors and light source groups shown in fig. 3 is only an example; other arrangements may be adopted to acquire images as needed.

In this example, the two cameras may capture a first set of images I1_1 (from camera 1) and I1_2 (from camera 2) while only the first group of LEDs is on, and a second set of images I2_1 (from camera 1) and I2_2 (from camera 2) while only the second group of LEDs is on.

Further, in one example, the plurality of image sensors may also capture images with all of the above light source groups off, as background-light images used to remove the background light from the images captured with the light source groups on. For example, in the above example, the two cameras may also capture a third set of images I3_1 (from camera 1) and I3_2 (from camera 2) with both the first and second groups of LEDs off. The background-light-removed images A1, A2, B1, and B2 can then be calculated as:

A1 = I1_1 - I3_1,

A2 = I1_2 - I3_2,

B1 = I2_1 - I3_1,

B2 = I2_2 - I3_2.

If the background light is dark enough, the capture of the background-light images can be omitted, and the images acquired by the image sensors under the light source groups can be used directly for subsequent processing.
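The background subtraction above can be sketched as follows. This is a minimal sketch assuming each frame is available as a numeric array; the clipping at zero is an added assumption, to avoid negative values from sensor noise, and the frame names follow the example above:

```python
import numpy as np

def remove_background(lit_frames, dark_frame):
    """Subtract the ambient-light-only frame from each lit frame.

    lit_frames: dict mapping a label to an HxWx3 float array captured
    with one LED group on; dark_frame: same shape, all LEDs off.
    """
    return {label: np.clip(frame - dark_frame, 0.0, None)
            for label, frame in lit_frames.items()}

# Synthetic data standing in for I1_1, I2_1 (camera 1) and I3_1:
I3_1 = np.full((4, 4, 3), 10.0)   # background-only frame
I1_1 = I3_1 + 100.0               # frame under the white LED group
I2_1 = I3_1 + 50.0                # frame under the RGB LED group
clean = remove_background({"A1": I1_1, "B1": I2_1}, I3_1)
# A1 = I1_1 - I3_1 and B1 = I2_1 - I3_1, as in the equations above
```

The same call handles camera 2's frames (A2 = I1_2 - I3_2, B2 = I2_2 - I3_2) with its own dark frame.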

In step S220, image matching is performed on images acquired by different image sensors under the same set of light sources.

Following the above example, image matching may be performed on images captured by cameras 1 and 2 under a first set of light sources (e.g., background light removed images a1 and a2), or on images captured by cameras 1 and 2 under a second set of light sources (e.g., background light removed images B1 and B2). The matching of the images captured by camera 1 and camera 2 under the first set of light sources is described as an example below.

For example, let A1(x, y) denote the pixel value of image A1 at (x, y), and let A1(x, y)_r, A1(x, y)_g, A1(x, y)_b denote its red, green, and blue components in turn. For each pixel of image A1 located at (x, y), an image block (patch) within a small L x L region centered on that pixel may be extracted, where L is the side length of the extracted patch region. The value of the corresponding color at the center is subtracted from each of the other pixels in the patch, and the results are arranged as a vector F1(x, y) of dimension L*L*3.

Similarly, let A2(x, y2) denote the pixel value of image A2 at (x, y2); image A2 may be processed in the same way to yield a vector F2(x, y2). The match error of (x, y) against (x, y2), denoted C(x, y, y2), can be defined as a distance between the two patch vectors, for example:

C(x, y, y2) = ||F1(x, y) - F2(x, y2)||^2

For each (x, y), the y2 that minimizes the above error is computed and denoted ym; the preliminary match result for the pixel at (x, y) in image A1 is then taken to be:

D(x, y) = ym - 0.5*(C(x, y, ym+1) - C(x, y, ym-1)) / (C(x, y, ym+1) + C(x, y, ym-1) - 2*C(x, y, ym))

The preliminary depth value corresponding to the pixel at (x, y) is therefore Z(x, y) = Z0 / (D(x, y) - y), where Z0 is a parameter related to the camera distance, lens focal length, pixel size, etc.
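The matching procedure can be sketched in code. This is an illustrative reconstruction, not the patent's reference implementation: the match error C is assumed here to be the squared Euclidean distance between patch vectors, and L and Z0 are arbitrary example values:

```python
import numpy as np

L = 3  # patch side length (illustrative; the text leaves L unspecified)

def patch_vector(img, x, y):
    """Flatten an LxLx3 patch centered at (x, y), with the center pixel's
    per-channel values subtracted from every pixel, as F1/F2 above."""
    h = L // 2
    patch = img[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    return (patch - img[y, x].astype(float)).ravel()

def match_and_depth(A1, A2, x, y, Z0=100.0):
    """Preliminary match D(x, y) and depth Z(x, y) = Z0 / (D(x, y) - y).

    Assumes the minimizing ym lies strictly inside the scanned range so
    that C(ym-1) and C(ym+1) exist for the sub-pixel refinement.
    """
    h = L // 2
    f1 = patch_vector(A1, x, y)
    C = {y2: float(np.sum((f1 - patch_vector(A2, x, y2)) ** 2))
         for y2 in range(h + 1, A2.shape[0] - h - 1)}
    ym = min(C, key=C.get)
    num = C[ym + 1] - C[ym - 1]
    den = C[ym + 1] + C[ym - 1] - 2.0 * C[ym]
    D = ym - 0.5 * num / den if den != 0 else float(ym)
    return D, Z0 / (D - y)

# Toy check: A2 is A1 shifted down by 2 rows, so the pixel at (3, 5)
# in A1 should match near (3, 7) in A2.
rng = np.random.default_rng(0)
A1 = rng.random((12, 7, 3))
A2 = np.roll(A1, 2, axis=0)
D, Z = match_and_depth(A1, A2, 3, 5)
```

With an exact 2-row shift, ym = 7 and the parabolic refinement keeps D within half a pixel of it, giving a depth near Z0/2.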

It should be appreciated that the image matching process described above is merely exemplary. According to actual needs, other methods can be adopted to realize image matching. It should also be understood that the present invention is not limited by the image matching method specifically adopted, and that the image matching method, whether existing or developed in the future, can be applied to the three-dimensional image acquisition method according to the embodiment of the present invention, and is also included in the scope of the present invention.

In step S230, photometric normal calculation is performed on images acquired by the same image sensor under different sets of light sources based on the result of the image matching.

Following the above example, photometric normal calculations may be performed on images captured by camera 1 under the first and second sets of light sources, respectively (e.g., background light removed images a1 and B1), or on images captured by camera 2 under the first and second sets of light sources, respectively (e.g., background light removed images a2 and B2). In one embodiment, a preliminary depth value and a preliminary three-dimensional coordinate of a pixel in an image acquired by the same sensor (camera 1 or camera 2) under a set of light sources may be calculated based on a result of the image matching, and a photometric normal corresponding to the pixel may be calculated from the preliminary depth value and the preliminary three-dimensional coordinate of the pixel.

The photometric normal calculation is described below taking as an example the images respectively acquired by camera 1 under the first and second groups of light sources.

For a pixel located at (x, y) in the image captured by camera 1 under the first group of light sources (e.g., the background-light-removed image A1), a preliminary depth value and preliminary three-dimensional coordinates of the pixel are obtained from the image matching result calculated in step S220. If Z = Z(x, y) is the preliminary depth value, the corresponding preliminary position in three-dimensional space is u = (x/f * Z, y/f * Z, Z), where f is a parameter related to the lens focal length, pixel size, and the like. The photometric normal corresponding to the pixel can then be calculated from the preliminary depth value and preliminary three-dimensional coordinates of the pixel as follows:

Let v0, vr, vg, vb be the three-dimensional positions of the white, red, green, and blue LEDs; let Lr0, Lg0, Lb0 be the red, green, and blue components of the white LED; and let Lr, Lg, Lb be the relative brightness values of the red, green, and blue LEDs. A least-squares solution of a system of equations relating these quantities is then computed; one formulation that cancels the per-channel surface albedo is, for each channel c in {r, g, b}:

B1(x, y)_c * L_c0 * (n . (v0 - u)/|v0 - u|) = A1(x, y)_c * L_c * (n . (v_c - u)/|v_c - u|)

where n = (nx, ny, nz), satisfying nx^2 + ny^2 + nz^2 = 1, is the normal to be solved.
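A sketch of the per-pixel normal solve under the assumed ratio formulation above (which cancels the per-channel albedo). The LED positions, brightness values, and albedo used in the synthetic check are arbitrary example values, not taken from the patent:

```python
import numpy as np

def photometric_normal(a1_rgb, b1_rgb, u, v0, v_rgb, L0_rgb, L_rgb):
    """Solve for the unit normal n at a pixel with preliminary 3-D position u.

    Assumed model (a ratio form cancelling per-channel albedo):
        B1_c * L_c0 * (d0 . n) = A1_c * L_c * (d_c . n),  c in {r, g, b}
    with d0 = (v0 - u)/|v0 - u| and d_c = (v_c - u)/|v_c - u|.
    Each channel gives one homogeneous linear equation in n; with |n| = 1,
    the least-squares solution is the right singular vector of the 3x3
    system matrix with the smallest singular value.
    """
    d0 = (v0 - u) / np.linalg.norm(v0 - u)
    rows = []
    for c in range(3):
        dc = (v_rgb[c] - u) / np.linalg.norm(v_rgb[c] - u)
        rows.append(b1_rgb[c] * L0_rgb[c] * d0 - a1_rgb[c] * L_rgb[c] * dc)
    _, _, vt = np.linalg.svd(np.array(rows))
    n = vt[-1]
    return n if d0 @ n >= 0 else -n  # orient so the surface faces the white LED

# Synthetic check: generate intensities from a known normal, then recover it.
n_true = np.array([0.3, 0.1, -1.0]); n_true /= np.linalg.norm(n_true)
u = np.array([0.0, 0.0, 50.0])                 # surface point
v0 = np.array([0.0, 0.0, 0.0])                 # white LED position
v_rgb = [np.array([20.0, 0.0, 0.0]),           # red LED
         np.array([-10.0, 17.0, 0.0]),         # green LED
         np.array([-10.0, -17.0, 0.0])]        # blue LED
L0_rgb = np.array([1.0, 0.9, 0.8])             # white LED RGB components
L_rgb = np.array([1.2, 1.1, 1.0])              # colored LED brightnesses
albedo = np.array([0.7, 0.5, 0.6])
d0 = (v0 - u) / np.linalg.norm(v0 - u)
dirs = [(v - u) / np.linalg.norm(v - u) for v in v_rgb]
a1 = albedo * L0_rgb * (d0 @ n_true)                               # white-lit pixel
b1 = np.array([albedo[c] * L_rgb[c] * (dirs[c] @ n_true)
               for c in range(3)])                                 # RGB-lit pixel
n_rec = photometric_normal(a1, b1, u, v0, v_rgb, L0_rgb, L_rgb)
```

Because the synthetic data satisfies the assumed model exactly, the recovered normal matches n_true to numerical precision.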

It should be understood that the photometric normal calculation process described above is merely exemplary; other methods may be adopted as actually required. It should also be understood that the present invention is not limited by the specific photometric calculation method employed: photometric methods, whether existing or developed in the future, can be applied to the three-dimensional image acquisition method according to the embodiments of the present invention and are also included in the scope of the present invention.

In step S240, a three-dimensional image of the object is obtained based on the result of the image matching and the result of the photometric normal calculation.

Following the above example, denote the normal solved for each pixel by N(x, y) = (Nx(x, y), Ny(x, y), Nz(x, y)). The final depth image R can then be found by solving the following least-squares problem:

argmin_R Σ_{x,y} [ w*(R(x,y) - Z(x,y))^2
                 + (Nz(x,y)*(R(x,y) - R(x+1,y)) - Nx(x,y))^2
                 + (Nz(x,y)*(R(x,y) - R(x,y+1)) - Ny(x,y))^2 ]

For a pixel located at (x, y), the corresponding three-dimensional point is (x/f * R(x, y), y/f * R(x, y), R(x, y)).

Based on the above calculations, a three-dimensional point can be derived for each pixel in the image, and aggregating these points yields a point cloud representation of the three-dimensional image of the captured object.
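On a small grid, the least-squares problem above can be assembled densely and solved directly. This is a sketch only: real image sizes would call for a sparse solver, and the weight w = 0.1 is an arbitrary example value:

```python
import numpy as np

def integrate_depth(Z, N, w=0.1):
    """Solve the least-squares problem above for the final depth image R.

    Z: HxW preliminary depths from matching; N: HxWx3 normals
    (Nx, Ny, Nz) from the photometric calculation; w weights the data
    term against the two normal-consistency terms.
    """
    H, W = Z.shape
    idx = lambda x, y: y * W + x
    rows, rhs = [], []
    sw = np.sqrt(w)
    for y in range(H):
        for x in range(W):
            r = np.zeros(H * W)          # data term: sqrt(w)*(R - Z)
            r[idx(x, y)] = sw
            rows.append(r); rhs.append(sw * Z[y, x])
            nx, ny, nz = N[y, x]
            if x + 1 < W:                # Nz*(R(x,y) - R(x+1,y)) = Nx
                r = np.zeros(H * W)
                r[idx(x, y)] = nz; r[idx(x + 1, y)] = -nz
                rows.append(r); rhs.append(nx)
            if y + 1 < H:                # Nz*(R(x,y) - R(x,y+1)) = Ny
                r = np.zeros(H * W)
                r[idx(x, y)] = nz; r[idx(x, y + 1)] = -nz
                rows.append(r); rhs.append(ny)
    R, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return R.reshape(H, W)

def point_cloud(R, f=1.0):
    """Back-project each pixel to (x/f * R, y/f * R, R), as in the text."""
    H, W = R.shape
    ys, xs = np.mgrid[0:H, 0:W]
    return np.dstack([xs / f * R, ys / f * R, R]).reshape(-1, 3)

# Sanity check: a flat plane at depth 5 with normals (0, 0, 1) should
# integrate back to a constant depth of 5.
Z = np.full((3, 4), 5.0)
N = np.zeros((3, 4, 3)); N[..., 2] = 1.0
R = integrate_depth(Z, N)
cloud = point_cloud(R)
```

For production-scale images, the same normal equations would be assembled with scipy.sparse and solved with an iterative solver such as lsqr.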

Based on the above description, the three-dimensional image capturing method according to the embodiment of the present invention operates on images captured by two or more image sensors under two or more light source conditions; by combining the photometric stereo vision method with the multi-view vision method, it achieves three-dimensional image acquisition with fast capture speed (e.g., only two or three consecutive frames), high precision, and low cost (e.g., only an array of two or more cameras plus LEDs is needed).

Illustratively, the three-dimensional image acquisition method according to the embodiments of the present invention may be implemented in a device, apparatus or system having a memory and a processor.

The three-dimensional image acquisition method according to the embodiment of the invention can be deployed at a personal terminal such as a smart phone, a tablet computer, a personal computer, and the like. Alternatively, the three-dimensional image acquisition method according to the embodiment of the present invention may also be deployed at a server side (or a cloud side). Alternatively, the three-dimensional image acquisition method according to the embodiment of the invention may also be distributively deployed at a server side (or a cloud side) and a personal terminal side.

In one embodiment, the three-dimensional image acquisition method 200 may be used for three-dimensional face image acquisition; that is, the captured object in method 200 may be a human face. In that case, face detection (for example, based on a convolutional neural network) may be performed in advance on the images received in step S210 to obtain a face region image for the subsequent image matching, photometric normal calculation, and three-dimensional image calculation. In another embodiment, the face detection is performed afterwards: after the three-dimensional point coordinates of each pixel are obtained in step S240 from the results of the image matching and the photometric normal calculation, face detection may be applied to the received images to obtain the set of pixels belonging to the face region, and the three-dimensional coordinates corresponding to those pixels are aggregated into a point cloud representation of the face, from which the three-dimensional face image is further obtained.
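A minimal sketch of the aggregation step just described: given the per-pixel three-dimensional points from step S240 and a face bounding box from any detector, the face point cloud is simply the points whose pixels fall inside the box. The detector itself is not sketched, and the box used below is a hypothetical example value:

```python
import numpy as np

def face_point_cloud(points, box):
    """Select the 3-D points whose pixels fall inside a face bounding box.

    points: HxWx3 array of per-pixel 3-D coordinates (from step S240);
    box: (x0, y0, x1, y1) pixel bounds from any face detector (e.g., a
    CNN-based one, as the text suggests).
    """
    x0, y0, x1, y1 = box
    return points[y0:y1, x0:x1].reshape(-1, 3)

# Toy example: a 6x8 "image" of 3-D points and a hypothetical 4x3-pixel
# face box returned by a detector.
H, W = 6, 8
pts = np.arange(H * W * 3, dtype=float).reshape(H, W, 3)
face = face_point_cloud(pts, (2, 1, 6, 4))
```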

Existing high-precision face acquisition systems are either costly and bulky (e.g., laser scanning) or slow to acquire and dependent on the cooperation of the person being captured. The accuracy of camera-array-based methods is limited by baseline length and camera resolution and is difficult to improve further, while methods based on photometric stereo suffer from overall deformation. The three-dimensional face image acquisition method provided by embodiments of the present invention improves the accuracy and speed of face acquisition while reducing cost.

Fig. 4 shows a schematic block diagram of a three-dimensional image acquisition apparatus 400 according to an embodiment of the present invention.

As shown in fig. 4, the three-dimensional image capturing apparatus 400 according to the embodiment of the present invention includes a receiving module 410, an image matching module 420, a photometric normal calculation module 430, and a three-dimensional image obtaining module 440. The respective modules may respectively perform the steps/functions of the three-dimensional image acquisition method described above in connection with fig. 2. Only the main functions of the modules of the three-dimensional image capturing apparatus 400 are described below; details already described above are omitted.

The receiving module 410 is configured to receive images of the same object acquired by multiple image sensors under multiple differently distributed groups of light sources. The image matching module 420 is used for performing image matching on the images acquired by different image sensors under the same group of light sources. The photometric normal calculation module 430 is used for performing photometric normal calculation, based on the result of the image matching, on the images acquired by the same image sensor under different groups of light sources. The three-dimensional image obtaining module 440 is used for obtaining a three-dimensional image of the object based on the results of the image matching and the photometric normal calculation. The receiving module 410, the image matching module 420, the photometric normal calculation module 430, and the three-dimensional image obtaining module 440 may all be implemented by the processor 102 in the electronic device shown in fig. 1 executing program instructions stored in the storage device 104.

According to the embodiment of the present invention, the images received by the receiving module 410 may come from two or more image sensors (e.g., a binocular or multi-view camera) that respectively capture images of the same object under two or more differently distributed groups of light sources. For example, only two image sensors may be used to capture images under two differently distributed groups of light sources.

In one example, the first group of light sources may include red, green, and blue light emitting devices distributed between two image sensors and relatively close to each other, and the second group of light sources may include red, green, and blue light emitting devices divergently distributed around the two image sensors.

In another example, the first group of light sources may comprise white light emitting devices distributed between the two image sensors, and the second group may comprise red, green, and blue light emitting devices divergently distributed around the two image sensors, as shown for example in fig. 3. In fig. 3, the image sensors are shown as cameras (camera 1 and camera 2), and the light emitting devices are shown as light emitting diodes of different colors (LED_white, LED_red, LED_green, and LED_blue). It should be noted that the arrangement of image sensors and light source groups shown in fig. 3 is only an example; other arrangements may be adopted to acquire images as needed.

In this example, the two cameras may capture a first set of images I1_1 (from camera 1) and I1_2 (from camera 2) while only the first group of LEDs is on, and a second set of images I2_1 (from camera 1) and I2_2 (from camera 2) while only the second group of LEDs is on.

Further, in one example, the plurality of image sensors may also capture images with all of the above light source groups off, as background-light images used to remove the background light from the images captured with the light source groups on. In this case, the three-dimensional image capturing apparatus 400 may further include a background light removal module (not shown in fig. 4) for computing, from the images received by the receiving module 410, background-light-removed images for the subsequent image matching and photometric normal calculation. For example, in the above example, the two cameras may also capture a third set of images I3_1 (from camera 1) and I3_2 (from camera 2) with both the first and second groups of LEDs off. The background-light-removed images A1, A2, B1, and B2 can then be calculated as:

A1=I1_1-I3_1,

A2=I1_2-I3_2,

B1=I2_1-I3_1,

B2=I2_2-I3_2.

If the background light is dark enough, the collection of the background light images can be omitted, and the images captured by the image sensors under the respective sets of light sources can be used directly for subsequent processing.
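As a minimal illustration, the background-light subtraction A1 = I1_1 - I3_1 described above can be sketched with numpy. The frame sizes, pixel values, and the 8-bit clamping policy below are assumptions for the demo, not details specified by the patent:

```python
import numpy as np

def remove_background(img_lit, img_ambient):
    """Subtract the ambient (background-light) frame from a frame captured
    with one light-source group on, clamping negative values at zero."""
    diff = img_lit.astype(np.int32) - img_ambient.astype(np.int32)
    return np.clip(diff, 0, 255).astype(np.uint8)

# Hypothetical 8-bit frames: I1_1 (first LED group on) and I3_1 (all LEDs off)
I1_1 = np.full((4, 4, 3), 120, dtype=np.uint8)
I3_1 = np.full((4, 4, 3), 20, dtype=np.uint8)
A1 = remove_background(I1_1, I3_1)  # A1 = I1_1 - I3_1
```

The same subtraction would be applied to obtain A2, B1, and B2.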

According to the embodiment of the present invention, the image matching module 420 may perform image matching on the images captured by the cameras 1 and 2 under the first set of light sources (for example, the images a1 and a2 with the background light removed), or perform image matching on the images captured by the cameras 1 and 2 under the second set of light sources (for example, the images B1 and B2 with the background light removed). The matching of the images captured by camera 1 and camera 2 under the first set of light sources is described as an example below.

For example, the pixel value of image A1 at (x, y) is denoted A1(x, y), and A1(x, y)_r, A1(x, y)_g, A1(x, y)_b denote its red, green, and blue components in turn. For each pixel located at (x, y) in image A1, an image block (patch) within an L x L neighborhood centered on that pixel may be extracted, where L is the side length of the extracted image block region. The value of the corresponding color at the center may be subtracted from each of the other pixels in the image block, and the results arranged as a vector F1(x, y) of dimension L x L x 3.
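The patch-to-vector step above can be sketched as follows. The function name, the default L, the float conversion, and the omission of image-border handling are illustrative assumptions:

```python
import numpy as np

def patch_feature(img, x, y, L=5):
    """Extract the L x L block centered at (x, y), subtract the center
    pixel's color from every pixel in the block, and flatten the result
    into a vector of dimension L*L*3 (border pixels are not handled here)."""
    r = L // 2
    block = img[y - r:y + r + 1, x - r:x + r + 1, :].astype(np.float64)
    centered = block - img[y, x, :].astype(np.float64)  # subtract center color
    return centered.reshape(-1)

img = np.random.default_rng(0).integers(0, 256, size=(32, 32, 3))
F1 = patch_feature(img, 10, 12, L=5)  # vector of length 5*5*3 = 75
```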

Similarly, the pixel value of image A2 at (x, y2) may be denoted A2(x, y2), and image A2 may be processed in the same manner to yield a vector F2(x, y2). The match error of (x, y) against (x, y2) can then be defined as C(x, y, y2) = ||F1(x, y) - F2(x, y2)||^2.

For each (x, y), the y2 that minimizes the above match error is calculated and denoted ym; the preliminary match result for the pixel at (x, y) in image A1 is then taken to be:

D(x, y) = ym - 0.5*(C(x, y, ym+1) - C(x, y, ym-1)) / (C(x, y, ym+1) + C(x, y, ym-1) - 2*C(x, y, ym))

Therefore, the preliminary depth value corresponding to the pixel at (x, y) is Z(x, y) = Z0/(D(x, y) - y), where Z0 is a parameter related to the camera spacing, lens focal length, pixel size, etc.
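The sub-pixel refinement and depth conversion above can be sketched as follows. The cost curve, the guard against a non-convex neighborhood, and the value of Z0 are assumptions for the demo:

```python
import numpy as np

def subpixel_match(C_row):
    """Find the integer minimiser ym of the cost curve and refine it with
    the parabolic formula
      D = ym - 0.5*(C[ym+1] - C[ym-1]) / (C[ym+1] + C[ym-1] - 2*C[ym])."""
    ym = int(np.argmin(C_row))
    if 0 < ym < len(C_row) - 1:
        denom = C_row[ym + 1] + C_row[ym - 1] - 2.0 * C_row[ym]
        if denom > 0:  # only refine when the neighborhood is convex
            return ym - 0.5 * (C_row[ym + 1] - C_row[ym - 1]) / denom
    return float(ym)

def preliminary_depth(D, y, Z0=1000.0):
    """Z(x, y) = Z0 / (D(x, y) - y); Z0 is a hypothetical calibration value."""
    return Z0 / (D - y)

# Hypothetical cost curve whose true minimum lies between y2 = 2 and y2 = 3
C = np.array([9.0, 4.0, 1.0, 1.0, 4.0, 9.0])
D = subpixel_match(C)
Z = preliminary_depth(D, y=0)
```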

According to an embodiment of the present invention, the photometric normal calculation module 430 may perform photometric normal calculation on the images captured by camera 1 under the first and second sets of light sources (for example, the background-light-removed images A1 and B1), or on the images captured by camera 2 under the first and second sets of light sources (for example, the background-light-removed images A2 and B2). For example, the photometric normal calculation module 430 may calculate, based on the result of the image matching, a preliminary depth value and preliminary three-dimensional coordinates for a pixel in an image captured by the same sensor (camera 1 or camera 2) under one set of light sources, and calculate the photometric normal corresponding to the pixel from its preliminary depth value and preliminary three-dimensional coordinates. The photometric normal calculation for the images captured by camera 1 under the first and second sets of light sources is described below as an example.

For a pixel located at (x, y) in an image captured by camera 1 under the first set of light sources (e.g., the background-light-removed image A1), a preliminary depth value and preliminary three-dimensional coordinates of the pixel are calculated from the image matching result, where Z = Z(x, y) is the preliminary depth value, and the corresponding preliminary position in three-dimensional space is u = (x/f*Z, y/f*Z, Z), where f is a parameter related to the lens focal length, pixel size, and the like. The photometric normal corresponding to the pixel can then be calculated from its preliminary depth value and preliminary three-dimensional coordinates as follows:

Let v0, vr, vg, and vb denote the three-dimensional positions of the white, red, green, and blue LEDs, let Lr0, Lg0, and Lb0 denote the red, green, and blue components of the white LED, and let Lr, Lg, and Lb denote the relative brightness values of the red, green, and blue LEDs. A least-squares solution of the shading equations relating the observed pixel values to these LED positions and brightness values is then computed, where n = (nx, ny, nz), satisfying nx^2 + ny^2 + nz^2 = 1, is the normal to be solved.
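The normal-solving step can be illustrated with a common Lambertian least-squares formulation. The patent's exact shading equations are given in a figure not reproduced here, so the illumination matrix, LED directions, and intensities below are hypothetical stand-ins:

```python
import numpy as np

# Each row of S is a (scaled) illumination direction for one LED; I holds the
# observed intensities at the pixel under that LED. Under a Lambertian model
# I ~= S @ n, so the normal is the least-squares solution of S n = I,
# renormalised so that nx^2 + ny^2 + nz^2 = 1.
S = np.array([
    [0.0, 0.0, 1.0],    # white LED, roughly frontal (hypothetical)
    [0.8, 0.0, 0.6],    # red LED off to one side (hypothetical)
    [0.0, 0.8, 0.6],    # green LED (hypothetical)
    [-0.8, 0.0, 0.6],   # blue LED (hypothetical)
])
n_true = np.array([0.0, 0.0, 1.0])
I = S @ n_true                      # noiseless synthetic observations

n, residuals, rank, sv = np.linalg.lstsq(S, I, rcond=None)
n = n / np.linalg.norm(n)           # enforce the unit-norm constraint
```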

According to the embodiment of the present invention, the three-dimensional image obtaining module 440 may denote the photometric normal obtained above by N(x, y), and solve the following least squares problem to obtain the final depth image R:

argmin_R ( w*∑_{x,y} (R(x,y) - Z(x,y))^2 + (Nz(x,y)*(R(x,y) - R(x+1,y)) - Nx(x,y))^2 + (Nz(x,y)*(R(x,y) - R(x,y+1)) - Ny(x,y))^2 )

For a pixel located at (x, y), the corresponding three-dimensional point is (x/f*R(x, y), y/f*R(x, y), R(x, y)).
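The depth-fusion least squares problem above can be sketched by stacking every term into one linear system. The weight value, grid size, and dense assembly are assumptions for the demo; a real implementation would use a sparse solver:

```python
import numpy as np

def fuse_depth(Z, N, w=0.1):
    """Solve  argmin_R  w * sum (R - Z)^2
                      + sum (Nz*(R[x,y] - R[x+1,y]) - Nx)^2
                      + sum (Nz*(R[x,y] - R[x,y+1]) - Ny)^2
    as a dense linear least-squares system. Z is an (H, W) preliminary
    depth map; N is (H, W, 3) photometric normals stored as (Nx, Ny, Nz)."""
    H, W = Z.shape
    idx = np.arange(H * W).reshape(H, W)
    rows, rhs = [], []
    sw = np.sqrt(w)
    for yy in range(H):
        for xx in range(W):
            r = np.zeros(H * W)
            r[idx[yy, xx]] = sw                 # data term w*(R - Z)^2
            rows.append(r)
            rhs.append(sw * Z[yy, xx])
            Nx, Ny, Nz = N[yy, xx]
            if xx + 1 < W:                      # term in R(x,y) - R(x+1,y)
                r = np.zeros(H * W)
                r[idx[yy, xx]] = Nz
                r[idx[yy, xx + 1]] = -Nz
                rows.append(r)
                rhs.append(Nx)
            if yy + 1 < H:                      # term in R(x,y) - R(x,y+1)
                r = np.zeros(H * W)
                r[idx[yy, xx]] = Nz
                r[idx[yy + 1, xx]] = -Nz
                rows.append(r)
                rhs.append(Ny)
    R, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return R.reshape(H, W)

# Sanity check: frontal normals (Nx = Ny = 0, Nz = 1) and a constant
# preliminary depth should be reproduced exactly.
Z = np.full((3, 3), 5.0)
N = np.zeros((3, 3, 3))
N[..., 2] = 1.0
R = fuse_depth(Z, N)
```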

Based on the above calculations, the three-dimensional image obtaining module 440 may obtain the three-dimensional point for each pixel in the image, and aggregating these three-dimensional points yields a point cloud representation of the three-dimensional image of the captured object.

Based on the above description, the three-dimensional image capturing device according to the embodiment of the present invention combines the photometric stereo method and the multi-view stereo method on images captured by two or more image sensors under two or more sets of light sources, and can thereby achieve three-dimensional image capture with fast capture speed (e.g., only two or three consecutive frames), high precision, and low cost (e.g., only an array of two or more cameras plus LEDs).

According to an embodiment of the present invention, the three-dimensional image capturing apparatus 400 may be used to capture a three-dimensional face image, that is, the captured object may be a human face. When the three-dimensional image capturing apparatus 400 is used for three-dimensional face image capture, it may further include a face detection module (e.g., a face detector based on a convolutional neural network, not shown in fig. 4), which may first perform face detection on the images received by the receiving module 410 to obtain face region images for subsequent image matching, photometric normal calculation, and three-dimensional image calculation. In another embodiment, the face detection module may instead perform face detection on the received images and obtain the three-dimensional image of the face from the results of the image matching and photometric normal calculation. For example, after the three-dimensional point coordinates of each pixel are obtained from the results of the image matching and photometric normal calculation, the face detection module performs face detection on the received images to obtain the pixel set belonging to the face region, and the three-dimensional point coordinates corresponding to the pixels in the face region are gathered together to obtain a point cloud representation of the face, thereby obtaining the three-dimensional face image. The three-dimensional face image capture according to the embodiment of the present invention can improve the accuracy and speed of face capture and reduce the cost.
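The second variant, gathering the face-region points after reconstruction, can be sketched as follows. The detector output format (an axis-aligned box) and all values are hypothetical:

```python
import numpy as np

# Per-pixel 3D points from the reconstruction, shape (H, W, 3)
points = np.random.default_rng(1).normal(size=(48, 64, 3))

# Hypothetical face-detector output: an axis-aligned box (x0, y0, x1, y1)
x0, y0, x1, y1 = 16, 8, 48, 40

# Gather the 3D points of the pixels inside the face region into a point cloud
face_cloud = points[y0:y1, x0:x1].reshape(-1, 3)
```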

Those of ordinary skill in the art will appreciate that the various illustrative modules and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.

Fig. 5 shows a schematic block diagram of a three-dimensional image acquisition system 500 according to an embodiment of the invention. Three-dimensional image acquisition system 500 includes an image acquisition device 510, a storage device 520, and a processor 530.

The image capturing device 510 is configured to capture images of the same object under multiple sets of light sources distributed differently.

In one example, image capture device 510 may include a plurality of image sensors. Illustratively, the image capture device 510 may be a binocular camera, a multi-view camera, or the like.

In another example, the image capture device 510 may include multiple image sensors (e.g., a binocular camera, a multi-view camera, etc.) and multiple sets of light sources, wherein the multiple image sensors respectively capture images of the same object under the multiple sets of differently distributed light sources.

In one example, the first group of light sources may include red, green, and blue light emitting devices distributed between two image sensors and relatively close to each other, and the second group of light sources may include red, green, and blue light emitting devices divergently distributed around the two image sensors.

In another example, the first set of light sources may include white light emitting devices distributed between the two image sensors, and the second set of light sources may include red light emitting devices, green light emitting devices, and blue light emitting devices divergently distributed around the two image sensors, for example as shown in fig. 3. It should be noted that the arrangement method of the image sensor and the light source set shown in fig. 3 is only an example, and other arrangement methods may be adopted to acquire an image according to needs.

Furthermore, in one example, the image capturing device 510 may also capture images with none of the sets of light sources on, as background light images for removing the background light from the images captured under the sets of light sources. If the background light is dark enough, the collection of the background light images can be omitted, and the images captured by the image capturing device 510 under the sets of light sources can be used directly for subsequent processing.

The storage 520 stores program codes for implementing respective steps in the three-dimensional image acquisition method according to the embodiment of the present invention. The processor 530 is configured to run the program code stored in the storage device 520 to perform the corresponding steps of the three-dimensional image acquisition method according to the embodiment of the present invention, and to implement the corresponding modules in the three-dimensional image acquisition device according to the embodiment of the present invention.

In one embodiment, the program code, when executed by processor 530, causes three-dimensional image acquisition system 500 to perform the following steps: receiving images respectively captured by a plurality of image sensors, for the same object, under a plurality of sets of differently distributed light sources; performing image matching on the images captured by different image sensors under the same set of light sources; performing photometric normal calculation, based on the result of the image matching, on the images captured by the same image sensor under different sets of light sources; and obtaining a three-dimensional image of the object based on the result of the image matching and the result of the photometric normal calculation.

In one embodiment, the plurality of sets of light sources includes two sets of light sources and the plurality of image sensors includes two image sensors.

In one embodiment, a first of the two sets of light sources comprises white light emitting devices distributed between the two image sensors and a second of the two sets of light sources comprises red, green and blue light emitting devices divergently distributed around the two image sensors.

Further, the program code, when executed by the processor 530, causes the three-dimensional image acquisition system 500 to perform the following step: processing the received images into background-light-removed images for use in the image matching and the photometric normal calculation.

In one embodiment, the object is a human face, and the program code, when executed by the processor 530, further causes the three-dimensional image acquisition system 500 to perform the following step: performing face detection on the received images to obtain face region images, and obtaining a three-dimensional image of the face according to the results of the image matching and the photometric normal calculation.

In one embodiment, the program code, when executed by the processor 530, causes the three-dimensional image acquisition system 500 to perform the step of photometric normal calculation, based on the result of the image matching, on images captured by the same image sensor under different sets of light sources, which includes: calculating, based on the result of the image matching, preliminary depth values and preliminary three-dimensional coordinates of pixels in an image captured by the same sensor under one set of light sources, and calculating the photometric normals corresponding to the pixels from their preliminary depth values and preliminary three-dimensional coordinates.

Furthermore, according to an embodiment of the present invention, there is also provided a storage medium on which program instructions are stored, which when executed by a computer or a processor are used for executing the respective steps of the three-dimensional image acquisition method according to an embodiment of the present invention, and for implementing the respective modules in the three-dimensional image acquisition apparatus according to an embodiment of the present invention. The storage medium may include, for example, a memory card of a smart phone, a storage component of a tablet computer, a hard disk of a personal computer, a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (EPROM), a portable compact disc read only memory (CD-ROM), a USB memory, or any combination of the above storage media. The computer-readable storage medium may be any combination of one or more computer-readable storage media, for example, one computer readable storage medium comprises computer readable program code for receiving images from a plurality of image sensors respectively acquired under a plurality of sets of light sources distributed differently with respect to a same object, another computer readable storage medium comprises computer readable program code for performing image matching on images acquired by different image sensors under the same set of light sources, still another computer readable storage medium comprises computer readable program code for performing a photometric method calculation on images acquired by the same image sensor under different sets of light sources based on a result of the image matching, and still another computer readable storage medium comprises computer readable program code for obtaining a three-dimensional image of the object based on a result of the image matching and a result of the photometric method calculation.

In one embodiment, the computer program instructions may, when executed by a computer, implement the functional modules of the three-dimensional image acquisition apparatus according to the embodiment of the present invention, and/or may perform the three-dimensional image acquisition method according to the embodiment of the present invention.

In one embodiment, the computer program instructions, when executed by a computer or processor, cause the computer or processor to perform the steps of: receiving images which are respectively acquired by a plurality of image sensors under a plurality of groups of light sources distributed differently aiming at the same object; carrying out image matching on images acquired by different image sensors under the same group of light sources; performing photometric normal calculation on images acquired by the same image sensor under different groups of light sources based on the image matching result; and obtaining a three-dimensional image of the object based on the result of the image matching and the result of the photometric method calculation.

In one embodiment, the plurality of sets of light sources includes two sets of light sources and the plurality of image sensors includes two image sensors.

In one embodiment, a first of the two sets of light sources comprises white light emitting devices distributed between the two image sensors and a second of the two sets of light sources comprises red, green and blue light emitting devices divergently distributed around the two image sensors.

In one embodiment, the computer program instructions, when executed by a computer or processor, cause the computer or processor to perform the steps of: calculating the received image as an image with background light removed for the image matching and the photometric method calculation.

In one embodiment, the object is a human face, and the computer program instructions, when executed by a computer or processor, cause the computer or processor to perform the following step: performing face detection on the received images to obtain face region images, so as to obtain a three-dimensional image of the face according to the results of the image matching and the photometric normal calculation.

In one embodiment, the computer program instructions, when executed by a computer or processor, cause the computer or processor to perform the step of photometric normal calculation, based on the result of the image matching, on images captured by the same image sensor under different sets of light sources, which includes: calculating, based on the result of the image matching, preliminary depth values and preliminary three-dimensional coordinates of pixels in an image captured by the same sensor under one set of light sources, and calculating the photometric normals corresponding to the pixels from their preliminary depth values and preliminary three-dimensional coordinates.

The modules in the three-dimensional image acquisition apparatus according to the embodiment of the present invention may be implemented by a processor of the three-dimensional image acquisition electronic device according to the embodiment of the present invention executing computer program instructions stored in a memory, or may be implemented when computer instructions stored in a computer-readable storage medium of a computer program product according to the embodiment of the present invention are executed by a computer.

The three-dimensional image acquisition method, device, system, and storage medium according to the embodiments of the invention combine the photometric stereo method and the multi-view stereo method on images captured by two or more image sensors under two or more light source conditions, and can thereby achieve three-dimensional image acquisition with fast capture speed (e.g., only two or three consecutive frames are needed), high precision, and low cost (e.g., only an array of two or more cameras plus LEDs is needed).

Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the foregoing illustrative embodiments are merely exemplary and are not intended to limit the scope of the invention thereto. Various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the scope or spirit of the present invention. All such changes and modifications are intended to be included within the scope of the present invention as set forth in the appended claims.

Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.

In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the units is only one logical functional division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another device, or some features may be omitted, or not executed.

In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.

Similarly, it should be appreciated that in the description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the invention and aiding in the understanding of one or more of the various inventive aspects. However, the method of the present invention should not be construed to reflect the intent: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.

It will be understood by those skilled in the art that all of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where such features are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.

Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.

The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. It will be appreciated by those skilled in the art that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some of the modules in a three-dimensional image acquisition apparatus according to embodiments of the present invention. The present invention may also be embodied as apparatus programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.

It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second, third, etcetera does not indicate any ordering. These words may be interpreted as names.

The above description is only for the specific embodiment of the present invention or the description thereof, and the protection scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and the changes or substitutions should be covered within the protection scope of the present invention. The protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A three-dimensional image acquisition method, characterized in that it comprises:
receiving images respectively acquired by a plurality of image sensors, for the same object, under a plurality of groups of differently distributed light sources;
carrying out image matching on images acquired by different image sensors under the same group of light sources;
performing photometric normal calculation on images acquired by the same image sensor under different groups of light sources based on the result of image matching, wherein preliminary depth values and preliminary three-dimensional coordinates of pixels in the images acquired by the same sensor under one group of light sources are calculated based on the result of image matching, and the photometric normal corresponding to the pixels is calculated according to the preliminary depth values and the preliminary three-dimensional coordinates of the pixels; and
obtaining a three-dimensional image of the object based on the result of the image matching and the result of the photometric normal calculation, wherein the three-dimensional coordinates corresponding to the pixel located at (x, y) are (x/f*R(x, y), y/f*R(x, y), R(x, y)), f is a known parameter, and R(x, y) is obtained by solving the following least squares problem: argmin_R ( w*∑_{x,y} (R(x,y) - Z(x,y))^2 + (Nz(x,y)*(R(x,y) - R(x+1,y)) - Nx(x,y))^2 + (Nz(x,y)*(R(x,y) - R(x,y+1)) - Ny(x,y))^2 ), wherein Z represents the preliminary depth value and N represents the photometric normal.
2. The method of claim 1, wherein the plurality of sets of light sources comprises two sets of light sources and the plurality of image sensors comprises two image sensors.
3. The three-dimensional image capturing method according to claim 2, wherein a first of the two sets of light sources comprises a white light emitting device distributed between the two image sensors, and a second of the two sets of light sources comprises a red light emitting device, a green light emitting device, and a blue light emitting device divergently distributed around the two image sensors.
4. The three-dimensional image acquisition method according to any one of claims 1 to 3, characterized in that it further comprises: processing the received images into background-light-removed images for use in the image matching and the photometric normal calculation.
5. The three-dimensional image acquisition method according to any one of claims 1 to 3, wherein the object is a human face, and the three-dimensional image acquisition method further comprises: performing face detection on the received images to obtain face region images, so as to obtain a three-dimensional image of the face according to the results of the image matching and the photometric normal calculation.
6. A three-dimensional image capturing apparatus characterized in that it comprises:
a receiving module, configured to receive images respectively acquired by a plurality of image sensors, for the same object, under a plurality of groups of differently distributed light sources;
the image matching module is used for carrying out image matching on images acquired by different image sensors under the same group of light sources;
a photometric normal calculation module, configured to perform photometric normal calculation on images acquired by the same image sensor under different sets of light sources based on the result of image matching, where based on the result of image matching, preliminary depth values and preliminary three-dimensional coordinates of pixels in the images acquired by the same sensor under a set of light sources are calculated, and a photometric normal corresponding to the pixels is calculated according to the preliminary depth values and the preliminary three-dimensional coordinates of the pixels; and
a three-dimensional image obtaining module, configured to obtain a three-dimensional image of the object based on the result of the image matching and the result of the photometric normal calculation, wherein the three-dimensional coordinates corresponding to the pixel located at (x, y) are (x/f*R(x, y), y/f*R(x, y), R(x, y)), f is a known parameter, and R(x, y) is obtained by solving the following least squares problem: argmin_R ( w*∑_{x,y} (R(x,y) - Z(x,y))^2 + (Nz(x,y)*(R(x,y) - R(x+1,y)) - Nx(x,y))^2 + (Nz(x,y)*(R(x,y) - R(x,y+1)) - Ny(x,y))^2 ), wherein Z represents the preliminary depth value and N represents the photometric normal.
7. The three-dimensional image capturing device according to claim 6, wherein the plurality of sets of light sources includes two sets of light sources, and the plurality of image sensors includes two image sensors.
8. The three-dimensional image capture device of claim 7, wherein a first of the two sets of light sources comprises a white light emitting device distributed between the two image sensors and a second of the two sets of light sources comprises a red light emitting device, a green light emitting device, and a blue light emitting device divergently distributed around the two image sensors.
9. The three-dimensional image acquisition apparatus according to any one of claims 6 to 8, characterized in that it further comprises: a background light removal module, configured to process the images received by the receiving module into background-light-removed images for use in the image matching and the photometric normal calculation.
10. The three-dimensional image capturing apparatus according to any one of claims 6 to 8, wherein the object is a human face, and the three-dimensional image capturing apparatus further includes: a face detection module, configured to perform face detection on the images received by the receiving module to obtain face region images, so as to obtain a three-dimensional image of the face according to the results of the image matching and the photometric normal calculation.
CN201610917877.6A 2016-10-20 2016-10-20 Three-dimensional image acquisition method and device CN106524909B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610917877.6A CN106524909B (en) 2016-10-20 2016-10-20 Three-dimensional image acquisition method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610917877.6A CN106524909B (en) 2016-10-20 2016-10-20 Three-dimensional image acquisition method and device

Publications (2)

Publication Number Publication Date
CN106524909A CN106524909A (en) 2017-03-22
CN106524909B true CN106524909B (en) 2020-10-16

Family

ID=58332874

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610917877.6A CN106524909B (en) 2016-10-20 2016-10-20 Three-dimensional image acquisition method and device

Country Status (1)

Country Link
CN (1) CN106524909B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107087150B (en) * 2017-04-26 2019-05-21 成都通甲优博科技有限责任公司 A kind of three-dimensional camera shooting method, system and device based on binocular solid and photometric stereo
CN107677216B (en) * 2017-09-06 2019-10-29 西安交通大学 A kind of multiple abrasive grain three-dimensional appearance synchronous obtaining methods based on photometric stereo vision
CN108334836A (en) * 2018-01-29 2018-07-27 杭州美界科技有限公司 A kind of wrinkle of skin appraisal procedure and system
CN108363964A (en) * 2018-01-29 2018-08-03 杭州美界科技有限公司 A kind of pretreated wrinkle of skin appraisal procedure and system
CN108303045A (en) * 2018-02-01 2018-07-20 北京科技大学 A kind of surface roughness measuring method and device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1506911A (en) * 2002-12-06 2004-06-23 中国科学院自动化研究所 3D image acquring system
CN101872491A (en) * 2010-05-21 2010-10-27 清华大学 Free view angle relighting method and system based on photometric stereo
JP2015132523A (en) * 2014-01-10 2015-07-23 キヤノン株式会社 Position/attitude measurement apparatus, position/attitude measurement method, and program
CN105389846A (en) * 2015-10-21 2016-03-09 北京雅昌文化发展有限公司 Demonstration method of three-dimensional model
CN105580050A (en) * 2013-09-24 2016-05-11 谷歌公司 Providing control points in images
CN105654549A (en) * 2015-12-31 2016-06-08 中国海洋大学 Underwater three-dimensional reconstruction device and method based on structured light technology and photometric stereo technology

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010071782A (en) * 2008-09-18 2010-04-02 Omron Corp Three-dimensional measurement apparatus and method thereof
JP2015099337A (en) * 2013-11-20 2015-05-28 株式会社ニコン Focusing direction detector, imaging device, focusing direction detection processing program
CN105554385B (en) * 2015-12-18 2018-07-10 天津中科智能识别产业技术研究院有限公司 A kind of remote multi-modal biological characteristic recognition methods and its system


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Jaesik Park, "Multiview Photometric Stereo using Planar Mesh Parameterization," 2013 IEEE International Conference on Computer Vision, Dec. 2013, pp. 1161-1168. *

Also Published As

Publication number Publication date
CN106524909A (en) 2017-03-22

Similar Documents

Publication Publication Date Title
GB2564794B (en) Image-stitching for dimensioning
US20170287923A1 (en) Method and system for object reconstruction
Fuhrmann et al. MVE-A Multi-View Reconstruction Environment.
CN106716450B (en) Image-based feature detection using edge vectors
US9829309B2 (en) Depth sensing method, device and system based on symbols array plane structured light
CN107113415B (en) The method and apparatus for obtaining and merging for more technology depth maps
JP2019514123A (en) Remote determination of the quantity stored in containers in geographical areas
CN105894499B (en) A kind of space object three-dimensional information rapid detection method based on binocular vision
US9915827B2 (en) System, method and computer program product to project light pattern
Doerschner et al. Visual motion and the perception of surface material
US9392262B2 (en) System and method for 3D reconstruction using multiple multi-channel cameras
US9727775B2 (en) Method and system of curved object recognition using image matching for image processing
EP3101624B1 (en) Image processing method and image processing device
US9562857B2 (en) Specular object scanner for measuring reflectance properties of objects
JP6456156B2 (en) Normal line information generating apparatus, imaging apparatus, normal line information generating method, and normal line information generating program
CN105933589B (en) A kind of image processing method and terminal
EP2531979B1 (en) Depth camera compatibility
JP2017520050A (en) Local adaptive histogram flattening
US10152634B2 (en) Methods and systems for contextually processing imagery
Bianco et al. A comparative analysis between active and passive techniques for underwater 3D reconstruction of close-range objects
JP2016502704A (en) Image processing method and apparatus for removing depth artifacts
CN105308650A (en) Active stereo with adaptive support weights from a separate image
WO2016106383A3 (en) First-person camera based visual context aware system
US20150146032A1 (en) Light field processing method
US20150369593A1 (en) Orthographic image capture system

Legal Events

Date Code Title Description
PB01 Publication
C06 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 313, Block A, No. 2 Academy of Sciences South Road, Haidian District, Beijing 100190

Applicant after: MEGVII INC.

Applicant after: Beijing maigewei Technology Co., Ltd.

Address before: Room 313, Block A, No. 2 Academy of Sciences South Road, Haidian District, Beijing 100190

Applicant before: MEGVII INC.

Applicant before: Beijing aperture Science and Technology Ltd.

GR01 Patent grant