CN110874862A - System and method for three-dimensional reconstruction - Google Patents
System and method for three-dimensional reconstruction
- Publication number
- CN110874862A (application CN201811001662.5A)
- Authority
- CN
- China
- Prior art keywords
- blue
- red
- pixel point
- light
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 66
- 238000012545 processing Methods 0.000 claims abstract description 20
- 238000000926 separation method Methods 0.000 claims abstract description 7
- 238000005286 illumination Methods 0.000 claims description 11
- 230000010354 integration Effects 0.000 description 10
- 238000005516 engineering process Methods 0.000 description 5
- 230000003287 optical effect Effects 0.000 description 5
- 238000010586 diagram Methods 0.000 description 4
- 238000004519 manufacturing process Methods 0.000 description 3
- 230000003068 static effect Effects 0.000 description 3
- 239000003086 colorant Substances 0.000 description 2
- 238000010276 construction Methods 0.000 description 2
- 238000012986 modification Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 238000004458 analytical method Methods 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000012423 maintenance Methods 0.000 description 1
- 238000011084 recovery Methods 0.000 description 1
- 238000011160 research Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/50—Lighting effects
- G06T15/80—Shading
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2012—Colour editing, changing, or manipulating; Use of colour codes
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Graphics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- Geometry (AREA)
- Architecture (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Image Processing (AREA)
Abstract
The embodiment of the invention provides a system and a method for three-dimensional reconstruction, belonging to the technical field of image reconstruction. The system for three-dimensional reconstruction includes: a light emitting device including three kinds of light emitting units emitting red, blue and green lights, respectively, for illuminating a surface of an object; the image acquisition device is used for acquiring images of the surface of the object under the irradiation of red, blue and green light; and the processing device is used for carrying out three-channel separation on the acquired images to obtain three gray level images respectively corresponding to the red light, the blue light and the green light, and carrying out three-dimensional reconstruction on the object according to the irradiation angles of the red light, the blue light and the green light and the gray level images, wherein the light-emitting device and the image acquisition device are integrated into an integrated device. By the technical scheme provided by the invention, the three-dimensional reconstruction of the surface of the object to be constructed can be completed according to the single-frame image, and the method has the advantages of high precision, good detail processing and high working efficiency.
Description
Technical Field
The invention relates to the technical field of image reconstruction, and in particular to a system and a method for three-dimensional reconstruction.
Background
With the increasing integration and automation of enterprises such as petrochemical plants, and with the rapid development of computer information technology, the "intelligent factory" and the "digital factory" have become goals of enterprise construction. A digital three-dimensional model of the plant is the foundation of an intelligent factory: by constructing three-dimensional models of plant equipment and integrating production and operation data onto those models, an overall solution for safe production can be provided to the enterprise. Because plant equipment is complex and specialized, research into fast and efficient three-dimensional reconstruction techniques for plant equipment is one of the key links in applying virtual reality technology to plant production.
Three-dimensional reconstruction broadly refers to techniques for restoring and reconstructing an object or scene in three-dimensional space, such that the reconstructed model can be conveniently represented, processed and displayed by a computer. In practice, three-dimensional reconstruction is the inverse of the process that projects objects and scenes in three-dimensional space onto two-dimensional images: from the two-dimensional images, an object or scene containing three-dimensional information is restored. Three-dimensional reconstruction is therefore a key technology for establishing a virtual reality of the objective world in a computer and can provide richer three-dimensional information; optical three-dimensional reconstruction is currently the mainstream approach.
An active optical (Active Optical) three-dimensional reconstruction method artificially adds one or more light sources besides the ambient light at the image acquisition stage and performs three-dimensional reconstruction based on optical characteristics. Such reconstruction methods have more photometric constraints and can therefore recover more accurate reconstruction details. Among them, the structured light method and the photometric stereo method are among the most accurate existing reconstruction methods.
The structured light method (Structured Light) calculates the height of the object at a laser line from the offset of a calibrated laser line on the object surface, thereby obtaining one cross-sectional height of the object; by scanning, all cross-sectional heights are obtained, yielding the three-dimensional information of the object. The structured light method is a reconstruction method based on the straight-line propagation of laser light in a medium, and it is also the most accurate reconstruction method in the prior art, with results close to the true value (Ground Truth). However, the method can only scan static objects and cannot reconstruct moving ones, its setup requirements are demanding, and its reconstruction range is small; it is therefore a reconstruction method for ideal conditions that is difficult to apply in practice.
The photometric stereo (Photometric Stereo) method uses the same camera to capture multiple images under artificial light sources at different illumination angles, and uses the information in these multiple images as additional constraints to obtain a unique solution. It addresses the limitation of the Shape from Shading (SFS) method, which combines the grayscale information in a single-frame image with (local or global) shape constraints on the object to obtain the three-dimensional information of the object to be reconstructed; because a single-frame image provides few constraints, the SFS reconstruction result is not fine enough.
The inventor of the application finds that the conventional three-dimensional reconstruction method needs to acquire or scan images of a static object for multiple times, is low in efficiency and cannot reconstruct the object to be reconstructed in real time.
Disclosure of Invention
It is an object of embodiments of the present invention to provide a system and method for three-dimensional reconstruction that addresses one or more of the above-mentioned technical problems.
In order to achieve the above object, an embodiment of the present invention provides a system for three-dimensional reconstruction, the system including: a light emitting device including three kinds of light emitting units emitting red, blue and green lights, respectively, for illuminating a surface of an object; the image acquisition device is used for acquiring images of the surface of the object under the irradiation of red, blue and green light; and the processing device is used for carrying out three-channel separation on the acquired images to obtain three gray level images respectively corresponding to the red light, the blue light and the green light, and carrying out three-dimensional reconstruction on the object according to the irradiation angles of the red light, the blue light and the green light and the gray level images, wherein the light-emitting device and the image acquisition device are integrated into an integrated device.
Optionally, the image acquisition device is fixed relative to the light-emitting units, so that the illumination angles of the red, blue and green light can be determined.
Optionally, the system further includes a display device, connected to the processing device, for displaying a three-dimensional image of the object after three-dimensional reconstruction.
Optionally, the system further includes a storage module, connected to the processing device, for storing information about a three-dimensional image of the object after three-dimensional reconstruction.
Optionally, the system further comprises an electric actuator for adjusting the light emitting angle of the light emitting unit.
Correspondingly, the embodiment of the invention also provides a three-dimensional reconstruction method, which comprises the following steps: controlling the light-emitting devices to respectively emit red, blue and green light to irradiate the surface of the object; acquiring images of the surface of an object under red, blue and green light irradiation simultaneously by an image acquisition device; and carrying out three-channel separation on the acquired images through a processing device to obtain three gray level images respectively corresponding to the red, blue and green lights, and carrying out three-dimensional reconstruction on the object according to the irradiation angles of the red, blue and green lights and the gray level images, wherein the light-emitting device and the image acquisition device are integrated into an integrated device.
Optionally, the three-dimensional reconstructing the object according to the illumination angles of the red, blue and green lights and the grayscale image includes: determining a surface normal vector of each pixel point of the collected image according to the irradiation angles of the red, blue and green lights and the brightness of each pixel point in the three gray level images; determining the gradient value of each pixel point according to the surface normal vector of each pixel point; determining the height of the surface of the object according to the gradient value of each pixel point; and performing three-dimensional reconstruction on the object according to the surface height of the object.
Optionally, determining the surface normal vector of each pixel point according to the illumination angles of the red, blue and green light and the brightness of each pixel point in the grayscale images includes determining the surface normal vector of each pixel point according to the following formula: i(x, y) = ρ n · l, where i(x, y) is the brightness of the pixel point (x, y) in the grayscale image corresponding to one of the red, blue and green lights, ρ is the albedo of the object surface, l is the direction vector of that light, and n is the surface normal vector of the pixel point (x, y) of the acquired image.
Optionally, determining the gradient value of each pixel point according to the surface normal vector of each pixel point includes determining the gradient values according to the following formula: n = (n_x, n_y, n_z) = (-p, -q, 1)/√(p² + q² + 1), equivalently p = -n_x/n_z and q = -n_y/n_z, where n is the surface normal vector of the pixel point (x, y); n_x, n_y and n_z are the coordinates of the surface normal vector in the x-axis, y-axis and z-axis directions of a three-dimensional coordinate system; and p and q are the gradient values of the pixel point (x, y) in the x-axis and y-axis directions.
Optionally, the determining the height of the surface of the object according to the gradient value of each pixel point includes: the gradient values p and q of the pixel point (x, y) are integrated to determine the object surface height.
By the above technical scheme, three-dimensional reconstruction of the surface of the object to be reconstructed can be completed from only the single-frame color image acquired at one moment, without acquiring multiple images, so the working efficiency is high and so is the accuracy. The technical scheme provided by the invention can also reconstruct the surface of the object to be reconstructed in real time, i.e., real-time three-dimensional reconstruction can be performed on non-static objects, giving the scheme a wide application range. In addition, the technical scheme provided by the embodiment of the invention can perform real-time, highly accurate three-dimensional reconstruction of surface images of factory equipment, which can address incomplete or missed inspections of existing factory equipment and improve an enterprise's overhaul efficiency.
Additional features and advantages of embodiments of the invention will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the embodiments of the invention without limiting the embodiments of the invention. In the drawings:
fig. 1 is a schematic structural diagram of a system for three-dimensional reconstruction according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a system for three-dimensional reconstruction according to an embodiment of the present invention;
fig. 3 is a flowchart of a three-dimensional reconstruction method according to an embodiment of the present invention.
Description of the reference numerals
1 light emitting device 2 image acquisition device
3 processing device 4 light emitting unit
5 terminal
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating embodiments of the invention, are given by way of illustration and explanation only, not limitation.
Fig. 1 is a schematic structural diagram of a system for three-dimensional reconstruction according to an embodiment of the present invention. As shown in fig. 1, the system for three-dimensional reconstruction includes: the device comprises a light-emitting device 1, an image acquisition device 2 and a processing device 3. The light emitting apparatus 1 includes a plurality of light emitting units for emitting red, blue and green lights, respectively, to illuminate a surface of an object to be reconstructed; the image acquisition device 2 is used for acquiring images of the surface of the object under the irradiation of red, blue and green light at the same time; the processing device 3 is connected with the image acquisition device 2, receives the image acquired by the image acquisition device 2, processes the image, determines the three-dimensional information of the surface of the object to be built, and performs three-dimensional reconstruction on the object according to the three-dimensional information.
The light emitting unit may be an LED lamp or the like capable of emitting light of red, green, and blue lamp colors.
In order to ensure the accuracy of the three-dimensional reconstruction of the object surface, the lamps emitting red, green and blue light are arranged so that the light of each color fully covers the surface of the object to be reconstructed.
The image acquisition device 2 is preferably a camera with high resolution, and the higher the resolution of the image acquisition device 2 is, the higher the accuracy of the reconstructed object image is, and the better the effect is. The image collected by the image collecting device 2 is a two-dimensional color image.
The Processing device 3 may be a general purpose processor, a special purpose processor, a conventional processor, a Digital Signal Processor (DSP), a microprocessor, a controller, a microcontroller, an embedded processor, or the like.
Wherein the processing means 3 enable a three-dimensional reconstruction of the object from the acquired image of the surface of the object by performing the following steps:
receiving an image of the object surface acquired by the image acquisition device 2 under simultaneous red, green and blue illumination, wherein the acquired image is a single-frame color image;
carrying out three-channel separation on the collected single-frame color image to obtain three gray level images respectively corresponding to red, blue and green light; and
and according to the irradiation angles of the red, green and blue lights and the three gray-scale images, performing three-dimensional reconstruction on the object.
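Assuming the acquired single-frame image is an ordinary H × W × 3 RGB array, the three-channel separation step above can be sketched with NumPy; this is a minimal illustration, and the function and variable names are ours, not the patent's:

```python
import numpy as np

def split_channels(rgb_image):
    """Split an H x W x 3 color frame into three grayscale images,
    one per illumination color.

    Channel order is assumed to be R, G, B (OpenCV, for instance,
    delivers B, G, R and would need reordering first).
    """
    red_gray = rgb_image[:, :, 0].astype(np.float64)
    green_gray = rgb_image[:, :, 1].astype(np.float64)
    blue_gray = rgb_image[:, :, 2].astype(np.float64)
    return red_gray, green_gray, blue_gray

# Usage: a tiny 2 x 2 color frame stands in for the acquired image
frame = np.array([[[10, 20, 30], [40, 50, 60]],
                  [[70, 80, 90], [100, 110, 120]]], dtype=np.uint8)
red_gray, green_gray, blue_gray = split_channels(frame)
```

Each returned array is the grayscale image seen under one of the colored light sources, ready for the per-pixel normal computation described below in the text.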
Wherein the illumination angles of the red, green and blue light are determined by the positions of the light-emitting units relative to the shooting angle of view of the image acquisition device 2. To facilitate determining the illumination angle of each color of light, the positions of the light-emitting units may be fixed before image acquisition, so as to determine the included angle between the shooting angle of view of the image acquisition device 2 and the illumination angle of the light emitted by each light-emitting unit.
In order to facilitate the three-dimensional reconstruction of the surfaces of a plurality of objects, the image capturing device 2 may be fixed relative to the position of the light emitting units, such that the illumination angles of the red, blue and green light may be determined.
Wherein the light emitting unit and the image capturing device 2 may be integrated into a single device. For example, the light emitting unit and the image capturing device 2 may be fixed on the same scaffold, and the scaffold on which the light emitting unit and the image capturing device 2 are installed may be moved as necessary to synchronously move the light emitting unit and the image capturing device 2 so that the relative positions thereof are maintained constant.
In addition, when the surface of different objects is reconstructed in three dimensions, the positions of the light-emitting units can be finely adjusted, so that the light of each color can completely illuminate the surface of the object to be reconstructed. Therefore, the method can meet the shooting requirements in scenes with different sizes, and can obtain data required by three-dimensional reconstruction of the object to be built more conveniently in actual maintenance without a fixed light source of the traditional method.
Optionally, the system further comprises an electric actuator for adjusting the light-emitting angle of the light-emitting unit. Where the light-emitting angle of the light-emitting unit is adjustable, an operator may also use other drive devices to actuate the light-emitting unit and adjust its light-emitting angle.
The processing device 3 performs three-dimensional reconstruction on the object according to the irradiation angles of the red, green and blue lights and the three grayscale images, and specifically includes the following steps:
firstly, determining a surface normal vector of each pixel point of an acquired image according to an irradiation angle of red light, the brightness of each pixel point in a gray level image corresponding to red light, the irradiation angle of green light, the brightness of each pixel point in a gray level image corresponding to green light, the irradiation angle of blue light and the brightness of each pixel point in a gray level image corresponding to blue light;
determining the gradient value of each pixel point according to the surface normal vector of each pixel point;
determining the heights of all parts of the surface of the object according to the gradient value of each pixel point; and
and realizing three-dimensional reconstruction of the surface of the object according to the heights of all the parts of the surface of the object.
The embodiment of the invention also provides a formula for determining the surface normal vector of each pixel point of the acquired image: i(x, y) = ρ n · l, wherein i(x, y) represents the brightness of the pixel point (x, y) in the grayscale image corresponding to one of the red, blue and green lights, ρ represents the albedo of the object surface, l represents the direction vector of that light, and n represents the surface normal vector of the pixel point (x, y) of the acquired image, i.e., the direction in which the tangent plane at the pixel point (x, y) faces.
Specifically, equation (1) is first determined from the brightness i_red(x, y) of the pixel point (x, y) in the grayscale image corresponding to the red light and the direction vector l_red of the red light (determined from the irradiation angle of the red light): i_red(x, y) = ρ n · l_red.
Then equation (2) is determined from the brightness i_green(x, y) of the pixel point (x, y) in the grayscale image corresponding to the green light and the direction vector l_green of the green light (determined from the irradiation angle of the green light): i_green(x, y) = ρ n · l_green.
Then equation (3) is determined from the brightness i_blue(x, y) of the pixel point (x, y) in the grayscale image corresponding to the blue light and the direction vector l_blue of the blue light (determined from the irradiation angle of the blue light): i_blue(x, y) = ρ n · l_blue.
In equations (1), (2) and (3), i_red(x, y), l_red, i_green(x, y), l_green, i_blue(x, y) and l_blue are all known; ρ has the same value in the three equations, and n is the surface normal vector of the same pixel point (x, y).
Because the surface normal vector n of the pixel point (x, y) is a vector in three-dimensional space, it contains three unknowns; equations (1), (2) and (3) are therefore combined into a system of equations, and in the solving process ρ is eliminated, so that a unique solution for the surface normal vector n of the pixel point (x, y) can be obtained.
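The elimination described above can be sketched per pixel in NumPy: the three equations are stacked into a 3 × 3 linear system L · (ρn) = i, and normalizing the solution removes ρ. This is a minimal sketch with made-up light directions, not the patented implementation:

```python
import numpy as np

def solve_normal(intensities, light_dirs):
    """Recover the unit surface normal n and albedo rho at one pixel
    from i_k = rho * (n . l_k), k = red, green, blue.

    intensities: length-3 vector (i_red, i_green, i_blue)
    light_dirs:  3 x 3 matrix whose rows are the unit light direction
                 vectors l_red, l_green, l_blue (must be non-coplanar)
    """
    i = np.asarray(intensities, dtype=np.float64)
    L = np.asarray(light_dirs, dtype=np.float64)
    rho_n = np.linalg.solve(L, i)   # solves L @ (rho * n) = i
    rho = np.linalg.norm(rho_n)     # albedo is the magnitude of rho * n
    return rho_n / rho, rho         # normalizing eliminates rho

# Usage: synthetic pixel on a flat surface facing the camera (n = z-axis)
L = np.array([[0.6, 0.0, 0.8],
              [0.0, 0.6, 0.8],
              [-0.6, 0.0, 0.8]])    # illustrative, non-coplanar directions
true_n = np.array([0.0, 0.0, 1.0])
i = 0.5 * (L @ true_n)              # brightnesses generated with rho = 0.5
n, rho = solve_normal(i, L)         # recovers n = (0, 0, 1), rho = 0.5
```

The key requirement is that the three light directions are linearly independent; otherwise the 3 × 3 system is singular and no unique normal exists.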
The embodiment of the invention also provides a formula for determining the gradient value of each pixel point: n = (n_x, n_y, n_z) = (-p, -q, 1)/√(p² + q² + 1), wherein n represents the surface normal vector of the pixel point (x, y); n_x, n_y and n_z are the coordinates of the surface normal vector in the x-axis, y-axis and z-axis directions of a three-dimensional coordinate system; and p and q are the gradient values of the pixel point (x, y) in the x-axis and y-axis directions.
Since the coordinate values n_x, n_y and n_z of the surface normal vector n of the pixel point (x, y) are known, three equations can be determined from the above formula: n_x = -p/√(p² + q² + 1), n_y = -q/√(p² + q² + 1) and n_z = 1/√(p² + q² + 1). Solving these three equations simultaneously (equivalently, p = -n_x/n_z and q = -n_y/n_z) determines the gradient values p and q of the pixel point (x, y) in the x-axis and y-axis directions.
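The conversion from normals to gradients can be sketched for a whole image at once, using the relation p = -n_x/n_z and q = -n_y/n_z noted above; the array layout and names are illustrative assumptions:

```python
import numpy as np

def normals_to_gradients(normals, eps=1e-8):
    """Convert an H x W x 3 field of unit surface normals into the
    gradient images p = dz/dx and q = dz/dy, using the relation
    n ~ (-p, -q, 1), i.e. p = -nx/nz and q = -ny/nz."""
    nx, ny, nz = normals[..., 0], normals[..., 1], normals[..., 2]
    nz = np.where(np.abs(nz) < eps, eps, nz)  # guard against grazing normals
    return -nx / nz, -ny / nz

# Usage: every pixel lies on the plane z = 2x, whose unit normal is
# (-2, 0, 1) / sqrt(5); the recovered gradients should be p = 2, q = 0.
n = np.array([-2.0, 0.0, 1.0]) / np.sqrt(5.0)
normals = np.tile(n, (4, 4, 1))
p, q = normals_to_gradients(normals)
```

The epsilon guard matters in practice: where the surface is nearly perpendicular to the viewing direction, n_z approaches zero and the gradients blow up.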
The gradient value p is the partial derivative ∂z/∂x of the surface height z of the object to be reconstructed at the pixel point (x, y) in the x-axis direction, and the gradient value q is the partial derivative ∂z/∂y in the y-axis direction. Therefore, with the gradient values p and q known, the height of each pixel point can be recovered by integrating along the x-axis and y-axis directions respectively, and then converted into coordinates in a three-dimensional coordinate system, completing the three-dimensional reconstruction of the surface of the object to be reconstructed.
For integrating the gradient values p and q of the pixel point (x, y), any integration method may be used, for example the Horn iterative method among variational methods, the Poisson solver among direct integration methods, or the Frankot–Chellappa method in the frequency domain. Any of these methods can obtain the height information of the surface of the object to be reconstructed.
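As one option among the integration methods named above, a simplified Frankot–Chellappa frequency-domain integration can be sketched as follows, assuming periodic boundaries (real implementations often pad or mirror the gradient fields first):

```python
import numpy as np

def frankot_chellappa(p, q):
    """Integrate gradient fields p = dz/dx, q = dz/dy into a height map
    using the Frankot-Chellappa frequency-domain method.

    Assumes periodic boundaries; the height is recovered only up to an
    additive constant (the DC term is set to zero)."""
    h, w = p.shape
    wx = 2.0 * np.pi * np.fft.fftfreq(w)   # angular frequency per pixel, x
    wy = 2.0 * np.pi * np.fft.fftfreq(h)   # angular frequency per pixel, y
    u, v = np.meshgrid(wx, wy)
    P, Q = np.fft.fft2(p), np.fft.fft2(q)
    denom = u ** 2 + v ** 2
    denom[0, 0] = 1.0                      # avoid dividing by zero at DC
    Z = (-1j * u * P - 1j * v * Q) / denom
    Z[0, 0] = 0.0
    return np.real(np.fft.ifft2(Z))

# Usage: a smooth periodic surface and its analytic gradients
h, w = 32, 32
y, x = np.mgrid[0:h, 0:w]
z = np.cos(2 * np.pi * x / w) + np.cos(2 * np.pi * y / h)
p = -(2 * np.pi / w) * np.sin(2 * np.pi * x / w)
q = -(2 * np.pi / h) * np.sin(2 * np.pi * y / h)
z_rec = frankot_chellappa(p, q)            # matches z up to a constant
```

Because the method enforces an integrable (curl-free) gradient field in the least-squares sense, it is robust to the per-pixel noise that the normal-solving step inevitably introduces.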
According to the technical scheme provided by the embodiment of the invention, the three channels of the two-dimensional color image acquired under illumination by three primary-color light sources with known illumination angles are separated, so that the grayscale image under each light source is obtained independently; that is, one frame of image is separated into three images in real time. This greatly improves the efficiency of acquiring surface information of the object to be reconstructed, and real-time three-dimensional reconstruction can be realized on this basis.
Optionally, an embodiment of the present invention further provides a system for three-dimensional reconstruction, which has a display device, where the display device is connected to the processing device 3, and is configured to display a three-dimensional image of an object after three-dimensional reconstruction.
Optionally, the display device may be a display of an upper computer, or may be an independent display screen only used for displaying a three-dimensional image obtained by three-dimensionally reconstructing the surface of the object to be created.
Optionally, the system for three-dimensional reconstruction may further include a storage module connected to the processing device 3, and after the processing device 3 performs three-dimensional reconstruction on the surface of the object according to the determined three-dimensional information of the surface of the object to be reconstructed, the data related to the three-dimensional reconstruction of the surface of the object may be stored in the storage module, so as to facilitate later viewing and analysis.
Fig. 2 is a schematic structural diagram of a system for three-dimensional reconstruction according to an embodiment of the present invention. As shown in fig. 2, the light emitting unit 4 may emit red, blue, and green lights and may entirely illuminate the surface of the object to be built, and the image pickup device 2 picks up an image of the surface of the object under the red, blue, and green lights at the same time and transmits the picked-up image to the terminal 5. The terminal 5 may perform the operations performed by the processing device, and may further display the three-dimensionally reconstructed image of the surface of the object through a display screen of the terminal 5.
Fig. 3 is a flowchart of a three-dimensional reconstruction method according to an embodiment of the present invention. As shown in fig. 3, an embodiment of the present invention provides a method for three-dimensional reconstruction, including: controlling red, blue and green light to irradiate the surface of the object; acquiring images of the surface of an object under red, blue and green light irradiation simultaneously; carrying out three-channel separation on the collected images to obtain three gray level images respectively corresponding to red, blue and green lights; and performing three-dimensional reconstruction on the object according to the irradiation angles of the red, blue and green lights and the grayscale image.
Specifically, the three-dimensional reconstruction of the object according to the acquired image of the surface of the object is realized by the following method:
determining a surface normal vector of each pixel point of the collected image according to the irradiation angles of the red, blue and green lights and the brightness of each pixel point in the three gray level images;
determining the gradient value of each pixel point according to the surface normal vector of each pixel point;
determining the height of the surface of the object according to the gradient value of each pixel point; and
and according to the height of the surface of the object, performing three-dimensional reconstruction on the object.
Optionally, an embodiment of the present invention further provides a method for determining the surface normal vector of each pixel point according to the following formula: i(x, y) = ρ n · l, wherein i(x, y) represents the brightness of the pixel point (x, y) in the grayscale image corresponding to one of the red, blue and green lights, ρ represents the albedo of the object surface, l represents the direction vector of that light, and n represents the surface normal vector of the pixel point (x, y) of the acquired image.
Optionally, an embodiment of the present invention further provides a method for determining the gradient value of each pixel point according to the following formula: n = (n_x, n_y, n_z) = (-p, -q, 1)/√(p² + q² + 1), wherein n represents the surface normal vector of the pixel point (x, y); n_x, n_y and n_z are the coordinates of the surface normal vector in the x-axis, y-axis and z-axis directions of a three-dimensional coordinate system; and p and q are the gradient values of the pixel point (x, y) in the x-axis and y-axis directions.
Here p is the partial derivative of the surface height z of the object to be reconstructed at the pixel point (x, y) in the x-axis direction, p = ∂z/∂x, and q is the partial derivative in the y-axis direction, q = ∂z/∂y. Thus, with the gradient values p and q known, integrating in the x-axis and y-axis directions respectively determines the height of the surface to be reconstructed.
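Since the (non-unit) normal of a surface z(x, y) is proportional to (-p, -q, 1), a unit normal (nx, ny, nz) yields the gradients as p = -nx/nz and q = -ny/nz. A one-function sketch (function name is illustrative):

```python
import numpy as np

def gradients_from_normal(n: np.ndarray):
    """Gradients p = dz/dx, q = dz/dy from a unit surface normal (nx, ny, nz).
    Assumes nz != 0, i.e. the surface is not viewed edge-on at this pixel."""
    nx, ny, nz = n
    return -nx / nz, -ny / nz

# a normal tilted toward +x and +y
p, q = gradients_from_normal(np.array([0.3, 0.4, np.sqrt(1 - 0.3**2 - 0.4**2)]))
print(round(p, 4), round(q, 4))  # -0.3464 -0.4619
```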
In integrating the gradient values p and q of the pixel points (x, y), any integration method may be used, for example the Horn iterative method, the Poisson solver among direct integration methods, or the Frankot-Chellappa method in the frequency domain. Any of these methods yields the height information of the surface of the object to be reconstructed.
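The frequency-domain route can be sketched with NumPy FFTs. This is an illustrative least-squares integrator in the style of Frankot-Chellappa, assuming the gradient fields are periodic (a real implementation would pad or window non-periodic data):

```python
import numpy as np

def frankot_chellappa(p: np.ndarray, q: np.ndarray) -> np.ndarray:
    """Least-squares integration of gradients p = dz/dx, q = dz/dy
    into a height map z, performed in the frequency domain."""
    h, w = p.shape
    wy, wx = np.meshgrid(2 * np.pi * np.fft.fftfreq(h),
                         2 * np.pi * np.fft.fftfreq(w), indexing="ij")
    P, Q = np.fft.fft2(p), np.fft.fft2(q)
    denom = wx**2 + wy**2
    denom[0, 0] = 1.0                      # avoid division by zero at DC
    Z = (-1j * wx * P - 1j * wy * Q) / denom
    Z[0, 0] = 0.0                          # absolute height is unrecoverable
    return np.real(np.fft.ifft2(Z))

# synthetic check: a sinusoidal surface whose gradients we know analytically
h, w = 16, 16
x = np.arange(w)[None, :].repeat(h, axis=0)
z_true = np.sin(2 * np.pi * x / w)
p = (2 * np.pi / w) * np.cos(2 * np.pi * x / w)
q = np.zeros_like(p)
z_rec = frankot_chellappa(p, q)
print(np.max(np.abs(z_rec - z_true)) < 1e-8)  # True
```

Fixing the DC term to zero reflects the fact noted above: integration recovers the surface height only up to an additive constant.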
For specific details and benefits of the three-dimensional reconstruction method provided by the present invention, reference may be made to the above description of the system for three-dimensional reconstruction provided by the present invention, and details are not repeated herein.
Accordingly, the embodiment of the present invention further provides a machine-readable storage medium, which stores instructions for causing a machine to execute the three-dimensional reconstruction method.
Although the embodiments of the present invention have been described in detail with reference to the accompanying drawings, the embodiments of the present invention are not limited to the details of the above embodiments, and various simple modifications can be made to the technical solutions of the embodiments of the present invention within the technical idea of the embodiments of the present invention, and the simple modifications all belong to the protection scope of the embodiments of the present invention.
It should be noted that the various features described in the above embodiments may be combined in any suitable manner without departing from the scope of the invention. In order to avoid unnecessary repetition, the embodiments of the present invention do not describe every possible combination.
Those skilled in the art will understand that all or part of the steps in the method according to the above embodiments may be implemented by a program which is stored in a storage medium and includes several instructions enabling a single-chip microcomputer, a chip, or a processor to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
In addition, any combination of various different implementation manners of the embodiments of the present invention is also possible, and the embodiments of the present invention should be considered as disclosed in the embodiments of the present invention as long as the combination does not depart from the spirit of the embodiments of the present invention.
Claims (10)
1. A system for three-dimensional reconstruction, the system comprising:
a light emitting device including three kinds of light emitting units emitting red, blue and green lights, respectively, for illuminating a surface of an object;
the image acquisition device is used for acquiring images of the surface of the object under the irradiation of red, blue and green light; and
a processing device for performing three-channel separation on the collected images to obtain three grayscale images respectively corresponding to the red, blue and green lights, and for performing three-dimensional reconstruction of the object according to the irradiation angles of the red, blue and green lights and the grayscale images,
wherein the light emitting device and the image capturing device are integrated into a unitary apparatus.
2. The system of claim 1, wherein the image acquisition device is fixed in position relative to the light emitting units, so that the illumination angles of the red, blue and green lights can be determined.
3. The system of claim 1, further comprising a display device coupled to the processing device for displaying a three-dimensional image of the object after three-dimensional reconstruction.
4. The system of claim 1, further comprising a storage module, coupled to the processing device, for storing information of a three-dimensional image of the object after three-dimensional reconstruction.
5. The system of claim 1, further comprising an electric actuator for adjusting the lighting angle of the lighting unit.
6. A method of three-dimensional reconstruction, the method comprising:
controlling the light-emitting devices to respectively emit red, blue and green light to irradiate the surface of the object;
acquiring images of the surface of an object under red, blue and green light irradiation simultaneously by an image acquisition device; and
performing, by a processing device, three-channel separation on the collected images to obtain three grayscale images respectively corresponding to the red, blue and green lights, and performing three-dimensional reconstruction of the object according to the irradiation angles of the red, blue and green lights and the grayscale images,
wherein the light emitting device and the image capturing device are integrated into a unitary apparatus.
7. The method according to claim 6, wherein the three-dimensional reconstruction of the object from the illumination angles of the red, blue and green light and the grayscale image comprises:
determining a surface normal vector for each pixel point of the collected image according to the irradiation angles of the red, blue and green lights and the brightness of each pixel point in the three grayscale images;
determining the gradient value of each pixel point according to the surface normal vector of each pixel point;
determining the height of the surface of the object according to the gradient value of each pixel point; and
and according to the height of the surface of the object, performing three-dimensional reconstruction on the object.
8. The method of claim 7, wherein determining the surface normal vector of each pixel point according to the illumination angles of the red, blue and green lights and the brightness of each pixel point in the grayscale images comprises determining the surface normal vector of each pixel point according to the following formula:
i(x, y) = ρn·l,
wherein i(x, y) is the brightness of the pixel point (x, y) in the grayscale image corresponding to one of the red, blue and green lights, ρ is the albedo of the object surface, l is the direction vector of that light, and n is the surface normal vector of the pixel point (x, y) of the acquired image.
9. The method of claim 7, wherein determining the gradient values of each pixel point based on the surface normal vector of each pixel point comprises determining the gradient values of each pixel point according to the following formulas:
p = -nx/nz, q = -ny/nz,
wherein n is the surface normal vector of the pixel point (x, y); nx, ny and nz are the coordinates of the surface normal vector of the pixel point (x, y) in the x-axis, y-axis and z-axis directions of a three-dimensional coordinate system; and p and q are the gradient values of the pixel point (x, y) in the x-axis and y-axis directions.
10. The method of claim 7, wherein said determining the height of the surface of the object according to the gradient value of each pixel point comprises:
integrating the gradient values p and q of the pixel point (x, y) to determine the height of the surface of the object.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811001662.5A CN110874862A (en) | 2018-08-30 | 2018-08-30 | System and method for three-dimensional reconstruction |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110874862A true CN110874862A (en) | 2020-03-10 |
Family
ID=69714380
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811001662.5A Pending CN110874862A (en) | 2018-08-30 | 2018-08-30 | System and method for three-dimensional reconstruction |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110874862A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040184031A1 (en) * | 2003-03-20 | 2004-09-23 | Vook Dietrich W. | Optical inspection system, apparatus and method for reconstructing three-dimensional images for printed circuit board and electronics manufacturing inspection |
JP2006073767A (en) * | 2004-09-01 | 2006-03-16 | Shimatec:Kk | Led lighting device and lighting control unit |
US20150193973A1 (en) * | 2014-01-08 | 2015-07-09 | Adobe Systems Incorporated | Single image photometric stereo for colored objects |
CN108109201A (en) * | 2017-12-28 | 2018-06-01 | 深圳市易尚展示股份有限公司 | The three-dimensional rebuilding method and system of complex colors surface object |
US20180165809A1 (en) * | 2016-12-02 | 2018-06-14 | Panagiotis Stanitsas | Computer vision for cancerous tissue recognition |
CN108195312A (en) * | 2017-12-28 | 2018-06-22 | 深圳市易尚展示股份有限公司 | Color body three-dimensional rebuilding method and system based on changeable weight |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112669318A (en) * | 2021-03-17 | 2021-04-16 | 上海飞机制造有限公司 | Surface defect detection method, device, equipment and storage medium |
CN112669318B (en) * | 2021-03-17 | 2021-06-08 | 上海飞机制造有限公司 | Surface defect detection method, device, equipment and storage medium |
CN113538680A (en) * | 2021-06-10 | 2021-10-22 | 无锡中车时代智能装备有限公司 | Three-dimensional measurement method and equipment based on binocular luminosity stereo vision |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20200310 |