CN111540042A - Method, device and related equipment for three-dimensional reconstruction - Google Patents

Method, device and related equipment for three-dimensional reconstruction

Info

Publication number
CN111540042A
CN111540042A (application CN202010350942.8A; granted as CN111540042B)
Authority
CN
China
Prior art keywords
ideal
target object
pixel point
exposure intensity
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010350942.8A
Other languages
Chinese (zh)
Other versions
CN111540042B (en)
Inventor
吴笛
Current Assignee
Shanghai Shenghuang Optical Technology Co ltd
Original Assignee
Shanghai Shenghuang Optical Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Shenghuang Optical Technology Co ltd filed Critical Shanghai Shenghuang Optical Technology Co ltd
Priority to CN202010350942.8A priority Critical patent/CN111540042B/en
Publication of CN111540042A publication Critical patent/CN111540042A/en
Application granted granted Critical
Publication of CN111540042B publication Critical patent/CN111540042B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 — Three-dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The present disclosure provides a method, an apparatus, an electronic device, a computer-readable storage medium, a logic circuit, and a system for three-dimensional reconstruction. The method comprises: projecting a first prediction image onto a target object while synchronously shooting the target object at a first exposure intensity to obtain a target gray value of each pixel point of the target object; determining the ideal exposure intensity corresponding to each pixel point of the target object according to an ideal gray value, the first exposure intensity and the target gray value of each pixel point; performing grouping statistics on the pixel points of the target object according to their ideal exposure intensities to obtain a grouping statistical result; and determining an ideal exposure setting for the target object according to the grouping statistical result, so as to perform three-dimensional reconstruction of the target object by a structured light technique under the ideal exposure setting. The method provided by the embodiments of the disclosure can determine an ideal exposure setting that covers most pixel points of the target object, and thereby reconstruct the target object more completely.

Description

Method, device and related equipment for three-dimensional reconstruction
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular to a method and an apparatus for three-dimensional reconstruction, an electronic device, a computer-readable storage medium, a logic circuit, and a system.
Background
Structured light is a non-contact three-dimensional reconstruction technology. One or more light sources project a specific single pattern or a sequence of patterns onto the measured object, one or more image sensors shoot the object, and image data containing depth information is finally obtained by analyzing and processing the data acquired by the image sensors.
Imaging integrity is one of the most important indicators for evaluating a three-dimensional reconstruction technique. For surface structured light in particular, a proper exposure intensity is crucial: overexposure or underexposure causes reconstruction of the affected pixel points to fail, degrading imaging integrity.
It is therefore important, for improving the integrity of the three-dimensional reconstruction of a target object, to find a proper exposure setting for the object and to perform the reconstruction under that setting.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The embodiments of the present disclosure provide a method, an apparatus, an electronic device, a computer-readable storage medium, a logic circuit and a system for three-dimensional reconstruction. They can determine an ideal exposure setting under which as many pixel points on the target object as possible can be imaged, and perform the three-dimensional reconstruction under that setting, thereby improving the integrity of the reconstruction and, in turn, the three-dimensional imaging quality of the target object.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
The embodiment of the present disclosure provides a method for three-dimensional reconstruction, including: projecting a first prediction image onto a target object while synchronously shooting the target object at a first exposure intensity to obtain a target gray value of each pixel point of the target object; determining the ideal exposure intensity corresponding to each pixel point of the target object according to an ideal gray value, the first exposure intensity and the target gray value of each pixel point; performing grouping statistics on the pixel points of the target object according to their ideal exposure intensities to obtain a grouping statistical result; and determining an ideal exposure setting for the target object according to the grouping statistical result, so as to perform three-dimensional reconstruction of the target object by a structured light technique under the ideal exposure setting.
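As an illustrative sketch (not part of the patent text), the first two steps can be modeled under the common assumption that the recorded gray value scales linearly with exposure intensity, so that the ideal intensity for a pixel is E_ideal = E1 · G_ideal / G_target; the grouping-statistics step is then a histogram of those per-pixel intensities. All function and parameter names below are hypothetical:

```python
import numpy as np

def ideal_exposure_map(target_gray, first_intensity, ideal_gray=128.0):
    """Per-pixel ideal exposure intensity, assuming the gray value
    scales linearly with exposure intensity (an assumption; the
    patent does not commit to a sensor model)."""
    target_gray = np.asarray(target_gray, dtype=np.float64)
    safe = np.maximum(target_gray, 1.0)  # avoid dividing by zero for dark pixels
    return first_intensity * (ideal_gray / safe)

def group_statistics(ideal_intensity, n_bins=32):
    """Grouping statistics: a frequency distribution of pixel count
    versus ideal exposure intensity."""
    counts, edges = np.histogram(np.ravel(ideal_intensity), bins=n_bins)
    return counts, edges
```

Under this linear model, a pixel recorded at gray 64 under intensity 100 would need intensity 200 to reach an ideal gray of 128.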
In some embodiments, the target object includes a first pixel point whose target gray value is a first gray value. Determining the ideal exposure intensity corresponding to each pixel point of the target object then includes: obtaining, from the ideal gray value, the first gray value and the first exposure intensity, a first target exposure intensity required for the first pixel point to reach the ideal gray value under a target measurement image; and determining the ideal exposure intensity of the first pixel point from the first target exposure intensity.
In some embodiments, determining the ideal exposure intensity of the first pixel point from the first target exposure intensity comprises: projecting the first prediction image onto the first pixel point while synchronously shooting it at a second exposure intensity to obtain a second gray value; obtaining, from the ideal gray value, the second gray value and the second exposure intensity, a second target exposure intensity required for the first pixel point to reach the ideal gray value under the target measurement image; and, if the first target exposure intensity is greater than the second target exposure intensity, taking the first target exposure intensity as the ideal exposure intensity of the first pixel point.
In some embodiments, the target object includes a second pixel point whose target gray value is a third gray value, and the first prediction image differs from the target measurement image of the second pixel point. Determining the ideal exposure intensity corresponding to each pixel point then includes: determining the gray enhancement coefficient ratio between the target measurement image of the second pixel point and the first prediction image; and obtaining, from the ideal gray value, the third gray value, the first exposure intensity and the gray enhancement coefficient ratio, the ideal exposure intensity required for the second pixel point to reach the ideal gray value under the target measurement image.
In some embodiments, determining the gray enhancement coefficient ratio between the target measurement image of the second pixel point and the first prediction image comprises: projecting the first prediction image onto an object with uniform reflectance while synchronously shooting it at a third exposure intensity to obtain a first average gray value; projecting the target measurement image of the second pixel point onto the same object while synchronously shooting it at the third exposure intensity to obtain a second average gray value; and determining the gray enhancement coefficient ratio as the ratio of the first average gray value to the second average gray value.
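Under the same hedged linear-response assumption, the calibration described above reduces to two averages and a ratio, and the ratio rescales the per-pixel intensity formula. The functional form E_ideal = E1 · k · G_ideal / G3 and all names below are assumptions for illustration:

```python
def gray_enhancement_ratio(avg_gray_prediction, avg_gray_measurement):
    """Ratio k of the prediction image's average gray value to the
    measurement image's, both captured at the same (third) exposure
    intensity on a uniform-reflectance object."""
    return avg_gray_prediction / avg_gray_measurement

def ideal_intensity_for_measurement(ideal_gray, third_gray, first_intensity, k):
    """Ideal exposure intensity for a pixel whose gray value under the
    prediction image at first_intensity was third_gray, when the
    reconstruction will use a measurement image that is dimmer (k > 1)
    or brighter (k < 1) than the prediction image."""
    return first_intensity * k * ideal_gray / third_gray
```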
In some embodiments, determining an ideal exposure setting for the target object from the grouping statistics comprises: acquiring candidate exposure settings; determining the imaging quality of the target object under the candidate exposure setting according to the grouping statistical result; an ideal exposure setting for the target object is determined among candidate exposure settings according to the imaging quality.
In some embodiments, the grouping statistical result is a frequency distribution graph of pixel count versus ideal exposure intensity, and obtaining the candidate exposure settings comprises: acquiring a first number of exposures; dividing the frequency distribution graph into that many parts of equal area to obtain first partial graphs; and taking the ideal exposure intensity at the center of gravity of each first partial graph to generate the candidate exposure settings.
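The equal-area division can be sketched as an equal-count split of the sorted per-pixel ideal intensities, taking the mean of each part as its center of gravity (a sketch under assumed conventions; names are hypothetical):

```python
import numpy as np

def candidate_exposures(ideal_intensity, n_exposures):
    """Split the distribution of ideal exposure intensities into
    n_exposures parts of equal area (equal pixel count) and return
    the centroid (mean intensity) of each part as a candidate."""
    vals = np.sort(np.asarray(ideal_intensity, dtype=np.float64).ravel())
    parts = np.array_split(vals, n_exposures)
    return [float(p.mean()) for p in parts]
```

For a bimodal scene (say, a dark object next to a bright one), this naturally places one candidate exposure near each mode.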
In some embodiments, determining the imaging quality of the target object at the candidate exposure setting from the grouping statistics comprises: acquiring a third pixel point which can be imaged by the target object under the candidate exposure setting; and determining the imaging quality of the target object under the candidate exposure setting according to the third pixel point.
In some embodiments, the candidate exposure setting comprises a fourth exposure intensity at which a fourth pixel point in the target object can be imaged. Determining the imaging quality of the target object under the candidate exposure setting according to the third pixel points then comprises: obtaining the target difference between the ideal exposure intensity corresponding to the fourth pixel point and the fourth exposure intensity; determining the weight of the fourth pixel point from that difference; and performing a weighted count of the fourth pixel points according to their weights to determine the imaging quality at the fourth exposure intensity.
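One hedged way to realize the weighted count is to give each imageable pixel a weight that decays with the gap between its ideal intensity and the candidate intensity; the Gaussian decay and the sigma parameter below are assumptions, not specified by the patent:

```python
import numpy as np

def imaging_quality(ideal_intensities, exposure_intensity, sigma=10.0):
    """Weighted count of imageable pixels: a pixel whose ideal
    intensity matches the candidate exactly contributes 1, and the
    contribution decays with the target difference."""
    diff = np.abs(np.asarray(ideal_intensities, dtype=np.float64) - exposure_intensity)
    weights = np.exp(-(diff / sigma) ** 2)
    return float(weights.sum())
```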
In some embodiments, the method for three-dimensional reconstruction further comprises: projecting a second prediction image onto the target object while synchronously shooting it at the fourth exposure intensity to obtain fourth gray values; acquiring the fifth pixel points corresponding to the largest gray value that is not overexposed and the smallest gray value that is not underexposed among the fourth gray values; and determining, based on the ideal exposure intensities corresponding to the fifth pixel points and the grouping statistical result, the fourth pixel points that can be imaged at the fourth exposure intensity.
In some embodiments, the grouping statistical result is a frequency distribution graph of pixel count versus ideal exposure intensity, and determining the ideal exposure setting of the target object from the grouping statistical result comprises: acquiring a second number of exposures; dividing the frequency distribution graph into that many parts of equal area to obtain second partial graphs; obtaining the ideal exposure intensity at the center of gravity of each second partial graph; and determining the ideal exposure setting from those ideal exposure intensities.
In some embodiments, the ideal exposure setting comprises a sixth exposure intensity and a seventh exposure intensity, and the target object comprises a sixth pixel point. Performing three-dimensional reconstruction of the target object by the structured light technique under the ideal exposure setting then comprises: projecting the target measurement image onto the sixth pixel point; synchronously shooting the sixth pixel point at the sixth exposure intensity to obtain a fifth gray value that is not overexposed; synchronously shooting the sixth pixel point at the seventh exposure intensity to obtain a sixth gray value that is not overexposed; and, if the fifth gray value is larger than the sixth gray value, performing the three-dimensional reconstruction of the sixth pixel point by the structured light technique at the sixth exposure intensity.
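The per-pixel choice between the two intensities above can be sketched as follows; the overexposure threshold of 250 is an assumed value for 8-bit gray levels, not taken from the patent:

```python
def choose_intensity(gray_a, gray_b, intensity_a, intensity_b, overexposed=250):
    """For one pixel shot at two intensities, use the intensity
    whose capture is not overexposed and has the larger gray value
    (i.e. the better signal-to-noise ratio)."""
    a_ok = gray_a < overexposed
    b_ok = gray_b < overexposed
    if a_ok and (not b_ok or gray_a >= gray_b):
        return intensity_a
    if b_ok:
        return intensity_b
    return None  # both captures overexposed: this pixel cannot be used here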
The embodiment of the present disclosure provides an apparatus for three-dimensional reconstruction, which includes: the device comprises a target gray value acquisition module, an ideal exposure intensity acquisition module, a grouping statistical result acquisition module and an ideal exposure setting acquisition module.
The target gray value obtaining module may be configured to project a first prediction image to a target object, and synchronously shoot the target object through a first exposure intensity to obtain a target gray value of each pixel of the target object. The ideal exposure intensity obtaining module may be configured to determine an ideal exposure intensity corresponding to each pixel of the target object according to an ideal gray value, the first exposure intensity, and a target gray value of each pixel. The grouping statistic result obtaining module may be configured to perform grouping statistics on each pixel point of the target object according to the ideal exposure intensity to obtain a grouping statistic result. The ideal exposure setting obtaining module may be configured to determine an ideal exposure setting of the target object according to the grouping statistics, so as to perform three-dimensional reconstruction on the target object through a structured light technique according to the ideal exposure setting.
The disclosed embodiment provides a logic circuit, which includes: a programmable logic chip. The programmable logic chip can realize the method for three-dimensional reconstruction.
The disclosed embodiments provide a system for three-dimensional reconstruction, the system for three-dimensional reconstruction comprising: a target projection device, a target image acquisition device and an ideal exposure setting determination device.
Wherein the target projection device can project the first prediction image to the target object. The target image acquisition device can synchronously shoot the target object through first exposure intensity to obtain a target gray value of each pixel point of the target object while projecting the first prediction image to the target object. The ideal exposure setting determination device may determine ideal exposure intensities corresponding to respective pixel points of the target object according to an ideal gray value, the first exposure intensity and a target gray value of the respective pixel points, perform grouping statistics on the respective pixel points of the target object according to the ideal exposure intensities to obtain a grouping statistical result, and determine an ideal exposure setting of the target object according to the grouping statistical result, so as to perform three-dimensional reconstruction on the target object by a structured light technique according to the ideal exposure setting.
In some embodiments, the ideal exposure setting determination device is a logic circuit.
An embodiment of the present disclosure provides an electronic device, including: one or more processors; a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method for three-dimensional reconstruction as recited in any one of the above.
The disclosed embodiments provide a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements a method for three-dimensional reconstruction as described in any of the above.
The method, apparatus, system, logic circuit, electronic device and computer-readable storage medium for three-dimensional reconstruction provided by some embodiments of the present disclosure obtain the ideal exposure intensity of each pixel point of the target object, determine an ideal exposure setting for the target object through a statistical analysis of those ideal exposure intensities, and perform three-dimensional reconstruction of the target object under the ideal exposure setting by a structured light technique, thereby improving the reconstruction success rate of the pixel points of the target object and its three-dimensional imaging quality.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. The drawings described below are merely some embodiments of the present disclosure, and other drawings may be derived from those drawings by those of ordinary skill in the art without inventive effort.
Fig. 1 is a schematic configuration diagram illustrating a computer system applied to an apparatus for three-dimensional reconstruction according to an exemplary embodiment.
Fig. 2 is a view showing an area array structured light system according to the related art.
FIG. 3 is a flow chart illustrating a method for three-dimensional reconstruction in accordance with an exemplary embodiment.
Fig. 4 is a diagram illustrating a grouping of pixel points according to ideal exposure intensity in accordance with an exemplary embodiment.
Fig. 5 is a flowchart of step S2 in fig. 3 in an exemplary embodiment.
Fig. 6 is a flowchart of step S21 in fig. 5 in an exemplary embodiment.
Fig. 7 is a flowchart of step S2 in fig. 3 in an exemplary embodiment.
Fig. 8 is a flowchart of step S23 in fig. 7 in an exemplary embodiment.
Fig. 9 is a flowchart of step S4 in fig. 3 in an exemplary embodiment.
FIG. 10 is a diagram illustrating a grouping of pixel points according to ideal exposure intensity in accordance with an exemplary embodiment.
FIG. 11 is a flowchart of step S41 of FIG. 9 in an exemplary embodiment.
Fig. 12 is a schematic diagram illustrating a frequency distribution graph equally divided according to the first number of exposures, according to an exemplary embodiment.
FIG. 13 is a flowchart of step S42 of FIG. 9 in an exemplary embodiment.
Fig. 14 is a flowchart of step S422 in fig. 13 in an exemplary embodiment.
FIG. 15 is a schematic diagram illustrating the determination of the imageable group at the fourth exposure intensity, according to an exemplary embodiment.
FIG. 16 is a flowchart illustrating a method for obtaining the fourth pixel points of the target object that can be imaged at the fourth exposure intensity, according to an exemplary embodiment.
FIG. 17 is a diagram illustrating a method of determining imageable pixels for various exposure intensities in a candidate exposure setting in accordance with an exemplary embodiment.
FIG. 18 is a flowchart of step S4 of FIG. 3 in an exemplary embodiment.
FIG. 19 illustrates a method for three-dimensional reconstruction, according to an exemplary embodiment.
FIG. 20 is an illustration of a system for three-dimensional reconstruction, in accordance with an exemplary embodiment.
FIG. 21 is an image data input schematic diagram, shown in accordance with an exemplary embodiment.
FIG. 22 is an image data input schematic diagram, shown in accordance with an exemplary embodiment.
Fig. 23 is a schematic view of a region of interest shown in accordance with an exemplary embodiment.
FIG. 24 is a block diagram illustrating an apparatus for three-dimensional reconstruction in accordance with an exemplary embodiment.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The same reference numerals denote the same or similar parts in the drawings, and thus, a repetitive description thereof will be omitted.
The described features, structures, or characteristics of the disclosure may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and the like. In other instances, well-known methods, devices, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the disclosure.
The drawings are merely schematic illustrations of the present disclosure, in which the same reference numerals denote the same or similar parts, and thus, a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and steps, nor do they necessarily have to be performed in the order described. For example, some steps may be decomposed, and some steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
In this specification, the terms "a", "an", "the", "said" and "at least one" are used to indicate the presence of one or more elements/components/etc.; the terms "comprising," "including," and "having" are intended to be inclusive and mean that there may be additional elements/components/etc. other than the listed elements/components/etc.; the terms "first," "second," and "third," etc. are used merely as labels, and are not limiting on the number of their objects.
The following detailed description of exemplary embodiments of the disclosure refers to the accompanying drawings.
Fig. 1 is a schematic configuration diagram illustrating a computer system applied to an apparatus for three-dimensional reconstruction according to an exemplary embodiment.
Referring now to FIG. 1, a block diagram of a computer system 100 suitable for implementing a terminal device of the embodiments of the present application is shown. The terminal device shown in fig. 1 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 1, the computer system 100 includes a Central Processing Unit (CPU) 101 that can perform various appropriate actions and processes according to a program stored in a Read-Only Memory (ROM) 102 or a program loaded from a storage section 108 into a Random Access Memory (RAM) 103. In the RAM 103, various programs and data necessary for the operation of the system 100 are also stored. The CPU 101, ROM 102, and RAM 103 are connected to each other via a bus 104. An input/output (I/O) interface 105 is also connected to bus 104.
The following components are connected to the I/O interface 105: an input portion 106 including a keyboard, a mouse, and the like; an output section 107 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 108 including a hard disk and the like; and a communication section 109 including a network interface card such as a LAN card, a modem, or the like. The communication section 109 performs communication processing via a network such as the internet. A drive 110 is also connected to the I/O interface 105 as needed. A removable medium 111 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 110 as necessary, so that a computer program read out therefrom is mounted into the storage section 108 as necessary.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 109, and/or installed from the removable medium 111. The above-described functions defined in the system of the present application are executed when the computer program is executed by the Central Processing Unit (CPU) 101.
It should be noted that the computer readable storage medium shown in the present application can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable storage medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable storage medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules and/or sub-modules and/or units and/or sub-units described in the embodiments of the present application may be implemented by software or hardware. The described modules and/or sub-modules and/or units and/or sub-units may also be provided in a processor, which may be described as: a processor includes a transmitting unit, an obtaining unit, a determining unit, and a first processing unit. Wherein the names of these modules and/or sub-modules and/or units and/or sub-units in some cases do not constitute a limitation of the modules and/or sub-modules and/or units and/or sub-units themselves.
As another aspect, the present application also provides a computer-readable storage medium, which may be contained in the apparatus described in the above embodiments; or may be separate and not incorporated into the device. The computer readable storage medium carries one or more programs which, when executed by a device, cause the device to perform functions including: projecting a first predicted image to a target object, and synchronously shooting the target object through first exposure intensity to obtain a target gray value of each pixel point of the target object; determining ideal exposure intensity corresponding to each pixel point of the target object according to the ideal gray value, the first exposure intensity and the target gray value of each pixel point; performing grouping statistics on each pixel point of the target object according to the ideal exposure intensity to obtain a grouping statistical result; and determining the ideal exposure setting of the target object according to the grouping statistical result so as to carry out three-dimensional reconstruction on the target object through a structured light technology according to the ideal exposure setting.
Structured light techniques can be divided into point, line, and surface (area) structured light techniques according to the dimensionality of the acquired data. The projection light source of the point structured light technique is a single light spot, the image sensor is usually a linear-array image sensor, and the technique obtains the depth of a single spot, i.e., one-dimensional information (Z). The projection light source of the line structured light technique is a line, the image sensor may be an area-array image sensor, and the technique obtains the depth information of a line array, i.e., two-dimensional information (X, Z). The projection light source of the surface structured light technique is an area-array light source, the image sensor is an area-array image sensor, and the technique obtains the depth information of an area array, i.e., three-dimensional information (X, Y, Z). Although the point and line structured light techniques cannot directly obtain complete three-dimensional information including X, Y, and Z, in practical applications the data dimensionality can be raised through mechanical scanning to finally obtain three-dimensional information, so the point and line structured light techniques are still generally considered to belong to three-dimensional reconstruction technology.
To facilitate the explanation of the principles and concepts of the present disclosure, the principles and steps of a typical three-step sinusoidal fringe phase-shifting system will first be described.
It should be noted that the technical solution provided in the embodiment of the present disclosure may be applied to a surface structured light technology, a line structured light technology, and other three-dimensional reconstruction technologies, and the present disclosure does not limit this.
Fig. 2 is a view showing an area array structured light system according to the related art. As illustrated in fig. 2, the area-array structured light system may include a projection light source 201, an area-array camera 202, and a target object 203.
The projection light source 201 may project the projection image 204 onto the target object 203, and the area-array camera 202 may synchronously acquire images of the target object during projection. The projection image 204 projected onto the target object 203 may consist of three sinusoidal stripe patterns; the brightness of each pixel in each projection image varies sinusoidally with the abscissa of the pixel on the target surface of the light source, and the field of view may contain a plurality of sinusoidal periods. In addition, the phases of the three projection images may differ: for example, the phase difference between each two of the three fringe images is 2π/3, where the phase of the first image is 0, the phase of the second image is 2π/3, and the phase of the third image is 4π/3.
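As an illustration only (not part of the claimed method), three such phase-shifted sinusoidal patterns could be generated as follows; the image width, number of periods, and 8-bit gray range are arbitrary example parameters:

```python
import math

def fringe_rows(width, periods, phases=(0.0, 2 * math.pi / 3, 4 * math.pi / 3)):
    """One row of each phase-shifted sinusoidal fringe pattern, as 8-bit gray values.
    Brightness varies sinusoidally with the abscissa x on the light source target surface."""
    mid, amp = 127.5, 127.5  # map sin() in [-1, 1] onto the 0..255 gray range
    rows = []
    for phase in phases:
        row = [round(mid + amp * math.sin(2 * math.pi * periods * x / width + phase))
               for x in range(width)]
        rows.append(row)
    return rows

rows = fringe_rows(width=240, periods=3)
```

Every full image would simply repeat such a row along the other axis, since the brightness depends only on the abscissa.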
In some embodiments, three images P1, P2, P3 may be obtained by projecting the three projection images onto the target object 203 and shooting synchronously. Correspondingly, P1 corresponds to a phase of 0, P2 to a phase of 2π/3, and P3 to a phase of 4π/3.
A pixel point A with pixel coordinates (Xc, Yc) on the target surface of the area-array camera has gray values G1, G2, G3 in P1, P2, P3, respectively. The expressions of G1, G2, G3 may be:
G1 = I·(Ra·sin α + Rb) (1)
G2 = I·(Ra·sin(α + 2π/3) + Rb) (2)
G3 = I·(Ra·sin(α + 4π/3) + Rb) (3)
wherein:
I is the exposure intensity of the area-array camera; it describes the sensitivity of the camera pixels for the entire system and is positively correlated with the exposure time and the camera gain;
Ra is the reflectivity of the measured object at the position of pixel point A;
α is the modulation phase of pixel point A, which is in a linear relation with the abscissa of the modulated light on the light source target surface;
Rb is the ambient light radiation intensity around the target object.
Since the pixel cells of the image sensor have a fixed maximum data value (for example, an 8-bit A/D conversion result is at most 255), overexposure causes the gray value of a pixel to stop increasing with the amount of light radiation received by the pixel cell. Therefore, the above formulas hold on the premise that the corresponding pixel is neither overexposed (gray value too high) nor underexposed (gray value too low).
The following solution can be obtained by combining equations (1) to (3):
α = arctan((2G1 − G2 − G3) / (√3·(G2 − G3))) (4)
in the structured light technology, after the modulation phase of a target pixel point under each projection image is obtained, the corresponding relation between a camera pixel and a light source pixel can be established, the calculation of a triangulation method is completed, and the three-dimensional coordinate information of a measured point is obtained.
Generally, a pixel point is considered to be incapable of being reconstructed in the following three cases.
(1) No measured object within the effective depth of field of the system. In this case G1, G2, G3 may be very close, and their variation is caused mainly by system noise, so the modulation phase cannot be reliably determined from equation (4): although α can still be solved as long as 2G1 ≠ G2 + G3, the result is meaningless.
(2) The gray value of the pixel is too large (overexposure). When the exposure intensity is too high, the pixel point will be overexposed, the gray value obtained by the sensor no longer conforms to the formulas (1) - (3), and the modulation phase α solved by the formula (4) will contain a large error. Therefore, the pixel points corresponding to the excessively large gray values should be marked as being non-reconstructable.
(3) The gray value of the pixel is too small (underexposure). When the exposure intensity is too low, G1, G2, G3 will be very close, which behaves the same as when there is no measured object within the depth of field, and such pixels should also be marked as non-reconstructable.
Generally, if none of the above three conditions occurs for a pixel, three-dimensional reconstruction can be performed effectively. In that case, the higher the exposure intensity, the better the signal-to-noise ratio of the pixel imaging, and the higher the precision of the calculated modulation phase α. Therefore, for structured-light three-dimensional reconstruction, it is desirable that the exposure intensity corresponding to each pixel point during imaging be sufficiently high.
For a pixel, the ideal exposure intensity should make its gray value G close to pixel saturation.
In the present disclosure, a gray value close to pixel saturation is referred to as the ideal gray value GIdeal. The ideal gray value GIdeal should leave a certain margin below the saturation value to avoid overexposure of pixels due to random noise. For example, for 8-bit analog-to-digital-converted image data the gray value ranges from 0 to 255, so GIdeal may be set to 220.
For a single pixel, the gray value G is proportional to the exposure intensity I. If the exposure intensity can be adjusted such that G = GIdeal, the best reconstruction can be achieved; this exposure intensity is recorded as IIdeal. Therefore, it is desirable to find a set of ideal exposure settings such that as many pixels as possible reach the ideal gray value.
In structured light systems, a number of adjustable parameters affect, linearly or nearly linearly, the gray values recorded by the image sensor pixels per unit time, such as camera gain, camera exposure time, light source radiation intensity, and light source image modulation amplitude. In this disclosure, the terms exposure intensity and exposure setting refer broadly to all such parameters. In a specific implementation, one or more of the adjustable parameters that affect the recorded gray value may be adjusted according to the actual situation of the system, so as to adjust the exposure intensity.
The embodiment of the disclosure provides a method for three-dimensional reconstruction, which can provide an ideal exposure setting for structured light technology, and can image as many pixel points as possible by performing three-dimensional reconstruction on a target object through the ideal exposure setting, and can also improve the quality of three-dimensional imaging of the target object.
FIG. 3 is a flow chart illustrating a method for three-dimensional reconstruction in accordance with an exemplary embodiment. The method provided by the embodiment of the present disclosure can be processed by any electronic device with computing processing capability, and the present disclosure does not limit this.
Referring to fig. 3, a method for three-dimensional reconstruction provided by an embodiment of the present disclosure may include the following steps.
In step S1, the first prediction image is projected onto the target object, and the target object is synchronously photographed through the first exposure intensity to obtain the target gray scale value of each pixel of the target object.
In some embodiments, the first prediction image may be a sine stripe image, a binary stripe, a full white image, and so on, which is not limited by this disclosure.
In some embodiments, the first prediction image may be projected to the target object through a target projection device in the structured light system, and the target object is synchronously acquired through an image acquisition device in the structured light system, so as to obtain the target gray values of the respective pixel points.
It is to be understood that, since the target gray-scale value can be obtained by different first prediction images, the first prediction image of the target object can be implemented in various ways, and the present disclosure does not limit this.
In step S2, an ideal exposure intensity corresponding to each pixel of the target object is calculated according to the ideal gray value, the first exposure intensity, and the target gray value of each pixel.
In this embodiment, a target pixel of a target object is taken as an example to explain how to obtain an ideal exposure intensity of the target pixel.
In the structured light technology, different measurement shot images can obtain different gray values for the same pixel point. Generally, the larger the gray value is, the better the signal-to-noise ratio of the pixel point for three-dimensional reconstruction is.
In some embodiments, a plurality of images may be projected onto the target pixel, any one of the projected images may be a target measurement image, and a gray value obtained after the target measurement image is projected onto the target pixel is a target gray value.
In some embodiments, the relationship between the target gray-scale value of the target pixel and the exposure intensity I corresponding to the acquired target gray-scale value can be described by formula (5).
Gp = Ip·(Ra·C + Rb) (5)
Wherein:
Gp is the target gray value of the target pixel point;
Ip is the exposure intensity applied when the target gray value of the target pixel point is obtained;
Ra is the reflectivity of the target object at the position of the target pixel point;
C is a gray scale enhancement coefficient describing the ability of a particular projected image to increase the gray value;
Rb is the ambient light radiation intensity of the environment in which the target object is located.
In some embodiments, the ideal gray-level value of the target pixel point may be set in advance, for example, may be set to 220. Under the condition that overexposure does not occur, the higher the exposure intensity of the camera is, the higher the gray value obtained by shooting the target pixel point through the camera is, and the better the signal-to-noise ratio of the three-dimensional reconstruction of the target pixel is finally. Therefore, through the ideal exposure setting provided by the embodiment of the disclosure, as many pixel points as possible in the target object can be imaged, and the gray values corresponding to as many pixel points as possible can reach or approach the ideal gray values.
In some embodiments, if the first predicted image projected to the target pixel is the same as the target measurement pattern of the target pixel, the relationship between the ideal gray value and the ideal exposure intensity of the target pixel can be described by formula (6).
GIdeal = IIdeal·(Ra·C + Rb) (6)
Wherein,
GIdeal is the ideal gray value of the target pixel point;
IIdeal is the ideal exposure intensity of the target pixel point;
Ra is the reflectivity of the target object at the position of the target pixel point;
C is a gray scale enhancement coefficient describing the ability of a particular projected image to increase the gray value;
Rb is the ambient light radiation intensity of the environment in which the target object is located.
By combining equation (5) and equation (6), equation (7) can be obtained, which describes the relationship among the ideal exposure intensity IIdeal, the ideal gray value GIdeal, the target gray value Gp, and the exposure intensity Ip corresponding to the target gray value:
IIdeal = Ip·(GIdeal/Gp) (7)
In some embodiments, if the first predicted image projected to the target pixel point differs from the target measurement image of the target pixel point, and the ambient light radiation intensity of the environment where the target object is located is negligible, the relationship between the target gray value Gp under the first predicted image and the applied exposure intensity Ip can be described by formula (8), and the relationship between the ideal gray value GIdeal under the target measurement image and the ideal exposure intensity IIdeal can be described by formula (9):
Gp = Ip·Ra·CP (8)
GIdeal = IIdeal·Ra·CM (9)
wherein CP is the gray scale enhancement coefficient of the first predicted image and CM is the gray scale enhancement coefficient of the target measurement image.
Combining equation (8) and equation (9) yields:
IIdeal = Ip·(GIdeal/Gp)·(CP/CM) (10)
wherein CP/CM is a value measured in advance. The measuring method is as follows: using the same structured light system and the same exposure intensity I, a measured object with uniform reflectivity is shot with the first predicted image and with the target measurement image, respectively, under conditions of no overexposure, sufficient signal-to-noise ratio, and no ambient light interference. The gray value of each pixel point is calculated for each of the two shots; the average gray value of all pixel points under the first predicted image is denoted ḠP, and the average gray peak value of all pixel points under the target measurement image is denoted ḠM. Formula (11) can then be obtained:
CP/CM = ḠP/ḠM (11)
Substituting formula (11) into formula (10) then determines the ideal exposure intensity of the target pixel point.
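A minimal sketch of equations (10) and (11), assuming the uniform-reflectivity calibration shots are already available as flat lists of gray values (all names and numbers are illustrative):

```python
def coefficient_ratio(pred_grays, meas_grays):
    """Equation (11): CP/CM estimated as the ratio of the average gray values
    obtained on a uniform-reflectivity object under the same exposure intensity."""
    return (sum(pred_grays) / len(pred_grays)) / (sum(meas_grays) / len(meas_grays))

def ideal_exposure_cross(g_p, i_p, cp_over_cm, g_ideal=220.0):
    """Equation (10): IIdeal = Ip * (GIdeal / Gp) * (CP / CM)."""
    return i_p * (g_ideal / g_p) * cp_over_cm

ratio = coefficient_ratio([100, 102, 98], [200, 204, 196])  # averages 100 and 200
i_ideal = ideal_exposure_cross(g_p=110.0, i_p=10.0, cp_over_cm=ratio)
```

The reflectivity Ra cancels between equations (8) and (9), which is why only gray values and exposure intensities appear here.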
In step S3, performing grouping statistics on each pixel point of the target object according to the ideal exposure intensity to obtain a grouping statistical result.
In some embodiments, after obtaining the ideal exposure intensity of each pixel of the target object, the pixel may be grouped and counted according to the exposure intensity.
For example, the grouping statistics can be performed on each pixel point by the following steps.
Pixel points in the target object that are overexposed or underexposed are eliminated according to the target gray value. The maximum and minimum ideal exposure intensities are taken as the upper and lower limits Imax and Imin of the ideal exposure intensity in the grouping statistical process. As shown in fig. 4, the total grouping range may be divided into n regions (n being a positive integer greater than 1) numbered 1 to n, each region being referred to as an ideal exposure intensity group and labeled Ki (1 ≤ i ≤ n, i ∈ N, where N denotes the natural numbers); adjacent groups are concatenated at their upper and lower bounds, and each group has a counter Cnti. Before statistics begin, the counter of each group is cleared. Each pixel is then traversed: if the current pixel is not marked as an invalid point (no overexposure or underexposure has occurred) and its ideal exposure intensity falls within the range of a certain group Ki, the counter Cnti of that group is incremented by 1.
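The grouping pass above can be sketched as follows, here with uniform-width groups between Imin and Imax (function and variable names are hypothetical):

```python
def group_ideal_exposures(ideal_intensities, invalid_flags, i_min, i_max, n):
    """Count valid pixels into n contiguous ideal-exposure-intensity groups K1..Kn
    spanning [i_min, i_max]; over/under-exposed (invalid) pixels are skipped."""
    counters = [0] * n
    width = (i_max - i_min) / n
    for intensity, invalid in zip(ideal_intensities, invalid_flags):
        if invalid:
            continue
        k = min(int((intensity - i_min) / width), n - 1)  # clamp i_max into Kn
        counters[k] += 1
    return counters

counts = group_ideal_exposures([1.0, 2.0, 3.0, 10.0],
                               [False, False, True, False],
                               i_min=1.0, i_max=10.0, n=3)
```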
In step S4, an ideal exposure setting of the target object is determined according to the grouping statistics, so that the target object is three-dimensionally reconstructed by a structured light technique according to the ideal exposure setting.
In some embodiments, the ideal exposure setting may be determined based on the grouping statistics of the individual pixel points.
For example, a certain exposure intensity in the group of ideal exposure intensities with a larger number of pixels may be used as the ideal exposure intensity in the ideal exposure setting; for another example, a plurality of candidate exposure settings may be manually set, and then the number of pixels that can be covered by each candidate setting is evaluated through a grouping statistical result to determine an ideal exposure setting; for another example, a plurality of candidate exposure settings may be manually set, and then the number of pixels that can be covered by each candidate setting and the imaging accuracy of each pixel point are evaluated through a grouping statistical result to determine an ideal exposure setting; as another example, an algorithm may be set to select a number of candidate exposure settings according to a certain rule and evaluate their imaging quality through group statistics to determine the ideal exposure setting. Any desired exposure setting that can be determined by the above-described grouping statistics is within the scope of the present disclosure, which is not limited by the present disclosure.
According to the technical scheme provided by the embodiment, the ideal exposure setting of the target object is determined based on the statistical analysis result by performing statistical analysis on the ideal exposure intensity of each pixel point of the target object. The target object is subjected to three-dimensional reconstruction through the ideal exposure setting, the ideal exposure intensity of each pixel point in the target object is considered, and the reconstruction imaging rate and the imaging quality of the target object can be improved.
In some embodiments, the target object to be three-dimensionally reconstructed may include a first pixel point, and a target gray value corresponding to the first pixel point is referred to as a first gray value. Since the obtaining process of the ideal exposure intensity of each pixel point in the target object is substantially the same, the first pixel point will be taken as an example for the description in this embodiment, but the disclosure does not limit this.
When the first predicted image of the first pixel point is the same as the target measurement image of the first pixel point, the method provided in the embodiment shown in fig. 5 may be adopted to determine the ideal exposure intensity of the first pixel point.
In some embodiments, the target measurement image of the first pixel point may refer to any one of the images projected onto the first pixel point when three-dimensional reconstruction is performed on it through the structured light technology.
Fig. 5 is a flowchart of step S2 in fig. 3 in an exemplary embodiment. Referring to fig. 5, the above-mentioned step S2 may include the following steps.
In step S21, according to the ideal gray value, the first gray value, and the first exposure intensity, a first target exposure intensity required by the first pixel point when the ideal gray value is obtained based on the target measurement image is obtained.
Generally, under the condition that overexposure does not occur, the larger the exposure intensity, the larger the gray value of the first pixel point, and ultimately the better the signal-to-noise ratio of its three-dimensional reconstruction. In order to determine the ideal exposure intensity of the first pixel point, an ideal gray value may be set for it in advance, for example 220.
From equation (5), the relationship between the exposure intensity Ip applied when shooting the first pixel point under the target measurement image and the target gray value Gp can be determined.
From equation (6), the relationship between the ideal exposure intensity IIdeal and the ideal gray value GIdeal can be determined.
Combining equation (5) and equation (6) yields equation (7), which relates the ideal exposure intensity IIdeal to the ideal gray value GIdeal, the target gray value Gp, and the exposure intensity Ip corresponding to the target gray value.
The ideal gray value, the first gray value and the first exposure intensity of the first pixel point are processed through a formula (7), and then the first target exposure intensity required by the first pixel point when the ideal gray value is obtained based on the target measurement image can be obtained.
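Equation (7) in code form — a sketch assuming an 8-bit sensor; the validity guards reflect the overexposure/underexposure caveat stated with equations (1)–(3):

```python
def first_target_exposure(g_first, i_first, g_ideal=220.0, g_max=255):
    """Equation (7): the exposure intensity at which the pixel's gray value
    would reach g_ideal, valid only if the prediction shot was not clipped."""
    if not 0 < g_first < g_max:   # over- or under-exposed: eq. (5) no longer holds
        return None
    return i_first * g_ideal / g_first

first_target = first_target_exposure(g_first=110.0, i_first=10.0)
```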
In step S22, the ideal exposure intensity of the first pixel point is determined according to the first target exposure intensity.
In some embodiments, the first target exposure intensity may be directly used as the ideal exposure intensity of the first pixel, which is not limited by the present disclosure.
The technical scheme provided by the embodiment can accurately and efficiently determine the ideal exposure intensity of the first pixel point.
When the ideal exposure intensity is predicted with a single exposure intensity Ip, the coverage of pixel reflectivities is limited. When the reflectivity Ra of a pixel point is below the range acceptable at that exposure intensity, the quantization error and random noise of the predicted ideal exposure intensity are relatively large and the signal-to-noise ratio is poor, so the final result is inaccurate; when the reflectivity Ra of a pixel point is above the acceptable range, the predicted image is overexposed and no accurate result can be obtained.
The following scheme is adopted in the embodiment, so that the problems can be effectively avoided.
Fig. 6 is a flowchart of step S21 in fig. 5 in an exemplary embodiment. Referring to fig. 6, the above-mentioned step S21 may include the following steps.
In step S221, the first prediction image is projected onto the first pixel, and the first pixel is synchronously photographed through a second exposure intensity to obtain a second gray scale value.
In step S222, according to the ideal gray value, the second gray value, and the second exposure intensity, a second target exposure intensity required by the first pixel point when the ideal gray value is obtained based on the target measurement image is obtained.
In step S223, if the first target exposure intensity is greater than the second target exposure intensity, the first target exposure intensity is taken as the ideal exposure intensity of the first pixel point.
According to the technical scheme provided by this embodiment, the first pixel point is exposed multiple times, the ideal exposure intensity is predicted from each exposure, and the maximum of the predicted values is used as the final predicted ideal exposure intensity. This avoids an inaccurate prediction of the ideal exposure intensity when the reflectivity of the first pixel point is not within the prediction range of a single exposure.
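One way to realize the take-the-maximum idea above (a hypothetical helper; shots are (gray value, exposure intensity) pairs, and the equation-(7) relation is assumed):

```python
def best_prediction(shots, g_ideal=220.0, g_max=255):
    """Predict IIdeal from each non-clipped shot via equation (7) and keep the
    maximum, so a shot outside a pixel's usable range cannot skew the estimate."""
    estimates = [i * g_ideal / g for g, i in shots if 0 < g < g_max]
    return max(estimates) if estimates else None

result = best_prediction([(255.0, 5.0),   # overexposed shot, discarded
                          (110.0, 10.0),  # valid: 10 * 220 / 110 = 20
                          (44.0, 1.0)])   # valid but lower-SNR: 1 * 220 / 44 = 5
```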
In some embodiments, the target object to be three-dimensionally reconstructed may include a second pixel, a target gray value corresponding to the second pixel may be a third gray value, and the first prediction image for predicting the ideal exposure intensity of the second pixel may be different from the target measurement image of the second pixel.
Fig. 7 is a flowchart of step S2 in fig. 3 in an exemplary embodiment. Referring to fig. 7, the above-mentioned step S2 may include the following steps.
In step S23, the ratio of the gray scale enhancement coefficients of the target measured image of the second pixel point and the first predicted image is determined.
In some embodiments, if the ambient light radiation intensity of the environment where the target object is located is negligible, the relationship between the target gray value Gp of the second pixel point under the first predicted image and the applied exposure intensity Ip can be described by formula (8), and the relationship between the ideal gray value GIdeal of the second pixel point under the target measurement image and the ideal exposure intensity IIdeal can be described by formula (9).
In step S24, according to an ideal gray value, the third gray value, the first exposure intensity, and the gray enhancement coefficient ratio, an ideal exposure intensity required by the second pixel point when the ideal gray value is obtained based on the target measurement image is obtained.
Combining formula (8) and formula (9) yields equation (10), from which the ideal exposure intensity IIdeal of the second pixel point can be determined.
In some embodiments, different exposure intensities Ip may be employed to predict the ideal exposure intensity multiple times, and the prediction results may be combined. Preferably, among the successful predictions (i.e., those in which the predicted image is neither underexposed nor overexposed), the result obtained with the highest prediction exposure intensity Ip is used as the final ideal exposure intensity of the second pixel point.
The technical scheme provided by this embodiment can avoid an inaccurate prediction of the ideal exposure intensity of the second pixel point when its reflectivity is not within the exposure range of the first exposure intensity.
Fig. 8 is a flowchart of step S23 in fig. 7 in an exemplary embodiment. Referring to fig. 8, the above-mentioned step S23 may include the following steps.
In step S231, the first predicted image is projected onto a measured object with uniform reflectivity, and the object is synchronously photographed at a third exposure intensity, so as to obtain a first average gray value.
In some embodiments, in order to determine CP/CM in equation (10), first, at the third exposure intensity and under conditions of no overexposure, sufficient signal-to-noise ratio, and no ambient light interference, a measured object with uniform reflectivity is shot using the first predicted image; the gray value of each pixel point under the first predicted image is calculated, and the first average gray value ḠP of all pixel points is then computed.
In step S232, the target measurement image of the second pixel point is projected onto the target object, and the target object is synchronously photographed through the third exposure intensity, so as to obtain a second average gray value of the target object.
In some embodiments, the same structured light system may be used, again at the third exposure intensity and under conditions of no overexposure, sufficient signal-to-noise ratio, and no ambient light interference, to shoot the measured object with uniform reflectivity using the target measurement image of the second pixel point; the gray value of each pixel point under the target measurement image is calculated, and the second average gray value ḠM of all pixel points is then computed.
In step S233, the ratio of the gray scale enhancement coefficients of the target measurement image of the second pixel and the first prediction image is determined according to the ratio of the first average gray scale value to the second average gray scale value.
In some embodiments, the gray scale enhancement coefficient ratio can be determined according to equation (11): CP/CM = ḠP/ḠM.
Fig. 9 is a flowchart of step S4 in fig. 3 in an exemplary embodiment. Referring to fig. 9, the above-mentioned step S4 may include the following steps.
In step S41, a candidate exposure setting is acquired.
In some embodiments, the candidate exposure setting may include one exposure intensity or may include a plurality of exposure intensities, which are not limited by this disclosure.
In some embodiments, a plurality of candidate exposure intensities may be randomly acquired to generate candidate exposure settings; alternatively, the candidate exposure settings may be generated according to the grouping statistical result of the ideal exposure intensities of the pixel points, for example, according to the ideal exposure intensities in the groups where the distribution is relatively concentrated. The present disclosure does not limit how candidate exposure settings are acquired.
In step S42, the imaging quality of the target object at the candidate exposure setting is determined according to the grouping statistics.
In some embodiments, the imaging quality of each candidate exposure setting may be evaluated according to the grouping statistics result of each pixel point of the target object, for example, the imaging rate of the target object under each candidate exposure setting may be evaluated, which is not limited in this disclosure.
According to the technical scheme provided by the embodiment, based on the grouping statistical result of the ideal exposure intensity of each pixel point of the target object, the imaging quality of each candidate exposure setting can be comprehensively evaluated, so that the ideal exposure setting can be conveniently confirmed.
In some embodiments, grouping statistics may be performed on each pixel point of the target object according to the ideal exposure intensity, for example yielding the grouping statistical result shown in fig. 10, where the abscissa represents the ideal exposure intensity and the ordinate represents the number of pixel points.
In general, the upper and lower limits of the ideal exposure intensity groups may form a geometric (equal-ratio) series; that is, a group Ki (1 ≤ i ≤ n, i ∈ N) may cover the range:
c·b^(i−1) ≤ IIdeal < c·b^i
wherein n is a positive integer greater than 1, b is a constant greater than 1, and c is a positive constant.
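Assuming each group Ki spans [c·b^(i−1), c·b^i) — one concrete reading of the equal-ratio-series grouping — the group index of an ideal exposure intensity can be computed in closed form (illustrative sketch only):

```python
import math

def geometric_group_index(intensity, c=1.0, b=2.0):
    """1-based index i of the group Ki = [c*b**(i-1), c*b**i) containing intensity."""
    return math.floor(math.log(intensity / c) / math.log(b)) + 1

idx = geometric_group_index(5.0)  # with c=1, b=2: K3 = [4, 8) contains 5
```

Geometric bounds keep the relative width of every group constant, which matches the fact that exposure errors matter in proportion to the exposure intensity itself.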
In some embodiments, the number of pixels that can be imaged by the target object under each candidate exposure setting may be obtained according to the grouping statistics result, and the number is used as the imaging quality of the target object.
In some embodiments, a ratio of the number of imageable pixels under each candidate exposure setting to the total number of pixels of the target object may further be determined, so as to obtain the imaging quality of the target object under each candidate exposure setting. Through this ratio, the user can intuitively gauge the imaging quality of each candidate exposure setting and thus the evaluation result.
In step S43, an ideal exposure setting for the target object is determined among candidate exposure settings according to the imaging quality.
In some embodiments, candidate exposure settings with better imaging quality may be taken as ideal exposure settings, which the present disclosure does not limit.
In some embodiments, based on the grouping statistics performed on each pixel of the target object by the ideal exposure intensity, a frequency distribution graph of the number of pixels relative to the ideal exposure intensity as shown in fig. 10 can be obtained.
FIG. 11 is a flowchart of step S41 of FIG. 9 in an exemplary embodiment. Referring to fig. 11, the above-mentioned step S41 may include the following steps.
In step S411, a first exposure number is acquired.
In some embodiments, a first number of exposures may be set in advance as the number of exposure intensities in the ideal exposure setting.
In step S412, the frequency distribution map is divided equally according to the first exposure number to obtain a first division map.
In some embodiments, the frequency distribution map shown in fig. 12 may be divided equally by area according to the first exposure number to obtain a first division map.
In step S413, the ideal exposure intensity corresponding to the barycenter of each first partial map is acquired to generate the candidate exposure setting.
In some embodiments, the ideal exposure intensity corresponding to the center of gravity G of each first partial map (e.g., I1, I2, and I3 in fig. 12) may be acquired to generate the candidate exposure settings.
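A minimal sketch of the equal-area division and center-of-gravity computation, assuming the frequency distribution is given as a discrete histogram (parallel lists of intensities and pixel counts):

```python
def equal_area_split(intensities, counts, k):
    """Divide the frequency distribution (intensities[i] occurring counts[i]
    times) into k contiguous parts of roughly equal area, then return the
    ideal exposure intensity at each part's center of gravity (the
    count-weighted mean intensity of the part)."""
    target = sum(counts) / k
    parts, cur, acc = [], [], 0.0
    for x, c in zip(intensities, counts):
        cur.append((x, c))
        acc += c
        if acc >= target and len(parts) < k - 1:
            parts.append(cur)
            cur, acc = [], 0.0
    if cur:
        parts.append(cur)
    return [sum(x * c for x, c in p) / sum(c for _, c in p) for p in parts]
```

For a flat histogram over intensities 1..4 split into two parts, the centroids land at 1.5 and 3.5, which would be the two candidate exposure intensities.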
It will be appreciated that different candidate exposure settings may be generated based on different numbers of first exposures, and then the desired exposure setting with better imaging quality may be determined from the different candidate exposure settings.
In some embodiments, assuming that the number of exposures is at most N, which is a positive integer greater than or equal to 1, the ideal exposure setting may be determined by the following steps.
Step one: set the first exposure number to 1; find the first ideal exposure intensity corresponding to the center of gravity of the frequency distribution graph; determine a first candidate exposure setting according to this ideal exposure intensity, and determine the first imaging quality of the target object under the first candidate exposure setting; then judge whether the first imaging quality exceeds a preset threshold. If so, take the first ideal exposure intensity as the ideal exposure setting; if not, execute step two.
Step two: set the first exposure number to 2; divide the frequency distribution graph by area into two equal first partial maps; acquire the ideal exposure intensity corresponding to the center of gravity of each first partial map to generate a second candidate exposure setting; and obtain the second imaging quality of the target object under this candidate exposure setting. If the second imaging quality exceeds the preset threshold, the second candidate exposure setting may be taken as the ideal exposure setting; if not, continue to increment the first exposure number until an ideal exposure setting that meets the preset threshold is found.
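The incremental search in steps one and two can be sketched as follows; the candidate generator and quality evaluator are assumed to be supplied by the caller (for example, the equal-area centroid split of the frequency distribution and the imageable-pixel ratio described elsewhere in this disclosure):

```python
def find_ideal_setting(candidate_for, quality_of, threshold, max_n):
    """Grow the exposure count from 1 until the candidate exposure setting
    reaches the required imaging quality, or max_n exposures are reached.
    candidate_for(k) -> setting with k exposure intensities;
    quality_of(setting) -> imaging quality score."""
    setting = candidate_for(1)
    for k in range(1, max_n + 1):
        setting = candidate_for(k)
        if quality_of(setting) >= threshold:
            break
    return setting
```

With a toy quality function that grows with the number of exposures, the loop stops at the smallest exposure count meeting the threshold, matching the intent of imaging as many pixels as possible with as few exposures as possible.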
Using the ideal exposure setting provided by this embodiment for three-dimensional reconstruction allows as many pixel points as possible to be effectively imaged with as few exposures as possible.
FIG. 13 is a flowchart of step S42 of FIG. 9 in an exemplary embodiment. Referring to fig. 13, the above-mentioned step S42 may include the following steps.
In step S421, a third pixel point that can be imaged by the target object under the candidate exposure setting is obtained.
In some embodiments, an imageable pixel point corresponding to each exposure intensity in the candidate exposure setting may be obtained as a third pixel point.
In step S422, the imaging quality of the target object under the candidate exposure setting is determined according to the third pixel point.
In some embodiments, the number of the third pixel points may be counted and used as the imaging quality of the target object under the candidate exposure setting; alternatively, the proportion of the third pixel points among all effectively imageable pixel points of the target object (pixel points that are neither overexposed nor underexposed) may be counted and used as the imaging quality of the target object under the candidate exposure setting.
In some embodiments, if a pixel point is shot at an exposure intensity lower than its optimal exposure intensity, effective imaging may still be possible, but the imaging accuracy of the pixel point decreases as the shooting exposure intensity decreases. When evaluating an exposure setting, if pixel points whose optimal exposure intensities correspond to different imaging accuracies are grouped together and the pixel counts in the groups are simply accumulated with equal weight, a scheme may emerge that sacrifices the accuracy of most pixel points to satisfy the imaging completeness of a few.
In order to solve the problem, the embodiments of the present disclosure provide the following technical solutions.
In some embodiments, the candidate exposure setting comprises a fourth exposure intensity at which a fourth pixel point in the target object may be imaged.
Fig. 14 is a flowchart of step S422 in fig. 13 in an exemplary embodiment. Referring to fig. 14, the above-mentioned step S422 may include the following steps.
In step S4221, a target difference between the ideal exposure intensity corresponding to the fourth pixel point and the fourth exposure intensity is obtained.
In step S4222, the weight of the fourth pixel point is determined according to the target difference.
In some embodiments, if the ideal exposure intensity of the fourth pixel point is much greater than the fourth exposure intensity, then when the fourth pixel point is shot at the fourth exposure intensity, its imaging accuracy may decrease, lowering the imaging quality of the target object. If the ideal exposure intensity of the fourth pixel point is smaller than the fourth exposure intensity, shooting the fourth pixel point at the fourth exposure intensity can at most cause overexposure (overexposed pixel points are generally rejected), and the imaging quality of the target object is not reduced. Therefore, as shown in fig. 15, within the dynamic range covered by the fourth exposure intensity I1 (the range of pixel points imageable at the fourth exposure intensity), pixel points whose ideal exposure intensity is greater than I1 may be assigned a smaller weight (for example 0.99, 0.98, 0.97, etc.), while pixel points whose ideal exposure intensity is less than I1 may be assigned a weight of 1.
In step S4223, the numbers of the fourth pixel points are weighted by the above weights and summed to determine the imaging quality at the fourth exposure intensity.
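The weighting in steps S4221 to S4223 might be sketched as below; the per-unit accuracy penalty and the (low, high) representation of the covered dynamic range are illustrative assumptions.

```python
def weighted_imaging_quality(ideal_intensities, exposure_intensity,
                             coverage, penalty=0.01):
    """Weight each imageable pixel by how far its ideal exposure intensity
    exceeds the shooting intensity (accuracy loss grows with the deficit);
    pixels at or below the shooting intensity keep full weight 1."""
    low, high = coverage
    quality = 0.0
    for v in ideal_intensities:
        if not (low <= v <= high):
            continue  # not imageable at this exposure intensity
        if v <= exposure_intensity:
            quality += 1.0
        else:
            quality += max(0.0, 1.0 - penalty * (v - exposure_intensity))
    return quality
```

Three pixels with ideal intensities 1.0, 2.0, and 3.0 evaluated at exposure 2.0 over the range (1.0, 3.0) contribute 1 + 1 + 0.99 = 2.99, so the accuracy-compromised pixel counts slightly less than a fully accurate one.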
In this embodiment, how to determine the imageable fourth pixel point of the target object at the fourth exposure intensity will be explained by taking the fourth exposure intensity in the candidate exposure setting as an example.
FIG. 16 is a flowchart illustrating a method for obtaining a fourth pixel point of a target object that is imageable at a fourth exposure intensity in accordance with an exemplary embodiment. Referring to fig. 16, the above method may include the following steps.
In step S4224, a second prediction image is projected onto the target object, and the target object is synchronously photographed at the fourth exposure intensity to obtain a fourth gray value.
In some embodiments, in order to accurately obtain the imageable fourth pixel point of the fourth exposure intensity, the second predicted image may be projected to the target object through the structured light system, and then the fourth gray value of each pixel point of the target object is obtained.
In step S4225, the fifth pixel points corresponding to the maximum non-overexposed gray value and the minimum non-underexposed gray value among the fourth gray values are acquired.
In step S4226, based on the ideal exposure intensity corresponding to the fifth pixel point, determining, according to the grouping statistic result, a fourth pixel point of the target object that can be imaged at the fourth exposure intensity.
In some embodiments, the dynamic range of pixel points imageable at the fourth exposure intensity I1 may be determined in the frequency distribution graph of fig. 17 according to the ideal exposure intensities corresponding to the fifth pixel points. It will be appreciated that all pixel points within the dynamic range of I1 can be imaged at the fourth exposure intensity I1.
Similarly, the dynamic ranges of pixel points imageable at the other exposure intensities I2 and I3 can be determined by the above method.
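A sketch of deriving the imageable dynamic range of one exposure intensity from the non-overexposed and non-underexposed pixel points; the 8-bit overexposure/underexposure thresholds are illustrative assumptions.

```python
def imageable_range(pixels, over=250, under=5):
    """pixels: list of (gray_value, ideal_intensity) pairs captured at one
    exposure intensity. The ideal intensities of the brightest non-overexposed
    and darkest non-underexposed pixels bound the dynamic range imageable at
    that exposure (a brighter pixel needs less exposure, so its ideal
    intensity sits at the low end of the range)."""
    valid = [(g, i) for g, i in pixels if under < g < over]
    if not valid:
        return None
    brightest = max(valid, key=lambda p: p[0])
    darkest = min(valid, key=lambda p: p[0])
    lo, hi = sorted((brightest[1], darkest[1]))
    return (lo, hi)
```

Pixels outside the returned interval on the frequency distribution graph would need a different exposure intensity to be imaged.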
In some embodiments, the target object is subjected to grouping statistics based on the ideal exposure intensity, and a frequency distribution graph of the number of pixels relative to the ideal exposure intensity as shown in fig. 12 can be obtained.
FIG. 18 is a flowchart of step S4 of FIG. 3 in an exemplary embodiment. Referring to fig. 18, the above-mentioned step S4 may include the following steps.
In step S44, a second exposure number is acquired.
In step S45, the frequency distribution map is equally divided by area according to the second exposure number to obtain a second division map.
In step S46, ideal exposure intensities corresponding to the centers of gravity of the respective second partial graphs are acquired.
In step S47, the ideal exposure setting is determined according to the ideal exposure intensity corresponding to the center of gravity of the second bisection map.
In some embodiments, the ideal exposure intensity corresponding to the center of gravity of each second partial graph can be directly set as the ideal exposure setting.
The technical scheme provided by the embodiment can visually and effectively determine the ideal exposure setting from the frequency distribution map, and can ensure that most of pixel points of the target object can be imaged under the ideal exposure setting.
In some embodiments, the ideal exposure setting includes a sixth exposure intensity and a seventh exposure intensity, and the target object includes a sixth pixel point.
In some embodiments, the desired exposure setting may include a plurality of exposure intensities, each of which may overlap pixels imageable.
The embodiment adopts the following technical scheme to solve the problem of pixel point overlapping.
FIG. 19 illustrates a method for three-dimensional reconstruction, according to an exemplary embodiment. Referring to fig. 19, the above-described method for three-dimensional reconstruction may include the following steps.
In step S5, the target measurement image is projected to the sixth pixel point.
In step S6, the sixth pixel is synchronously photographed according to the sixth exposure intensity to obtain a fifth gray scale value, where the fifth gray scale value is not overexposed.
In step S7, the sixth pixel is synchronously photographed through the seventh exposure intensity to obtain a sixth gray scale value, where the sixth gray scale value is not overexposed.
In step S8, if the fifth grayscale value is greater than the sixth grayscale value, the three-dimensional reconstruction is performed on the sixth pixel point according to the sixth exposure intensity by the structured light technique.
According to the technical scheme provided by the embodiment, the characteristic that the larger the gray value is, the better the three-dimensional reconstruction effect is fully considered, the exposure intensity corresponding to the maximum gray value is selected from the multiple overlapped exposure intensity groups for three-dimensional reconstruction, and the precision of the three-dimensional reconstruction is ensured.
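Steps S5 to S8 amount to picking, per overlapped pixel point, the exposure intensity whose non-overexposed gray value is largest; a minimal sketch, where the overexposure threshold is an assumed 8-bit value:

```python
def choose_exposure(gray_by_exposure, over=250):
    """gray_by_exposure: mapping from exposure intensity label to the gray
    value the pixel produced at that intensity. Overexposed readings are
    discarded; among the rest, the intensity with the largest gray value is
    selected, since larger gray values reconstruct more accurately."""
    valid = {e: g for e, g in gray_by_exposure.items() if g < over}
    return max(valid, key=valid.get)
```

For a pixel reading 120 at the sixth intensity and 90 at the seventh, the sixth intensity is used for reconstruction; if the sixth reading were saturated, the seventh would be chosen instead.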
FIG. 20 is an illustration of a system for three-dimensional reconstruction, in accordance with an exemplary embodiment. Referring to fig. 20, the above-described system for three-dimensional reconstruction may include: a target projection device 2001, a target image capturing device 2002, and an ideal exposure setting determination device 2003.
Wherein the target projecting device 2001 can project the first prediction image to the target object; the target image acquisition device 2002 may synchronously photograph the target object through a first exposure intensity while projecting the first prediction image to the target object to obtain a target gray value of each pixel of the target object; the ideal exposure setting determination device 2003 may determine ideal exposure intensities corresponding to the respective pixel points of the target object according to ideal gray values, the first exposure intensities, and target gray values of the respective pixel points, perform group statistics on the respective pixel points of the target object according to the ideal exposure intensities to obtain group statistical results, and determine ideal exposure settings of the target object according to the group statistical results, so as to perform three-dimensional reconstruction on the target object according to the ideal exposure settings by a structured light technique.
In some embodiments, the target projection device 2001 may be any device capable of image projection, such as a projector; the target image capturing device 2002 may be any device capable of capturing an image, such as an area array image sensor or a linear array image sensor; the ideal exposure setting determination device 2003 may be any unit capable of performing the calculation, such as a server, an electronic device with computing capability, embedded system firmware, or a logic circuit (e.g., an FPGA (Field-Programmable Gate Array) or an ASIC (Application Specific Integrated Circuit)). The determination of the ideal exposure setting may be carried out by a single carrier or by a combination of multiple carriers. Preferably, the exposure setting prediction unit is implemented by a logic circuit such as an FPGA or an ASIC, which provides stronger parallel computing capability and shorter processing time.
The system for three-dimensional reconstruction provided by the embodiment can automatically adjust exposure setting when needed, reduces manual intervention, and improves imaging quality.
In some embodiments, if the first prediction images of the pixel points of the target object are different (i.e., there may be more than one first prediction image), each image may be stored in the memory during shooting, and then image data belonging to different pictures but having the same pixel coordinates may be transmitted adjacently to the exposure setting prediction unit (which corresponds to the ideal exposure setting determination device described above). For example, let the first prediction images comprise n pictures in total, each denoted Pi (i ∈ [1, n], i ∈ N); in picture Pi, the pixel point at pixel coordinates (X, Y) is denoted Pi(X, Y). The transmission order is then as follows:
P1(X,Y),P2(X,Y),P3(X,Y)……,Pn(X,Y),
P1(X+1,Y),P2(X+1,Y),P3(X+1,Y)……,Pn(X+1,Y)……
In some embodiments, to save the computing resources, the gray values of the same pixel under the respective first prediction images may be simultaneously input to the ideal exposure setting determination device to determine the ideal exposure intensity of the pixel, as shown in fig. 21.
Since predicting the optimal exposure intensity of a pixel point requires that pixel point's value in every predicted captured image, the above method simplifies the structure of the prediction unit without adding a buffer memory at the resolution level of a full picture.
The memory bit width may be greater than the bit width of the image data of one pixel. If only one pixel of data is read at a time, memory bandwidth may be wasted. To make full use of the memory bandwidth, in another embodiment each image is stored in the memory during shooting, then several adjacent pixels are treated as a pixel group, and the image data of the same pixel group belonging to different pictures are transmitted adjacently to the image data input interface of the exposure setting prediction unit. For example, 4 adjacent pixels may form a pixel group, as shown below.
P1(X,Y), P1(X+1,Y), P1(X+2,Y), P1(X+3,Y),
P2(X,Y), P2(X+1,Y), P2(X+2,Y), P2(X+3,Y),
……
Pn(X,Y), Pn(X+1,Y), Pn(X+2,Y), Pn(X+3,Y),
P1(X+4,Y), P1(X+5,Y), P1(X+6,Y), P1(X+7,Y),
P2(X+4,Y), P2(X+5,Y), P2(X+6,Y), P2(X+7,Y),
……
Pn(X+4,Y), Pn(X+5,Y), Pn(X+6,Y), Pn(X+7,Y), ……
As shown in fig. 22, image data of the same pixel group may be adjacently transmitted.
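The pixel-group transmission order described above can be sketched as an enumeration; the row index Y is held fixed and the group size of 4 follows the example (both are assumptions of the sketch, not constraints of the design):

```python
def transmission_order(n_pictures, width, group=4):
    """Enumerate (picture, x) pairs in the memory-friendly order: the same
    group of adjacent pixels is read from every picture P1..Pn before moving
    on to the next pixel group along the row."""
    order = []
    for x0 in range(0, width, group):
        for p in range(1, n_pictures + 1):
            for x in range(x0, min(x0 + group, width)):
                order.append((p, x))
    return order
```

Each burst of `group` consecutive pixels from one picture fills the wide memory word, and all pictures' data for that group arrive adjacently at the prediction unit's input interface.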
In some embodiments, the ideal exposure intensity value of each pixel point and/or the predictable state information (e.g., whether overexposure or underexposure occurs) of each pixel may also be directly output and displayed to the user, so that the user can know the optimal exposure intensity distribution information of the detected scene conveniently, and the user can be helped to adjust the exposure setting more accurately and more quickly in a manual manner.
In some embodiments, the process of determining the ideal exposure setting of the target object according to the ideal exposure intensity of each pixel point may be referred to as one-time exposure setting prediction. The timing and manner in which the exposure prediction is triggered may be varied for the system for three-dimensional reconstruction as proposed by the present disclosure.
In some embodiments, the prediction of the ideal exposure setting may be triggered manually, and the system retrieves the ideal exposure setting only when deemed necessary by the user and applies it to subsequent shots. This approach is closest to the traditional workflow of manual intervention to adjust exposure parameters, but the ideal exposure setting determination device gives more accurate results and a shorter process time than by manually finding the preferred exposure setting. This may be appropriate for applications where the scene change to be measured is small and the exposure settings need to be adjusted only rarely.
In some embodiments, one exposure prediction may be automatically triggered before one three-dimensional reconstruction is performed, and then the acquired ideal exposure setting is applied to the one three-dimensional reconstruction. The method has the advantage that exposure setting of three-dimensional reconstruction can be automatically adjusted according to scenes, and optimal imaging integrity is obtained.
In some embodiments, the one-time exposure prediction is automatically triggered when the imaging integrity of one three-dimensional reconstruction is lower than a certain threshold (for example, the number of pixels finally imaged by the target object is less than a preset threshold) or after the imaging integrity is deteriorated by more than a certain threshold compared with the last three-dimensional reconstruction (for example, when the number of pixels imaged by the next three-dimensional reconstruction is much lower than the number of pixels imaged in the last three-dimensional reconstruction process). The method only starts exposure prediction when needed, can save time compared with the method that exposure prediction is triggered in each shooting, and can automatically adapt to the situation that a scene to be detected is continuously changed on the application occasion needing continuous shooting.
In some embodiments, one exposure prediction is automatically triggered when the imaging integrity of one three-dimensional reconstruction is below a certain threshold or is more than a certain deterioration compared to the last three-dimensional reconstruction, and after the exposure prediction is completed, the preferred exposure setting is applied and one three-dimensional shot is re-taken. The method can start exposure prediction as required, save time, and adapt to the detected scene with large difference during discontinuous shooting.
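The trigger conditions described in the last two embodiments might be sketched as follows; the threshold values are illustrative assumptions.

```python
def should_trigger_prediction(imaged_pixels, last_imaged_pixels,
                              min_pixels=10000, max_drop=0.2):
    """Re-run exposure setting prediction when imaging integrity falls below
    an absolute floor, or degrades by more than max_drop relative to the
    previous three-dimensional reconstruction."""
    if imaged_pixels < min_pixels:
        return True
    if last_imaged_pixels and \
            (last_imaged_pixels - imaged_pixels) / last_imaged_pixels > max_drop:
        return True
    return False
```

This keeps prediction off during stable continuous shooting and starts it only when the measured scene has changed enough to hurt imaging integrity.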
In most applications, the structured light system includes both areas of interest to the user and areas of no interest to the user within the field of view. As shown in fig. 23, a workpiece to be tested is placed on the carrier, and the field of view of the structured light system includes the carrier. The measured area of the measured workpiece is the area of interest of the user, and the non-measured areas of the carrier and the measured workpiece are the areas of no interest of the user. For the region of interest, imaging completeness and imaging accuracy are very important, while for the region of no interest, imaging completeness and imaging accuracy do not affect the result of system operation, but only affect the aesthetic appearance of the imaging result.
In some applications, which do not require an attractive imaging result, the exposure setting prediction process may be developed only for the region of interest, so as to improve the system operation speed and the imaging accuracy and integrity of the region of interest of the user.
FIG. 24 is a block diagram illustrating an apparatus for three-dimensional reconstruction in accordance with an exemplary embodiment. Referring to fig. 24, an apparatus 2400 for three-dimensional reconstruction provided in an embodiment of the present disclosure may include: a target gray value obtaining module 2401, an ideal exposure intensity obtaining module 2402, a grouping statistic result obtaining module 2403 and an ideal exposure setting obtaining module 2404.
The target gray-scale value obtaining module 2401 may be configured to project a first prediction image to a target object, and synchronously shoot the target object through a first exposure intensity to obtain a target gray-scale value of each pixel of the target object. The ideal exposure intensity obtaining module 2402 may be configured to determine an ideal exposure intensity corresponding to each pixel of the target object according to an ideal gray value, the first exposure intensity, and a target gray value of each pixel. The grouping statistic result obtaining module 2403 may be configured to perform grouping statistics on each pixel point of the target object according to the ideal exposure intensity to obtain a grouping statistic result. The ideal exposure setting acquisition module 2404 may be configured to determine an ideal exposure setting of the target object according to the grouping statistics, so as to perform three-dimensional reconstruction on the target object through a structured light technique according to the ideal exposure setting.
In some embodiments, the target object includes a first pixel point, and the target grayscale value corresponding to the first pixel point is a first grayscale value.
In some embodiments, the ideal exposure intensity acquisition module 2402 may include: a first target exposure intensity acquisition sub-module and a first ideal exposure intensity acquisition sub-module.
The first target exposure intensity acquisition sub-module may be configured to obtain, according to the ideal gray value, the first gray value, and the first exposure intensity, a first target exposure intensity required by the first pixel point to reach the ideal gray value under the target measurement image. The first ideal exposure intensity acquisition sub-module may be configured to determine the ideal exposure intensity of the first pixel point according to the first target exposure intensity.
The first ideal exposure intensity acquisition sub-module may include: a second gray value acquisition unit, a second target exposure intensity acquisition unit, and an ideal exposure intensity acquisition unit.
The second gray value acquisition unit may be configured to project the first prediction image onto the first pixel point and synchronously shoot the first pixel point at a second exposure intensity to obtain a second gray value; the second target exposure intensity acquisition unit may be configured to obtain, according to the ideal gray value, the second gray value, and the second exposure intensity, a second target exposure intensity required by the first pixel point to reach the ideal gray value under the target measurement image; and the ideal exposure intensity acquisition unit may be configured to determine the ideal exposure intensity of the first pixel point if the first target exposure intensity is greater than the second target exposure intensity.
In some embodiments, the target object includes a second pixel, the target gray value corresponding to the second pixel is a third gray value, and the first predicted image is different from the target measurement image of the second pixel.
In some embodiments, the ideal exposure intensity acquisition module 2402 may include: an enhancement coefficient ratio determination sub-module and a second ideal exposure intensity acquisition sub-module.
Wherein the enhancement coefficient ratio determining sub-module may be configured to determine a gray enhancement coefficient ratio of the target measurement image of the second pixel point and the first prediction image. The second ideal exposure intensity obtaining submodule may be configured to obtain, according to an ideal gray value, the third gray value, the first exposure intensity, and the gray enhancement coefficient ratio, an ideal exposure intensity required by the second pixel point when the ideal gray value is obtained based on the target measurement image.
In some embodiments, the enhancement factor ratio determination submodule may include: the device comprises a first average gray value determining unit, a second average gray value determining unit and a gray enhancement coefficient ratio determining unit.
The first average gray value determining unit may be configured to project the first prediction image onto a target object with uniform light reflection rate, and synchronously shoot the target object through a third exposure intensity, so as to obtain a first average gray value of the target object. The second average gray value determining unit may be configured to project the target measurement image of the second pixel point onto the target object, and synchronously shoot the target object according to the third exposure intensity, so as to obtain a second average gray value of the target object. The gray enhancement coefficient ratio determining unit may be configured to determine a gray enhancement coefficient ratio of the target measurement image of the second pixel point and the first prediction image according to a ratio of the first average gray value and the second average gray value.
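Assuming a linear camera response, which this disclosure does not state explicitly, the enhancement-coefficient ratio and its use in predicting an ideal exposure intensity might be sketched as follows; both function signatures and the direction of the ratio are illustrative assumptions.

```python
def gray_enhancement_ratio(prediction_grays, measurement_grays):
    """Ratio of the mean gray of the target measurement image to the mean
    gray of the first prediction image, both shot at the same (third)
    exposure intensity on a uniformly reflective target object."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(measurement_grays) / mean(prediction_grays)

def ideal_intensity(ideal_gray, measured_gray, exposure, ratio):
    """Linear-response sketch: exposure intensity at which the pixel point,
    imaged under the measurement image (brighter by `ratio` than the
    prediction image it was measured with), would reach the ideal gray."""
    return exposure * ideal_gray / (measured_gray * ratio)
```

For instance, a pixel reading 64 under the prediction image at unit exposure, with ratio 1.0 and ideal gray 128, would need twice the exposure intensity.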
In some embodiments, the ideal exposure setting acquisition module 2404 may include: the system comprises a first candidate exposure setting acquisition sub-module, an imaging quality determination sub-module and a first ideal exposure setting determination sub-module.
Wherein the first candidate exposure setting acquisition sub-module may be configured to acquire a candidate exposure setting. The imaging quality determination sub-module may be configured to determine an imaging quality of the target object at the candidate exposure setting based on the grouping statistics. The first ideal exposure setting determination sub-module may be configured to determine an ideal exposure setting of the target object among candidate exposure settings according to the imaging quality.
In some embodiments, the grouping statistic is a frequency distribution graph of the number of pixels relative to the ideal exposure intensity.
In some embodiments, the first candidate exposure setting acquisition sub-module may include: a first exposure number acquisition unit, a first score map acquisition unit, and a first candidate setting generation unit.
Wherein the first exposure number acquisition unit may be configured to acquire a first exposure number. The first histogram acquisition unit may be configured to equally divide the frequency distribution map by area according to the first exposure number to acquire a first histogram. The first candidate setting generation unit may be configured to acquire ideal exposure intensities corresponding to the barycenter of the respective first partial graphs to generate the candidate exposure settings.
In some embodiments, the imaging quality determination sub-module may include: a third pixel point obtaining unit and an imaging quality determining unit.
The third pixel point obtaining unit may be configured to obtain a third pixel point that the target object may be imaged in the candidate exposure setting. The imaging quality determination unit may be configured to determine the imaging quality of the target object at the candidate exposure setting from the third pixel point.
In some embodiments, the candidate exposure setting comprises a fourth exposure intensity at which a fourth pixel point in the target object may be imaged.
In some embodiments, the imaging quality determination unit may include: the device comprises a target difference value acquisition subunit, a weight determination subunit and an imaging quality determination subunit.
Wherein the target difference acquiring subunit may be configured to acquire a target difference between the ideal exposure intensity corresponding to the fourth pixel point and the fourth exposure intensity. The weight determination subunit may be configured to determine the weight of the fourth pixel point according to the target difference value. The imaging quality determination subunit may be configured to perform number weighted summation on the fourth pixel point according to the weight, and determine the imaging quality of the fourth exposure intensity.
In some embodiments, the imaging quality determination unit further comprises: the fourth gray value determining subunit, the fifth pixel point obtaining subunit and the fourth pixel point determining subunit.
The fourth gray value determining subunit may be configured to project a second prediction image onto the target object and synchronously capture the target object at the fourth exposure intensity to obtain a fourth gray value, where the fourth gray value is neither overexposed nor underexposed. The fifth pixel point obtaining subunit may be configured to obtain the fifth pixel points corresponding to the maximum non-overexposed gray value and the minimum non-underexposed gray value among the fourth gray values. The fourth pixel point determination subunit may be configured to determine, based on the ideal exposure intensities corresponding to the fifth pixel points, the fourth pixel points of the target object imageable at the fourth exposure intensity according to the grouping statistical result.
In some embodiments, the grouping statistic is a frequency distribution graph of the number of pixels relative to the ideal exposure intensity.
In some embodiments, the ideal exposure setting acquisition module 2404 includes: a second exposure time acquisition sub-module, a second bisection map acquisition sub-module, a third ideal exposure intensity acquisition sub-module and a center of gravity determination sub-module.
Wherein the second exposure number acquisition submodule may be configured to acquire a second exposure number. The second histogram obtaining sub-module may be configured to obtain a second histogram by equally dividing the frequency histogram by area according to the second exposure number. The third ideal exposure intensity acquisition submodule may be configured to acquire an ideal exposure intensity corresponding to the center of gravity of each of the second partial graphs. The center of gravity determination submodule may be configured to determine the ideal exposure setting from an ideal exposure intensity corresponding to the center of gravity of the second partial map.
In some embodiments, the ideal exposure setting includes a sixth exposure intensity and a seventh exposure intensity, and the target object includes a sixth pixel point.
In some embodiments, the ideal exposure setting acquisition module 2404 may include: a target measurement image projection submodule, a fifth gray value acquisition submodule, a sixth gray value acquisition submodule, and a three-dimensional reconstruction submodule.
Wherein the target measurement image projection submodule may be configured to project a target measurement image to the sixth pixel point. The fifth gray value obtaining submodule may be configured to perform synchronous shooting on the sixth pixel point according to the sixth exposure intensity to obtain a fifth gray value, where the fifth gray value is not overexposed. The sixth gray value obtaining submodule may be configured to perform synchronous shooting on the sixth pixel point according to the seventh exposure intensity to obtain a sixth gray value, where the sixth gray value is not overexposed. The three-dimensional reconstruction submodule may be configured to perform the three-dimensional reconstruction on the sixth pixel point by using the structured light technology according to the sixth exposure intensity if the fifth grayscale value is greater than the sixth grayscale value.
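The selection performed by the three-dimensional reconstruction submodule can be illustrated for a single pixel: of the synchronized captures, reconstruct with the exposure whose non-overexposed gray value is larger, since a brighter but unsaturated reading carries a stronger structured-light signal. The saturation threshold `over` and the fallback when both readings saturate are illustrative assumptions, not specified by the patent.

```python
def pick_reconstruction_exposure(gray_a, gray_b, exposure_a, exposure_b, over=250):
    """Sketch: choose which of two exposures to reconstruct one pixel with."""
    a_ok = gray_a < over  # reading at exposure_a is not overexposed
    b_ok = gray_b < over  # reading at exposure_b is not overexposed
    if a_ok and (not b_ok or gray_a > gray_b):
        return exposure_a
    if b_ok:
        return exposure_b
    return None  # both readings saturated: pixel cannot be reconstructed reliably
```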
Since each functional module of the apparatus 2400 for three-dimensional reconstruction according to the exemplary embodiment of the present disclosure corresponds to a step of the exemplary method embodiment described above, the details are not repeated here.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software alone, or by software in combination with necessary hardware. Accordingly, the technical solution of the embodiments of the present disclosure may be embodied as a software product, which may be stored in a non-volatile storage medium (e.g., a CD-ROM, a USB flash drive, or a removable hard disk) and includes several instructions for causing a computing device (e.g., a personal computer, a server, a mobile terminal, or a smart device) to execute the method according to the embodiments of the present disclosure, such as one or more of the steps shown in fig. 3.
Furthermore, the above-described figures are merely schematic illustrations of processes included in methods according to exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it will also be readily appreciated that these processes may be performed synchronously or asynchronously in multiple modules.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the disclosure is not limited to the details of construction, the arrangements of the drawings, or the manner of implementation that have been set forth herein, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims (18)

1. A method for three-dimensional reconstruction, comprising:
projecting a first predicted image to a target object, and synchronously shooting the target object through first exposure intensity to obtain a target gray value of each pixel point of the target object;
determining ideal exposure intensity corresponding to each pixel point of the target object according to the ideal gray value, the first exposure intensity and the target gray value of each pixel point;
performing grouping statistics on each pixel point of the target object according to the ideal exposure intensity to obtain a grouping statistical result;
and determining the ideal exposure setting of the target object according to the grouping statistical result so as to carry out three-dimensional reconstruction on the target object through a structured light technology according to the ideal exposure setting.
2. The method of claim 1, wherein the target object comprises a first pixel point, and a target gray value corresponding to the first pixel point is a first gray value; determining the ideal exposure intensity corresponding to each pixel point of the target object according to the ideal gray value, the first exposure intensity and the target gray value of each pixel point, including:
acquiring a first target exposure intensity required by the first pixel point when the ideal gray value is obtained based on a target measurement image according to the ideal gray value, the first gray value and the first exposure intensity;
and determining the ideal exposure intensity of the first pixel point according to the first target exposure intensity.
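Claim 2's per-pixel estimate can be written out under the common assumption of a linear camera response: the gray value scales proportionally with exposure intensity, so the exposure that would bring the measured gray to the ideal gray is the current exposure scaled by the gray-value ratio. The function name and the `eps` guard against division by zero are illustrative, not from the patent.

```python
def ideal_exposure(ideal_gray, measured_gray, exposure_intensity, eps=1e-6):
    # Linear-response assumption: gray ∝ exposure intensity, hence
    # E_ideal / E_current = G_ideal / G_measured.
    return exposure_intensity * ideal_gray / max(measured_gray, eps)
```

For example, a pixel that reads gray 64 at exposure intensity 10 would need intensity 20 to reach an ideal gray of 128.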
3. The method of claim 2, wherein determining the desired exposure level for the first pixel point based on the first target exposure level comprises:
projecting the first prediction image to the first pixel point, and synchronously shooting the first pixel point through second exposure intensity to obtain a second gray value;
according to the ideal gray value, the second gray value and the second exposure intensity, acquiring second target exposure intensity required by the first pixel point when the target measurement image is used for acquiring the ideal gray value;
and if the first target exposure intensity is greater than the second target exposure intensity, the first target exposure intensity is the ideal exposure intensity of the first pixel point.
4. The method according to claim 1, wherein the target object includes a second pixel, the target gray value corresponding to the second pixel is a third gray value, and the first predicted image is different from the target measurement image of the second pixel; determining the ideal exposure intensity corresponding to each pixel point of the target object according to the ideal gray value, the first exposure intensity and the target gray value of each pixel point, including:
determining the gray level enhancement coefficient ratio of the target measurement image of the second pixel point and the first prediction image;
and acquiring the ideal exposure intensity required by the second pixel point when the target measurement image is used for acquiring the ideal gray value according to the ideal gray value, the third gray value, the first exposure intensity and the gray enhancement coefficient ratio.
5. The method of claim 4, wherein determining the ratio of the gray scale enhancement coefficients of the target measurement image and the first prediction image at the second pixel point comprises:
projecting the first prediction image onto a target object with uniform reflectance, and synchronously shooting the target object through a third exposure intensity to obtain a first average gray value of the target object;
projecting the target measurement image of the second pixel point to the target object, and synchronously shooting the target object through the third exposure intensity to obtain a second average gray value of the target object;
and determining the gray enhancement coefficient ratio of the target measurement image of the second pixel point and the first prediction image according to the ratio of the first average gray value to the second average gray value.
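Claims 4-5 can be sketched as follows. The ratio calibration uses a uniformly reflective target captured at the same (third) exposure under both patterns; the cross-pattern exposure estimate then scales the linear-response formula by that ratio. The patent does not fix the direction in which the ratio enters the formula, so treat this as one plausible reading with illustrative function names.

```python
def enhancement_ratio(avg_gray_prediction, avg_gray_measurement):
    # Both patterns hit the same uniformly reflective target at the same
    # exposure, so the ratio of average grays isolates the patterns'
    # relative brightness (the gray enhancement coefficient ratio).
    return avg_gray_prediction / avg_gray_measurement

def ideal_exposure_cross_pattern(ideal_gray, gray, exposure_intensity, ratio):
    # Linear-response estimate, scaled by the ratio to compensate for the
    # brightness difference between prediction and measurement patterns.
    return exposure_intensity * ratio * ideal_gray / gray
```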
6. The method of claim 1, wherein determining an ideal exposure setting for the target object based on the grouping statistics comprises:
acquiring candidate exposure settings;
determining the imaging quality of the target object under the candidate exposure setting according to the grouping statistical result;
and determining the ideal exposure setting of the target object from the candidate exposure settings according to the imaging quality.
7. The method of claim 6, wherein the grouping statistic is a frequency distribution graph of the number of pixels relative to an ideal exposure intensity; wherein obtaining the candidate exposure setting comprises:
acquiring a first number of exposures;
dividing the frequency distribution graph into equal areas according to the first number of exposures to obtain first equal-division graphs;
and acquiring the ideal exposure intensity corresponding to the center of gravity of each first equal-division graph to generate the candidate exposure settings.
8. The method of claim 6, wherein determining the imaging quality of the target object at the candidate exposure setting based on the grouping statistics comprises:
acquiring a third pixel point which can be imaged by the target object under the candidate exposure setting;
and determining the imaging quality of the target object under the candidate exposure setting according to the third pixel point.
9. The method of claim 8, wherein the candidate exposure setting comprises a fourth exposure intensity at which a fourth pixel in the target object can be imaged; determining the imaging quality of the target object under the candidate exposure setting according to the third pixel point, wherein the determining comprises:
acquiring a target difference value between the ideal exposure intensity corresponding to the fourth pixel point and the fourth exposure intensity;
determining the weight of the fourth pixel point according to the target difference value;
and performing a weighted summation of the number of the fourth pixel points according to the weights, to determine the imaging quality at the fourth exposure intensity.
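Claim 9's weighting can be sketched with an assumed Gaussian falloff: pixels whose ideal exposure is close to the candidate contribute nearly a full count, distant ones contribute little. The patent only requires that the weight be derived from the difference between ideal and candidate exposure intensity, so the Gaussian form and the `sigma` parameter are illustrative choices.

```python
import numpy as np

def imaging_quality(ideal_exposures, candidate_exposure, sigma=1.0):
    # Per-pixel difference between ideal and candidate exposure intensity.
    diffs = np.asarray(ideal_exposures, dtype=float) - candidate_exposure
    # Gaussian weight (assumed): 1.0 at zero difference, decaying with |diff|.
    weights = np.exp(-0.5 * (diffs / sigma) ** 2)
    # Weighted count of imageable pixels = imaging quality score.
    return float(weights.sum())
```

A candidate exposure that sits near many pixels' ideal exposures thus scores higher than one far from all of them.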
10. The method of claim 9, further comprising:
projecting a second prediction image onto the target object, and synchronously shooting the target object through the fourth exposure intensity to obtain a fourth gray value;
acquiring fifth pixel points corresponding to the maximum and the minimum non-overexposed gray values among the fourth gray values;
and determining the fourth pixel point which can be imaged by the target object under the fourth exposure intensity according to the grouping statistical result based on the ideal exposure intensity corresponding to the fifth pixel point.
11. The method of claim 1, wherein the grouping statistic is a frequency distribution graph of the number of pixels relative to an ideal exposure intensity; wherein determining an ideal exposure setting for the target object based on the group statistics comprises:
acquiring a second number of exposures;
dividing the frequency distribution graph into equal areas according to the second number of exposures to obtain second equal-division graphs;
acquiring the ideal exposure intensity corresponding to the center of gravity of each second equal-division graph;
and determining the ideal exposure setting according to the ideal exposure intensities corresponding to the centers of gravity of the second equal-division graphs.
12. The method of claim 1, wherein the ideal exposure setting comprises a sixth exposure intensity and a seventh exposure intensity, and wherein the target object comprises a sixth pixel point; wherein three-dimensional reconstruction of the target object by structured light techniques according to the ideal exposure setting comprises:
projecting the target measurement image to the sixth pixel point;
synchronously shooting the sixth pixel point according to the sixth exposure intensity to obtain a fifth gray value, wherein the fifth gray value is not overexposed;
synchronously shooting the sixth pixel point according to the seventh exposure intensity to obtain a sixth gray value, wherein the sixth gray value is not overexposed;
and if the fifth gray value is larger than the sixth gray value, performing the three-dimensional reconstruction on the sixth pixel point by the structured light technology according to the sixth exposure intensity.
13. An apparatus for three-dimensional reconstruction, comprising:
the target gray value acquisition module is configured to project the first prediction image to a target object and synchronously shoot the target object through first exposure intensity so as to obtain a target gray value of each pixel point of the target object;
the ideal exposure intensity acquisition module is configured to determine ideal exposure intensity corresponding to each pixel point of the target object according to an ideal gray value, the first exposure intensity and a target gray value of each pixel point;
the grouping statistical result acquisition module is configured to perform grouping statistics on each pixel point of the target object according to the ideal exposure intensity so as to obtain a grouping statistical result;
and the ideal exposure setting acquisition module is configured to determine the ideal exposure setting of the target object according to the grouping statistical result so as to perform three-dimensional reconstruction on the target object through a structured light technology according to the ideal exposure setting.
14. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-12.
15. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, carries out the method according to any one of claims 1-12.
16. A logic circuit, comprising:
a programmable logic chip that, when operating, performs the method of any one of claims 1-12.
17. A system for three-dimensional reconstruction, comprising:
a target projection device configured to project a first prediction image onto a target object;
a target image acquisition device configured to, while the first prediction image is projected onto the target object, synchronously photograph the target object at a first exposure intensity to obtain a target gray value of each pixel point of the target object;
and an ideal exposure setting determining device configured to determine the ideal exposure intensity corresponding to each pixel point of the target object according to an ideal gray value, the first exposure intensity and the target gray value of each pixel point, perform grouping statistics on each pixel point of the target object according to the ideal exposure intensity to obtain a grouping statistical result, and determine the ideal exposure setting of the target object according to the grouping statistical result, so as to perform three-dimensional reconstruction on the target object through a structured light technology according to the ideal exposure setting.
18. The system of claim 17, wherein the ideal exposure setting determining device is a logic circuit.
CN202010350942.8A 2020-04-28 2020-04-28 Method, device and related equipment for three-dimensional reconstruction Active CN111540042B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010350942.8A CN111540042B (en) 2020-04-28 2020-04-28 Method, device and related equipment for three-dimensional reconstruction

Publications (2)

Publication Number Publication Date
CN111540042A true CN111540042A (en) 2020-08-14
CN111540042B CN111540042B (en) 2023-08-11

Family

ID=71975785

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010350942.8A Active CN111540042B (en) 2020-04-28 2020-04-28 Method, device and related equipment for three-dimensional reconstruction

Country Status (1)

Country Link
CN (1) CN111540042B (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060088210A1 (en) * 2004-10-21 2006-04-27 Microsoft Corporation Video image quality
US20090185800A1 (en) * 2008-01-23 2009-07-23 Sungkyunkwan University Foundation For Corporate Collaboration Method and system for determining optimal exposure of structured light based 3d camera
CN108742663A (en) * 2018-04-03 2018-11-06 深圳蓝韵医学影像有限公司 Exposure dose evaluation method, device and computer readable storage medium
US20180352134A1 (en) * 2017-06-02 2018-12-06 Apple Inc. Reducing Or Eliminating Artifacts In High Dynamic Range (HDR) Imaging
CN109510948A (en) * 2018-09-30 2019-03-22 先临三维科技股份有限公司 Exposure adjustment method, device, computer equipment and storage medium
CN110177221A (en) * 2019-06-25 2019-08-27 维沃移动通信有限公司 The image pickup method and device of high dynamic range images
WO2020042074A1 (en) * 2018-08-30 2020-03-05 深圳市大疆创新科技有限公司 Exposure adjustment method and apparatus

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
吴笛, 刘桂华, 刘先勇, 张启戎, 魏志勇, 高国防: "Three-Dimensional Measurement Method for Highly Reflective Surfaces Based on Structured Light Technology" *
官斌, 何大华: "High-Precision Three-Dimensional Reconstruction Method for Range-Gated Slice Images" *
焦阿敏, 董明利, 娄小平, 李伟仙: "Research on Fuzzy Adaptive Adjustment of Imaging Parameters for Light-Stripe Images" *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113014826A (en) * 2021-02-18 2021-06-22 科络克电子科技(上海)有限公司 Image photosensitive intensity parameter adjusting method, device, equipment and medium
CN113645459A (en) * 2021-10-13 2021-11-12 杭州蓝芯科技有限公司 High-dynamic 3D imaging method and device, electronic equipment and storage medium
CN113645459B (en) * 2021-10-13 2022-01-14 杭州蓝芯科技有限公司 High-dynamic 3D imaging method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN111540042B (en) 2023-08-11

Similar Documents

Publication Publication Date Title
US20190164257A1 (en) Image processing method, apparatus and device
CN105812675B (en) Method for generating HDR images of a scene based on a compromise between luminance distribution and motion
Grossberg et al. Modeling the space of camera response functions
US10255682B2 (en) Image detection system using differences in illumination conditions
US20150049215A1 (en) Systems And Methods For Generating High Dynamic Range Images
JP7327733B2 (en) Method, apparatus and computer readable medium for flicker reduction
JP2021526248A (en) HDR image generation from a single shot HDR color image sensor
US8619153B2 (en) Radiometric calibration using temporal irradiance mixtures
CN101326549B (en) Method for detecting streaks in digital images
US20170287157A1 (en) Three-dimensional shape measuring apparatus, three-dimensional shape measuring method, and storage medium
CN102023456B (en) Light metering weight regulating method and device thereof
CN111540042B (en) Method, device and related equipment for three-dimensional reconstruction
US20170307869A1 (en) Microscope and method for obtaining a high dynamic range synthesized image of an object
JP2019168862A (en) Processing equipment, processing system, imaging device, processing method, program, and recording medium
JP4176369B2 (en) Compensating digital images for optical falloff while minimizing changes in light balance
CN112200848B (en) Depth camera vision enhancement method and system under low-illumination weak-contrast complex environment
KR20220024255A (en) Methods and apparatus for improved 3-d data reconstruction from stereo-temporal image sequences
EP1976308B1 (en) Device and method for measuring noise characteristics of image sensor
CN115225820A (en) Automatic shooting parameter adjusting method and device, storage medium and industrial camera
US11871117B2 (en) System for performing ambient light image correction
CN110708471B (en) CCD self-correlation imaging system and method based on active illumination
CN117173324A (en) Point cloud coloring method, system, terminal and storage medium
KR101418521B1 (en) Image enhancement method and device by brightness-contrast improvement
JP4746761B2 (en) Radiation image processing apparatus, radiation image processing method, storage medium, and program
JP4181753B2 (en) Exposure determining apparatus and imaging apparatus provided with the exposure determining apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant