CN116958415A - Focusing three-dimensional reconstruction method and system based on structured light - Google Patents



Publication number
CN116958415A
Authority
CN
China
Prior art keywords: definition, position point, dimensional reconstruction, modulation degree, detection platform
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application number: CN202310764285.5A
Other languages: Chinese (zh)
Inventor
蒋斌峰
寇冠中
潘威
曹玲
卢盛林
Current Assignee: Guangdong OPT Machine Vision Co Ltd
Original Assignee: Guangdong OPT Machine Vision Co Ltd
Application filed by Guangdong OPT Machine Vision Co Ltd filed Critical Guangdong OPT Machine Vision Co Ltd
Priority to CN202310764285.5A priority Critical patent/CN116958415A/en
Publication of CN116958415A publication Critical patent/CN116958415A/en


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T2200/00 — Indexing scheme for image data processing or generation, in general
    • G06T2200/04 — Indexing scheme for image data processing or generation, in general involving 3D image data
    • G06T2200/08 — Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation

Abstract

The application relates to the technical field of computer vision and discloses a structured-light-based focused three-dimensional reconstruction method and system. By collecting images from the front of the measured object, the application can restore the three-dimensional structure of the measured object more completely; the smoother the surface and the stronger the diffuse reflection, the better the imaging effect. In addition, the sharpness measurement based on the four-step phase-shift method reduces dependence on the features surrounding each detection point, greatly reducing the method's limitations and improving the accuracy of three-dimensional reconstruction.

Description

Focusing three-dimensional reconstruction method and system based on structured light
Technical Field
The application relates to the technical field of computer vision, in particular to a focusing three-dimensional reconstruction method and a three-dimensional reconstruction system based on structured light.
Background
Three-dimensional reconstruction is a technique of reconstructing a three-dimensional virtual model of a real object in a computer based on a two-dimensional image and displaying the model on a computer screen. Three-dimensional reconstruction has been a research hotspot in the field of computer vision technology.
Conventional three-dimensional reconstruction methods can be classified as active or passive according to whether the sensor actively illuminates the object with a light source. In active methods, the sensor actively projects a signal onto the object and then analyzes the returned signal to obtain the object's three-dimensional information; common examples are structured light, TOF (Time of Flight) and triangulation. Passive methods obtain RGB images directly under ambient light and then analyze the images according to multi-view geometric principles to obtain the object's three-dimensional information; common examples are monocular vision, binocular/multi-view vision, and consumer-grade RGB-D cameras. Each of these methods has its own advantages, disadvantages and applicable range.
At present, conventional three-dimensional reconstruction methods have the following problems: because most pictures are taken from the side, occluded regions inevitably produce dead points after reconstruction, so there is no way to completely restore the three-dimensional structure of the object, and objects with strong specular reflection image poorly. Focus-based three-dimensional reconstruction methods on the market rely on pixel-level sharpness functions and on contour extraction from single-frame pictures; in practice, judging the sharpness at one position depends on the surrounding pixels, so these methods are highly limited and difficult to generalize.
Accordingly, improvements in the art are needed.
The above information is presented as background information only to aid in the understanding of the present disclosure and is not intended or admitted to be prior art relative to the present disclosure.
Disclosure of Invention
The application provides a focusing three-dimensional reconstruction method and a three-dimensional reconstruction system based on structured light, which are used for solving the defects in the prior art.
In order to achieve the above object, the present application provides the following technical solutions:
in a first aspect, the present application provides a method for focused three-dimensional reconstruction based on structured light, the method comprising:
controlling the detection platform to move to an initial height position along the Z axis, and controlling the projector to be turned on;
collecting a four-step phase fringe pattern of an object to be detected on the detection platform, and calculating a modulation degree pattern and a background definition pattern corresponding to the four-step phase fringe pattern; the modulation degree map comprises a modulation amplitude B of each position point (x, y), and the definition map comprises a definition value Q of each position point;
controlling the detection platform to move in height along the Z axis according to a set step length, and judging whether the moved detection platform exceeds a set maximum height;
if not, returning to the step of collecting a four-step phase fringe pattern of the object to be detected on the detection platform and calculating the modulation degree map and background definition map corresponding to the four-step phase fringe pattern;
if yes, obtaining n groups of modulation degree graphs and definition graphs which are in one-to-one correspondence with the height positions of the detection platform;
in each group of modulation degree diagram and definition diagram, carrying out weighted calculation on a modulation amplitude B and a definition value Q of each position point (x, y) to obtain a definition evaluation parameter P corresponding to each position point (x, y);
and traversing each group of modulation degree map and definition map, and solving the height position value corresponding to the maximum value among the n definition evaluation parameters P corresponding to each position point (x, y), so as to perform three-dimensional reconstruction of the object to be measured.
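The claimed procedure above can be sketched end-to-end as follows. This is a rough illustration only, not the patent's implementation: the `capture` callable is a hypothetical stand-in for the unspecified stage and camera drivers, the map formulas follow the standard four-step phase-shift relations, and the repeat-padding at the image border for Q is an assumption.

```python
import numpy as np

def modulation_map(i1, i2, i3, i4):
    # Standard four-step phase-shift modulation amplitude B(x, y).
    return 0.5 * np.sqrt((i4 - i2) ** 2 + (i1 - i3) ** 2)

def sharpness_map(a):
    # Q(x,y) = |A(x,y)-A(x+1,y)| * |A(x,y)-A(x,y+1)|; the last row and
    # column are repeat-padded so Q keeps the image shape (an assumption).
    dx = np.abs(np.diff(a, axis=0, append=a[-1:, :]))
    dy = np.abs(np.diff(a, axis=1, append=a[:, -1:]))
    return dx * dy

def scan(z0, z_max, step, capture, a=0.5, b=0.5):
    """Move the platform from z0 to z_max, score focus at each height,
    and return the per-pixel height of the best-focus layer."""
    heights, p_maps = [], []
    z = z0
    while z <= z_max:
        i1, i2, i3, i4 = capture(z)            # four phase-shifted frames
        B = modulation_map(i1, i2, i3, i4)
        A = (i1 + i2 + i3 + i4) / 4.0          # background intensity map
        P = a * B + b * sharpness_map(A)       # P = a*B + b*Q, a + b = 1
        p_maps.append(P)
        heights.append(z)
        z += step
    best = np.argmax(np.stack(p_maps), axis=0) # layer of max P per pixel
    return np.asarray(heights)[best]
```

With a synthetic `capture` that returns stronger fringe modulation at the in-focus height, `scan` returns that height for every pixel.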
Further, in the structured-light-based focused three-dimensional reconstruction method, the light intensity function of the sinusoidal grating projected by the projector is:
I(x,y)=A(x,y)+B(x,y)*cos(φ(x,y)+δ);
wherein I is the light intensity function, A is the background light intensity, B is the modulation amplitude of the fringes, φ(x,y) is the phase corresponding to the point (x,y), and δ is the phase-shift value.
Further, in the structured-light-based focused three-dimensional reconstruction method, the calculation formula of the modulation amplitude B of each position point (x, y) in the modulation degree map is:
B(x,y)=(1/2)*sqrt((I4(x,y)-I2(x,y))^2+(I1(x,y)-I3(x,y))^2);
where I1, I2, I3 and I4 are the light intensities of the four phase-shifted fringe patterns.
Further, in the structured-light-based focused three-dimensional reconstruction method, the calculation formula of the definition value Q of each position point in the definition map is as follows:
Q(x,y)=|A(x,y)-A(x+1,y)|*|A(x,y)-A(x,y+1)|。
Further, in the structured-light-based focused three-dimensional reconstruction method, the step of performing weighted calculation on the modulation amplitude B and the definition value Q of each position point (x, y) in each group of the modulation degree map and the definition map to obtain a definition evaluation parameter P corresponding to each position point (x, y) includes:
in each group of the modulation degree map and the definition map, the modulation amplitude B and the definition value Q of each position point (x, y) are weighted and calculated by the following formula to obtain a definition evaluation parameter P corresponding to each position point (x, y):
P(x,y)=a*B(x,y)+b*Q(x,y);
where a and b are weight coefficients, a+b=1.
In a second aspect, the present application provides a structured light based focused three-dimensional reconstruction system, the system comprising:
the motion control module is used for controlling the detection platform to move to an initial height position along the Z axis and controlling the projector to be turned on;
the acquisition and calculation module is used for acquiring a four-step phase fringe pattern of the detected object on the detection platform and calculating a modulation degree pattern and a background definition pattern corresponding to the four-step phase fringe pattern; the modulation degree map comprises a modulation amplitude B of each position point (x, y), and the definition map comprises a definition value Q of each position point;
the movement judging module is used for controlling the detection platform to move in height along the Z axis according to a set step length and judging whether the moved detection platform exceeds a set maximum height; if not, returning to the step of collecting a four-step phase fringe pattern of the object to be detected on the detection platform and calculating the modulation degree map and background definition map corresponding to the four-step phase fringe pattern; if yes, obtaining n groups of modulation degree maps and definition maps in one-to-one correspondence with the height positions of the detection platform;
the weighting calculation module is used for carrying out weighting calculation on the modulation amplitude B and the definition value Q of each position point (x, y) in each group of the modulation degree diagram and the definition diagram to obtain a definition evaluation parameter P corresponding to each position point (x, y);
and the three-dimensional reconstruction module is used for traversing each group of modulation degree diagram and definition diagram, solving the height position value corresponding to the maximum value in the n definition evaluation parameters P corresponding to each position point (x, y) and carrying out three-dimensional reconstruction on the measured object.
Further, in the structured-light-based focused three-dimensional reconstruction system, the light intensity function of the sinusoidal grating projected by the projector is:
I(x,y)=A(x,y)+B(x,y)*cos(φ(x,y)+δ);
wherein I is the light intensity function, A is the background light intensity, B is the modulation amplitude of the fringes, φ(x,y) is the phase corresponding to the point (x,y), and δ is the phase-shift value.
Further, in the structured-light-based focused three-dimensional reconstruction system, the calculation formula of the modulation amplitude B of each position point (x, y) in the modulation degree map is:
B(x,y)=(1/2)*sqrt((I4(x,y)-I2(x,y))^2+(I1(x,y)-I3(x,y))^2);
where I1, I2, I3 and I4 are the light intensities of the four phase-shifted fringe patterns.
further, in the structured light-based focusing three-dimensional reconstruction system, a calculation formula of the definition value Q of each position point in the definition map is as follows:
Q(x,y)=|A(x,y)-A(x+1,y)|*|A(x,y)-A(x,y+1)|。
further, in the structured light-based focused three-dimensional reconstruction system, the weight calculation module is specifically configured to:
in each group of the modulation degree map and the definition map, the modulation amplitude B and the definition value Q of each position point (x, y) are weighted and calculated by the following formula to obtain a definition evaluation parameter P corresponding to each position point (x, y):
P(x,y)=a*B(x,y)+b*Q(x,y);
where a and b are weight coefficients, a+b=1.
In a third aspect, the present application provides a computer device comprising a memory storing a computer program and a processor which, when executing the computer program, implements the structured-light-based focused three-dimensional reconstruction method described in the first aspect above.
In a fourth aspect, the present application provides a storage medium containing computer executable instructions for execution by a computer processor to implement the structured light based focused three-dimensional reconstruction method as described in the first aspect above.
Compared with the prior art, the application has the following beneficial effects:
the focusing three-dimensional reconstruction method and the three-dimensional reconstruction system based on the structured light provided by the application adopt a mode of collecting images from the front, so that the three-dimensional structure of a measured object can be restored more comprehensively, the smoother the surface is, the stronger the diffuse reflection is, the better the imaging effect is, in addition, the dependence of the peripheral characteristics of a detection point can be reduced by the definition measurement based on a four-step phase shift method, the limitation is greatly reduced, and the accuracy of three-dimensional reconstruction can be improved.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a focusing three-dimensional reconstruction method based on structured light according to an embodiment of the application;
FIG. 2 is a schematic diagram of an optical path diagram according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a four-step phase fringe pattern in accordance with one embodiment of the application;
FIG. 4 is a schematic diagram of a modulation scheme according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a background sharpness map mentioned in one embodiment of the present application;
fig. 6 is a schematic diagram of calculating a height position value corresponding to a maximum sharpness evaluation parameter P according to the first embodiment of the present application;
fig. 7 is a schematic functional block diagram of a focusing three-dimensional reconstruction system based on structured light according to a second embodiment of the present application;
fig. 8 is a schematic structural diagram of a computer device according to a third embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application. In addition, as one of ordinary skill in the art can know, with technical development and new scenarios, the technical solution provided by the embodiment of the present application is also applicable to similar technical problems.
In the description of the present application, it is to be understood that all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs unless defined otherwise. Furthermore, any terminology used is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
In addition, numerous specific details are set forth in the following description in order to provide a better illustration of the application. It will be understood by those skilled in the art that the present application may be practiced without some of these specific details. In some instances, well known methods, procedures, components, and circuits have not been described in detail so as not to obscure the present application.
The technical scheme of the application is further described below by the specific embodiments with reference to the accompanying drawings.
Example 1
In view of the above drawbacks of existing three-dimensional reconstruction techniques, the inventors, drawing on years of practical experience and professional knowledge in design and manufacture and combining them with applied theory, actively studied and innovated in the hope of creating a technique that overcomes the drawbacks of the existing techniques and makes three-dimensional reconstruction more practical. After continuous research and design, and repeated trials and improvements, the present application, which has practical value, was finally created.
Referring to fig. 1, an embodiment of the present application provides a structured-light-based focused three-dimensional reconstruction method, which is suitable for scenes in which the three-dimensional appearance of an object to be detected is reconstructed, is particularly good at detecting and three-dimensionally restoring small, densely arranged objects, and can make up for the shortcomings of most current imaging modes. The method specifically comprises the following steps:
s101, controlling the detection platform to move to an initial height position along the Z axis, and controlling the projector to be opened.
It should be noted that the object to be detected is placed on the detection platform; the detection platform must be capable of precise positioning, and its initial height position is set to Z0.
The projector projects from the front of the measured object, so that the restored three-dimensional model does not have dead points caused by shielding.
The light intensity function of the sinusoidal grating projected by the projector is:
I(x,y)=A(x,y)+B(x,y)*cos(φ(x,y)+δ);
wherein I is the light intensity function, A is the background light intensity, B is the modulation amplitude of the fringes, φ(x,y) is the phase corresponding to the point (x,y), and δ is the phase-shift value.
S102, collecting a four-step phase fringe pattern of a detected object on the detection platform, and calculating a modulation degree pattern and a background definition pattern corresponding to the four-step phase fringe pattern; the modulation degree map includes a modulation amplitude B for each position point (x, y), and the sharpness map includes a sharpness value Q for each position point.
It should be noted that the projector first generates four grating images on the surface of the measured object (the light path is shown in fig. 2), and the four-step phase fringe patterns produced by reflection from the surface of the measured object are then collected by the camera. The four grating images are standard sinusoidal fringe gratings with a phase step of π/2 between them, that is, their phase shifts are 0, π/2, π and 3π/2 respectively, and the corresponding light intensities are respectively:
I1(x,y)=A(x,y)+B(x,y)*cos(φ(x,y));
I2(x,y)=A(x,y)+B(x,y)*cos(φ(x,y)+π/2);
I3(x,y)=A(x,y)+B(x,y)*cos(φ(x,y)+π);
I4(x,y)=A(x,y)+B(x,y)*cos(φ(x,y)+3π/2).
it can be understood that after the four-step phase fringe pattern of the measured object is acquired, the modulation degree pattern and the background definition pattern corresponding to the four-step phase fringe pattern can be obtained through algorithm processing, and in view of the fact that the content is realized in the prior art, the content is not the key point of the design of the scheme, and the method is not further described herein.
Illustratively, the four-step phase fringe pattern of the collected measured object is shown in fig. 3; calculating a modulation degree diagram corresponding to the four-step phase fringe diagram as shown in fig. 4; the calculated background definition map corresponding to the four-step phase fringe pattern is shown in fig. 5.
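As a rough, hypothetical illustration (the size, period and amplitude values below are illustrative, not from the patent), four phase-shifted sinusoidal gratings of the kind collected above can be simulated with NumPy:

```python
import numpy as np

def make_fringe_patterns(width=256, height=256, period=32, A=128.0, B=100.0):
    """Return the four gratings I = A + B*cos(phi + delta) for
    delta = 0, pi/2, pi, 3*pi/2; the phase phi varies along x."""
    x = np.arange(width)
    phi = 2 * np.pi * x / period
    deltas = (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)
    return [np.tile(A + B * np.cos(phi + d), (height, 1)) for d in deltas]
```

A quick sanity check on such patterns: frames with opposite phase shifts sum to twice the background (I1 + I3 = 2A), and the four-step modulation formula recovers B exactly.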
In this embodiment, the calculation formula of the modulation amplitude B of each position point (x, y) in the modulation degree map is:
B(x,y)=(1/2)*sqrt((I4(x,y)-I2(x,y))^2+(I1(x,y)-I3(x,y))^2);
where I1, I2, I3 and I4 are the light intensities of the four phase-shifted fringe patterns.
The calculation formula of the definition value Q of each position point in the definition map is as follows:
Q(x,y)=|A(x,y)-A(x+1,y)|*|A(x,y)-A(x,y+1)|。
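A minimal sketch of these two per-frame map computations, assuming the standard four-step phase-shift relations and repeat-padding at the image border (the patent does not specify edge handling):

```python
import numpy as np

def compute_maps(i1, i2, i3, i4):
    """Return (modulation map B, background map A, sharpness map Q)
    from the four phase-shifted frames i1..i4."""
    B = 0.5 * np.sqrt((i4 - i2) ** 2 + (i1 - i3) ** 2)
    A = (i1 + i2 + i3 + i4) / 4.0
    # Q(x,y) = |A(x,y)-A(x+1,y)| * |A(x,y)-A(x,y+1)|; the last row and
    # column are repeat-padded so Q keeps the image shape.
    dx = np.abs(np.diff(A, axis=0, append=A[-1:, :]))
    dy = np.abs(np.diff(A, axis=1, append=A[:, -1:]))
    return B, A, dx * dy
```

On synthetic frames with a known background and modulation, `compute_maps` recovers A and B exactly, and Q vanishes where the background is flat.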
S103, controlling the detection platform to move in height along the Z axis according to a set step length, and judging whether the moved detection platform exceeds a set maximum height; if not, the process returns to step S102, and if yes, step S104 is performed.
The set step length may be chosen by a technician according to empirical values and the actual application scenario; this embodiment is not specifically limited here.
It can be understood that after the detection platform is controlled to move in height along the Z axis from the initial height position Z0 by the set step length step, the height position of the detection platform is Z = Z0 + step; after the detection platform moves along the Z axis by the set step length step again, its height position is Z = Z0 + 2*step, and so on.
The maximum height to be set is Zmax, which can be arbitrarily set by a technician according to an empirical value and an actual application scenario, and the embodiment is not particularly limited herein.
In this embodiment, the detection platform does not move along the Z axis indefinitely; the movement ends when the height position Z of the detection platform exceeds Zmax. In addition to the acquisition performed when the detection platform is at the initial height position Z0, each time the detection platform moves (while Z ≤ Zmax), the four-step phase fringe pattern of the object to be measured must again be collected and the corresponding modulation degree map and background definition map calculated.
And S104, obtaining n groups of modulation degree graphs and definition graphs which are in one-to-one correspondence with the height positions of the detection platform.
n is a natural number greater than 1. For example, if the detection platform moves 3 times, its height positions include 3 positions in addition to the initial height position Z0; correspondingly, four-step phase fringe patterns of the measured object are collected 3 times, and 3 groups of modulation degree maps and background definition maps corresponding to the four-step phase fringe patterns are calculated.
And S105, in each group of modulation degree diagram and definition diagram, carrying out weighted calculation on the modulation amplitude B and the definition value Q of each position point (x, y) to obtain a definition evaluation parameter P corresponding to each position point (x, y).
In this embodiment, the step S105 may be further refined to include the following steps:
in each group of the modulation degree map and the definition map, the modulation amplitude B and the definition value Q of each position point (x, y) are weighted and calculated by the following formula to obtain a definition evaluation parameter P corresponding to each position point (x, y):
P(x,y)=a*B(x,y)+b*Q(x,y);
where a and b are weight coefficients, a+b=1.
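As a one-function sketch (the function name is illustrative), the weighting step amounts to a convex combination of the two cues: with a + b = 1, P lies between B and Q at every pixel.

```python
import numpy as np

def sharpness_parameter(B, Q, a=0.5, b=0.5):
    """P = a*B + b*Q; with a + b = 1 this is a convex combination
    of the modulation map B and the sharpness map Q."""
    assert abs(a + b - 1.0) < 1e-9, "weight coefficients must satisfy a+b=1"
    return a * B + b * Q
```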
It should be noted that in this embodiment, the sharpness measurement based on the four-step phase-shift method weights the peripheral features and the point's own features to obtain a new sharpness evaluation parameter P, so the dependence on features around the detection point is smaller, the independence and anti-interference capability are strong, and the limitations are greatly reduced.
S106, traversing each group of modulation degree diagram and definition diagram, and obtaining the height position value corresponding to the maximum value in n definition evaluation parameters P corresponding to each position point (x, y) so as to reconstruct the three-dimensional object to be measured.
For each position point (x, y), one sharpness evaluation parameter P is calculated per group of modulation degree map and sharpness map; that is, the number of sharpness evaluation parameters P calculated for each position point (x, y) equals the number of groups of modulation degree maps and sharpness maps. For example, when there are 3 groups of modulation degree maps and sharpness maps, there are 3 calculated sharpness evaluation parameters P corresponding to each position point (x, y).
In this embodiment, the maximum value needs to be obtained therefrom, and is set as Pmax, as shown in fig. 6, and then a height position value (i.e., zi in fig. 6) corresponding to the maximum value Pmax is determined as a true height position value of the position point (x, y), and after all the position points (x, y) have determined the true height position value, the three-dimensional reconstruction can be performed on the measured object.
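The per-pixel maximum search described above can be sketched as follows; `p_maps` and `z_values` are assumed to be the n sharpness-parameter maps and their matching platform heights.

```python
import numpy as np

def height_map(p_maps, z_values):
    """Per-pixel argmax over the n sharpness-parameter maps, returning
    the platform height Zi at which each pixel scored highest."""
    p_stack = np.stack(p_maps)          # shape (n, H, W)
    best = np.argmax(p_stack, axis=0)   # index of Pmax for each pixel
    return np.asarray(z_values)[best]   # corresponding height value
```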
Although terms such as three-dimensional, phase, modulation, sharpness and weighting are used frequently in the present application, the possibility of using other terms is not excluded. These terms are used merely for convenience in describing and explaining the essence of the application; they should not be construed as any additional limitation inconsistent with the spirit of the present application.
The structured-light-based focused three-dimensional reconstruction method provided by the application collects images from the front of the measured object, so the three-dimensional structure of the measured object can be restored more completely; the smoother the surface and the stronger the diffuse reflection, the better the imaging effect. In addition, the sharpness measurement based on the four-step phase-shift method reduces dependence on the features surrounding each detection point, greatly reducing limitations and improving the accuracy of three-dimensional reconstruction.
Example two
Referring to fig. 7, which is a schematic functional block diagram of a structured-light-based focused three-dimensional reconstruction system according to the second embodiment of the present application, the system is suitable for executing the structured-light-based focused three-dimensional reconstruction method provided by the first embodiment of the present application. The system specifically comprises the following modules:
the motion control module 201 is used for controlling the detection platform to move to an initial height position along the Z axis and controlling the projector to be turned on;
the acquisition and calculation module 202 is used for acquiring a four-step phase fringe pattern of the object to be detected on the detection platform and calculating a modulation degree pattern and a background definition pattern corresponding to the four-step phase fringe pattern; the modulation degree map comprises a modulation amplitude B of each position point (x, y), and the definition map comprises a definition value Q of each position point;
the movement judging module 203 is configured to control the detection platform to move in height along the Z axis according to a set step length, and judge whether the moved detection platform exceeds a set maximum height; if not, return to the step of collecting a four-step phase fringe pattern of the object to be detected on the detection platform and calculating the modulation degree map and background definition map corresponding to the four-step phase fringe pattern; if yes, obtain n groups of modulation degree maps and definition maps in one-to-one correspondence with the height positions of the detection platform;
the weighting calculation module 204 is configured to perform weighting calculation on the modulation amplitude B and the sharpness value Q of each location point (x, y) in each set of the modulation degree map and the sharpness map, so as to obtain a sharpness evaluation parameter P corresponding to each location point (x, y);
the three-dimensional reconstruction module 205 is configured to traverse each set of the modulation degree map and the sharpness map, and calculate a height position value corresponding to a maximum value in the n sharpness evaluation parameters P corresponding to each position point (x, y), so as to perform three-dimensional reconstruction on the measured object.
Preferably, in the structured-light-based focused three-dimensional reconstruction system, the light intensity function of the sinusoidal grating projected by the projector is:
I(x,y)=A(x,y)+B(x,y)*cos(φ(x,y)+δ);
wherein I is the light intensity function, A is the background light intensity, B is the modulation amplitude of the fringes, φ(x,y) is the phase corresponding to the point (x,y), and δ is the phase-shift value.
Preferably, in the structured-light-based focused three-dimensional reconstruction system, the calculation formula of the modulation amplitude B of each position point (x, y) in the modulation degree map is:
B(x,y)=(1/2)*sqrt((I4(x,y)-I2(x,y))^2+(I1(x,y)-I3(x,y))^2);
where I1, I2, I3 and I4 are the light intensities of the four phase-shifted fringe patterns.
preferably, in the structured light-based focusing three-dimensional reconstruction system, a calculation formula of the sharpness value Q of each position point in the sharpness map is:
Q(x,y)=|A(x,y)-A(x+1,y)|*|A(x,y)-A(x,y+1)|。
Preferably, in the structured-light-based focused three-dimensional reconstruction system, the weighting calculation module 204 is specifically configured to:
in each group of the modulation degree map and the definition map, the modulation amplitude B and the definition value Q of each position point (x, y) are weighted and calculated by the following formula to obtain a definition evaluation parameter P corresponding to each position point (x, y):
P(x,y)=a*B(x,y)+b*Q(x,y);
where a and b are weight coefficients, a+b=1.
The structured-light-based focused three-dimensional reconstruction system provided by the application collects images from the front of the measured object, so the three-dimensional structure of the measured object can be restored more completely; the smoother the surface and the stronger the diffuse reflection, the better the imaging effect. The sharpness measurement based on the four-step phase-shift method reduces dependence on the features surrounding each detection point, greatly reducing limitations and improving the accuracy of three-dimensional reconstruction.
The system can execute the method provided by any embodiment of the application, and has the corresponding functional modules and beneficial effects of executing the method.
Example III
Fig. 8 is a schematic structural diagram of a computer device according to a third embodiment of the present application. FIG. 8 illustrates a block diagram of an exemplary computer device 12 suitable for use in implementing embodiments of the present application. The computer device 12 shown in fig. 8 is merely an example and should not be construed as limiting the functionality and scope of use of embodiments of the present application.
As shown in FIG. 8, the computer device 12 is in the form of a general purpose computing device. Components of computer device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, a bus 18 that connects the various system components, including the system memory 28 and the processing units 16.
Bus 18 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA (EISA) bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
Computer device 12 typically includes a variety of computer system readable media. Such media can be any available media that is accessible by computer device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 30 and/or cache memory 32. The computer device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from or write to non-removable, nonvolatile magnetic media (not shown in FIG. 8, commonly referred to as a "hard disk drive"). Although not shown in fig. 8, a magnetic disk drive for reading from and writing to a removable non-volatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive for reading from or writing to a removable non-volatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In such cases, each drive may be coupled to bus 18 through one or more data medium interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of embodiments of the application.
A program/utility 40 having a set (at least one) of program modules 42 may be stored in, for example, memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment. Program modules 42 generally perform the functions and/or methods of the embodiments described herein.
The computer device 12 may also communicate with one or more external devices 15 (e.g., keyboard, pointing device, display 24, etc.), one or more devices that enable a user to interact with the computer device 12, and/or any devices (e.g., network card, modem, etc.) that enable the computer device 12 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 22. Moreover, computer device 12 may also communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the Internet, through network adapter 20. As shown, network adapter 20 communicates with other modules of computer device 12 via bus 18. It should be appreciated that although not shown in fig. 8, other hardware and/or software modules may be used in connection with computer device 12, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
The processing unit 16 executes various functional applications and data processing by running a program stored in the system memory 28, for example, to implement the structured light-based focused three-dimensional reconstruction method provided by the embodiment of the present application.
Example IV
A fourth embodiment of the present application provides a computer-readable storage medium having stored thereon computer-executable instructions that, when executed by a processor, implement a structured light-based focused three-dimensional reconstruction method as provided by all embodiments of the present application.
Any combination of one or more computer readable media may be employed. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
In view of the foregoing, it will be evident to a person skilled in the art that the foregoing detailed disclosure is presented by way of example only and is not limiting. Although not explicitly stated herein, those skilled in the art will appreciate that the present application is intended to embrace a variety of reasonable alterations, improvements and modifications to the embodiments. Such alterations, improvements and modifications are intended to be within the spirit and scope of the exemplary embodiments of the application.
Furthermore, certain terms in the present application have been used to describe embodiments of the present application. For example, "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. Thus, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various portions of this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined as suitable in one or more embodiments of the application.
It should be appreciated that in the foregoing description of embodiments of the application, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure. This does not mean, however, that the combination of these features is necessary; a person skilled in the art may, on reading this application, extract some of them as separate embodiments. That is, an embodiment of the present application may also be understood as an integration of multiple secondary embodiments, each of which contains less than all of the features of that single disclosed embodiment.
Each patent, patent application, publication of a patent application, and other material, such as articles, books, specifications, publications, documents and the like, cited herein is hereby incorporated by reference in its entirety for all purposes, except for any prosecution file history associated therewith, any of the same that is inconsistent with or in conflict with this document, and any of the same that may have a limiting effect on the broadest scope of the claims now or later associated with this document. For example, if there is any inconsistency or conflict between the description, definition, and/or use of a term associated with any of the incorporated materials and that associated with this document, the term in this document shall prevail.
Finally, it is to be understood that the embodiments of the application disclosed herein are illustrative of the principles of embodiments of the present application. Other modified embodiments are also within the scope of the application. Accordingly, the disclosed embodiments are illustrative only and not limiting. Those skilled in the art can adopt alternative configurations to implement the application of the present application according to embodiments of the present application. Accordingly, embodiments of the application are not limited to the embodiments precisely described in the application.

Claims (10)

1. A structured light-based focused three-dimensional reconstruction method, the method comprising:
controlling the detection platform to move to an initial height position along the Z axis, and controlling the projector to be opened;
collecting a four-step phase fringe pattern of the object to be detected on the detection platform, and calculating a modulation degree map and a background definition map corresponding to the four-step phase fringe pattern; the modulation degree map comprises the modulation amplitude B of each position point (x, y), and the definition map comprises the definition value Q of each position point;
controlling the detection platform to move in height along the Z axis according to a set step length, and judging whether the moved detection platform exceeds a set maximum height;
if not, returning to the step of collecting the four-step phase fringe pattern of the object to be detected on the detection platform and calculating the modulation degree map and the background definition map corresponding to the four-step phase fringe pattern;
if yes, obtaining n groups of modulation degree graphs and definition graphs which are in one-to-one correspondence with the height positions of the detection platform;
in each group of the modulation degree map and the definition map, carrying out weighted calculation on the modulation amplitude B and the definition value Q of each position point (x, y) to obtain a definition evaluation parameter P corresponding to each position point (x, y);
and traversing each group of the modulation degree map and the definition map, and finding the height position value corresponding to the maximum of the n definition evaluation parameters P corresponding to each position point (x, y), so as to perform three-dimensional reconstruction of the object to be measured.
2. The structured light-based focused three-dimensional reconstruction method according to claim 1, wherein the light intensity function of the sinusoidal grating projected by the projector is:
I(x, y) = A(x, y) + B(x, y)·cos(φ(x, y) + δ);
wherein I is the light intensity function, A is the background light intensity, B is the modulation amplitude of the fringes, φ(x, y) is the phase corresponding to the point (x, y), and δ is the moving phase value.
3. The structured light-based focusing three-dimensional reconstruction method according to claim 1, wherein the calculation formula of the modulation amplitude B of each position point (x, y) in the modulation degree map is:
B(x, y) = (1/2)·√{[I1(x, y) − I3(x, y)]² + [I2(x, y) − I4(x, y)]²};
wherein I1, I2, I3 and I4 are the four phase-shifted fringe images.
4. the structured light-based focused three-dimensional reconstruction method according to claim 1, wherein the calculation formula of the sharpness value Q of each position point in the sharpness map is:
Q(x,y)=|A(x,y)-A(x+1,y)|*|A(x,y)-A(x,y+1)|。
5. the method of claim 1, wherein the step of weighting the modulation amplitude B and the sharpness value Q of each location point (x, y) in each set of the modulation degree map and the sharpness map to obtain the sharpness evaluation parameter P corresponding to each location point (x, y) includes:
in each group of the modulation degree map and the definition map, the modulation amplitude B and the definition value Q of each position point (x, y) are weighted and calculated by the following formula to obtain a definition evaluation parameter P corresponding to each position point (x, y):
P(x,y)=a*B(x,y)+b*Q(x,y);
where a and b are weight coefficients, a+b=1.
6. A structured light based focused three-dimensional reconstruction system, the system comprising:
the action control module is used for controlling the detection platform to move to an initial height position along the Z axis and controlling the projector to be opened;
the acquisition and calculation module is used for collecting a four-step phase fringe pattern of the object to be detected on the detection platform, and calculating a modulation degree map and a background definition map corresponding to the four-step phase fringe pattern; the modulation degree map comprises the modulation amplitude B of each position point (x, y), and the definition map comprises the definition value Q of each position point;
the movement judging module is used for controlling the detection platform to move in height along the Z axis according to a set step length, and judging whether the moved detection platform exceeds the set maximum height; if not, returning to the step of collecting the four-step phase fringe pattern of the object to be detected on the detection platform and calculating the modulation degree map and the background definition map corresponding to the four-step phase fringe pattern; if yes, obtaining n groups of modulation degree maps and definition maps in one-to-one correspondence with the height positions of the detection platform;
the weighting calculation module is used for carrying out weighted calculation on the modulation amplitude B and the definition value Q of each position point (x, y) in each group of the modulation degree map and the definition map, to obtain a definition evaluation parameter P corresponding to each position point (x, y);
and the three-dimensional reconstruction module is used for traversing each group of the modulation degree map and the definition map, and finding the height position value corresponding to the maximum of the n definition evaluation parameters P corresponding to each position point (x, y), so as to perform three-dimensional reconstruction of the measured object.
7. The structured light based focused three-dimensional reconstruction system according to claim 6, wherein the light intensity function of the sinusoidal grating projected by the projector is:
I(x, y) = A(x, y) + B(x, y)·cos(φ(x, y) + δ);
wherein I is the light intensity function, A is the background light intensity, B is the modulation amplitude of the fringes, φ(x, y) is the phase corresponding to the point (x, y), and δ is the moving phase value;
the calculation formula of the modulation amplitude B of each position point (x, y) in the modulation degree map is:
B(x, y) = (1/2)·√{[I1(x, y) − I3(x, y)]² + [I2(x, y) − I4(x, y)]²};
the calculation formula of the definition value Q of each position point in the definition map is:
Q(x,y)=|A(x,y)-A(x+1,y)|*|A(x,y)-A(x,y+1)|。
8. the structured light based focused three-dimensional reconstruction system according to claim 6, wherein the weight calculation module is specifically configured to:
in each group of the modulation degree map and the definition map, the modulation amplitude B and the definition value Q of each position point (x, y) are weighted and calculated by the following formula to obtain a definition evaluation parameter P corresponding to each position point (x, y):
P(x,y)=a*B(x,y)+b*Q(x,y);
where a and b are weight coefficients, a+b=1.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the structured light based focused three-dimensional reconstruction method as defined in any one of claims 1-5 when the computer program is executed.
10. A storage medium containing computer executable instructions for execution by a computer processor to implement the structured light based focused three-dimensional reconstruction method of any one of claims 1-5.
CN202310764285.5A 2023-06-26 2023-06-26 Focusing three-dimensional reconstruction method and system based on structured light Pending CN116958415A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310764285.5A CN116958415A (en) 2023-06-26 2023-06-26 Focusing three-dimensional reconstruction method and system based on structured light

Publications (1)

Publication Number Publication Date
CN116958415A true CN116958415A (en) 2023-10-27

Family

ID=88457498

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310764285.5A Pending CN116958415A (en) 2023-06-26 2023-06-26 Focusing three-dimensional reconstruction method and system based on structured light

Country Status (1)

Country Link
CN (1) CN116958415A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination