CN114166146A - Three-dimensional measurement method and equipment based on construction of encoded image projection


Info

Publication number
CN114166146A
CN114166146A (application CN202111468643.5A; granted as CN114166146B)
Authority
CN
China
Prior art keywords
target object, light, image, coded, obtaining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111468643.5A
Other languages
Chinese (zh)
Other versions
CN114166146B (en)
Inventor
黎达
张志辉
高三山
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hong Kong Polytechnic University HKPU
Shenzhen Research Institute HKPU
Original Assignee
Hong Kong Polytechnic University HKPU
Shenzhen Research Institute HKPU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hong Kong Polytechnic University HKPU, Shenzhen Research Institute HKPU filed Critical Hong Kong Polytechnic University HKPU
Priority to CN202111468643.5A priority Critical patent/CN114166146B/en
Publication of CN114166146A publication Critical patent/CN114166146A/en
Application granted granted Critical
Publication of CN114166146B publication Critical patent/CN114166146B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention relates to the technical field of image processing, and in particular to a three-dimensional measurement method and equipment based on constructing an encoded image projection. The method comprises: acquiring parameter information of a target object; obtaining, according to the parameter information, matched coded light adapted to the target object; and directing the matched coded light at the target object to construct a corresponding target image under its illumination. Because the coded light, after being modulated by the target object's response, carries a large amount of the object's characteristic information, illuminating the target object with coded light improves the resolution of the corresponding target image.

Description

Three-dimensional measurement method and equipment based on construction of encoded image projection
Technical Field
The invention relates to the technical field of image processing, in particular to a three-dimensional measurement method and equipment based on construction of encoded image projection.
Background
When white light illuminates an object, the object responds with diffuse reflection, specular reflection, refraction, and so on; an optical instrument then collects the diffusely reflected, specularly reflected, and refracted light to complete the construction of an image of the object. However, because the response light an object produces under white light carries little of the object's characteristic information, the image constructed from that response light has low resolution.
In short, images constructed by prior-art methods have low resolution.
There is therefore a need for improvements and enhancements in the art.
Disclosure of Invention
To solve this technical problem, the invention provides a three-dimensional measurement method and equipment based on constructing an encoded image projection, addressing the low resolution of images constructed by the prior art.
To this end, the invention adopts the following technical scheme:
in a first aspect, the present invention provides a three-dimensional measurement method based on constructing a coded image projection, the three-dimensional measurement method including an image construction method, wherein the image construction method includes:
acquiring parameter information of a target object;
obtaining matched coded light adapted to the target object according to the parameter information of the target object;
and directing the matched coded light to the target object to construct a target image corresponding to the target object under the irradiation of the matched coded light.
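The three claimed steps can be sketched as a minimal pipeline. The callables below are hypothetical stand-ins for the hardware and the prior-knowledge lookup; none of these names appear in the patent.

```python
def build_target_image(target, get_params, match_light, illuminate):
    """Step 1: acquire parameter information; step 2: obtain matched coded
    light from it; step 3: illuminate and construct the target image."""
    params = get_params(target)        # acquire parameter information
    light = match_light(params)        # obtain matched coded light
    return illuminate(target, light)   # construct the target image

# Toy stand-ins that only illustrate the data flow.
image = build_target_image(
    target="lens",
    get_params=lambda t: {"shape": "freeform"},
    match_light=lambda p: {"pattern": "fan", "intensity": 0.7},
    illuminate=lambda t, l: f"image({t}, {l['pattern']})",
)
print(image)  # -> image(lens, fan)
```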
In one implementation, the obtaining the matched coded light adapted to the target object according to the parameter information of the target object includes:
obtaining the object shape in the parameter information according to the parameter information of the target object;
and obtaining matched coded light matched with the target object according to the shape of the object.
In one implementation, the obtaining the matched coded light adapted to the target object according to the parameter information of the target object includes:
according to the parameter information of the target object, the object surface roughness and the object shape in the parameter information are obtained;
and obtaining matched coded light matched with the target object according to the roughness of the surface of the object and the shape of the object.
In one implementation, the obtaining the matched coded light adapted to the target object according to the parameter information of the target object includes:
obtaining predictive coding light corresponding to the parameter information according to the parameter information of the target object;
the predicted coded light is emitted to the target object, and a test response result of the target object to the predicted coded light is obtained;
obtaining a test image corresponding to the target object according to the test response result;
and obtaining matched coded light adapted to the target object according to the test image.
In one implementation, the obtaining, according to the test image, matched coded light adapted to the target object includes:
obtaining pixel information corresponding to the test image according to the test image;
adjusting attribute information corresponding to the predictive coding light according to pixel information corresponding to the test image until pixel information corresponding to the test image collected under the irradiation of the predictive coding light after the attribute information is adjusted meets a set condition, and obtaining the predictive coding light after the attribute information is adjusted;
and obtaining matched coded light matched with the target object according to the predicted coded light after the attribute information is adjusted.
In one implementation, the adjusting, according to pixel information corresponding to the test image, attribute information corresponding to the predictive coding light until pixel information corresponding to the test image, which is acquired under illumination of the predictive coding light after the attribute information is adjusted, satisfies a set condition to obtain the predictive coding light after the attribute information is adjusted includes:
according to the attribute information corresponding to the predictive coding light, obtaining the illumination intensity and/or the light distribution mode in the attribute information and/or the incident angle of the predictive coding light to the target object;
according to the test response result, obtaining diffuse reflection light and/or specular reflection light and/or refraction light of the target object to the predictive coding light in the test response result;
obtaining a test response image formed by the diffuse reflection light and/or the specular reflection light and/or the refraction light in the test image according to the test image;
and adjusting the illumination intensity and/or the light distribution mode corresponding to the predictive coding light and/or the incident angle of the predictive coding light to the target object according to the pixel information corresponding to the test response image until the pixel information corresponding to the test response image collected under the illumination of the predictive coding light after the illumination intensity and/or the light distribution mode and/or the incident angle are adjusted meets a set condition, so as to obtain the predictive coding light after the illumination intensity and/or the light distribution mode and/or the incident angle are adjusted.
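The adjust-until-the-set-condition-holds loop above can be sketched as follows. The camera (`capture`) and the set condition (`pid_ok`) are hypothetical stubs, and only intensity is tuned here as a stand-in for intensity, distribution mode, and incident angle, which the patent adjusts individually or together.

```python
def adjust_until_condition(light, capture, pid_ok, max_iters=50, step=0.1):
    """Tune the predictive coded light until the pixel information of the
    captured test image satisfies the set condition, then return it."""
    for _ in range(max_iters):
        if pid_ok(capture(light)):          # condition met: stop adjusting
            break
        light = {**light, "intensity": light["intensity"] + step}
    return light

capture = lambda l: [l["intensity"]] * 4    # stub camera: flat test image
pid_ok = lambda img: sum(img) / len(img) >= 0.9   # stub set condition
tuned = adjust_until_condition({"intensity": 0.5}, capture, pid_ok)
```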
In one implementation, the directing the matching coded light to the target object to construct a target image corresponding to the target object under the illumination of the matching coded light includes:
acquiring each local image corresponding to the target object from each angle of the target object;
and constructing a three-dimensional target depth image in the target image according to each local image.
In a second aspect, an embodiment of the present invention further provides an apparatus for the three-dimensional measurement method based on constructing an encoded image projection, where the apparatus includes the following components:
the data acquisition module is used for acquiring parameter information of a target object;
the coded light generating module is used for obtaining matched coded light matched with the target object according to the parameter information of the target object;
and the image generation module is used for directing the matched coded light to the target object and constructing a target image corresponding to the target object under the irradiation of the matched coded light.
In a third aspect, an embodiment of the present invention further provides a terminal device, where the terminal device includes a memory, a processor, and an image construction program that is stored in the memory and is executable on the processor, and when the processor executes the image construction program, the steps of the image construction method are implemented.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, where an image construction program is stored on the computer-readable storage medium, and when the image construction program is executed by a processor, the image construction program implements the steps of the image construction method described above.
Advantageous effects: the invention illuminates the target object with coded light to form the target image; because the coded light, after being modulated by the target object's response, carries a large amount of the object's characteristic information, using coded light improves the resolution of the corresponding target image. In addition, the invention obtains coded light matched to the target object's parameter information, i.e. it determines the adapted coded light from prior knowledge of the target object; only by illuminating the target object with coded light adapted to it can the resolution of the resulting target image be further improved.
Drawings
FIG. 1 is an overall flow chart of the present invention;
FIG. 2 is the PID (pixel value distribution information) obtained under normal-light illumination in the embodiment;
FIG. 3 is the PID obtained under coded-light illumination in the embodiment;
fig. 4 is a schematic block diagram of an internal structure of a terminal device according to an embodiment of the present invention.
Detailed Description
The technical scheme of the invention is described clearly and completely below with reference to the embodiments and the accompanying drawings. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the invention.
Research shows that when white light illuminates an object, the object responds with diffuse reflection, specular reflection, refraction, and so on; an optical instrument then collects the diffusely reflected, specularly reflected, and refracted light to complete the construction of an image of the object. However, because the response light an object produces under white light carries little of the object's characteristic information, the image constructed from that response light has low resolution. In short, images constructed by prior-art methods have low resolution.
To solve this technical problem, the invention provides a three-dimensional measurement method and equipment based on constructing an encoded image projection. In specific implementation, the coded light matched to the target object is first determined from the object's parameter information; the determined coded light is then directed at the target object; finally, the target image formed by the object under that illumination is collected. The target image obtained in this way has higher resolution.
For example, a target object a' has parameter information a, and the coded light matched to parameter information a is A; a target object b' has parameter information b, and the coded light matched to parameter information b is B. Different parameter information, as catalogued by prior knowledge, is matched with different coded light: to acquire a high-resolution target image of target object a', coded light A must be used to illuminate it; likewise, target object b' must be illuminated with coded light B. Parameter information and coded light are thus in one-to-one correspondence.
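The one-to-one correspondence described above amounts to a lookup table. A minimal sketch, using the placeholder labels a/b and A/B from the text:

```python
# One-to-one correspondence between parameter information and coded light.
CODED_LIGHT_FOR = {"a": "A", "b": "B"}

def coded_light_for(param_info):
    # Raises KeyError when prior knowledge has no entry for this object.
    return CODED_LIGHT_FOR[param_info]
```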
Exemplary method
The three-dimensional measurement method based on the constructed encoded image projection of the embodiment can be applied to terminal equipment, and the terminal equipment can be a terminal product with a data processing function, such as a computer. In this embodiment, as shown in fig. 1, the image construction method specifically includes the following steps:
and S100, acquiring parameter information of the target object.
In this embodiment, the parameter information is geometric-size information. The target object is an optical device whose surface is a free-form surface; for example, the target object of the present embodiment may be a lens to be inspected. Optical free-form surfaces are widely used in the development of various products to achieve specially designed optical and mechanical functions, but their complexity, together with the low density and speed of free-form-surface measurement data, poses considerable challenges for the control and quality evaluation of the machining process.
And S200, obtaining matched coded light matched with the target object according to the parameter information of the target object.
Step S200 has two parts. In the first part, predictive coded light suitable for the target object is sought from the object's parameter information. In the second part, the target object is illuminated with the predictive coded light; the characteristic information of the image formed under this illumination is collected; and the predictive coded light is adjusted according to that characteristic information until it satisfies a set condition. The adjusted predictive coded light is then the matched coded light, i.e. the coded light used subsequently to construct the target image corresponding to the target object. Alternatively, the matched coded light may be obtained from the parameter information alone, in which case the second part is omitted.
When step S200 includes only the first part, step S200 includes steps S201 and S202 as follows:
s201, obtaining the object shape and/or the object surface roughness in the parameter information according to the parameter information of the target object.
S202, obtaining matched coded light matched with the target object according to the shape of the object and/or the roughness of the surface of the object.
Steps S201 and S202 obtain the matched coded light directly from the parameter information, i.e. with the second part omitted. The matched coded light suitable for the target object is obtained through the prior-knowledge correspondence between object shape, object surface roughness, and coded light, and comprises the intensity, incident angle, and distribution shape of the coded light matched to the target object. The distribution shape of the coded light may be, for example, a fan shape.
For example, prior knowledge may hold that an object with a round, rough surface suits coded light of intensity a1, incident angle a2, and light distribution mode a3, while an object with an oval, smooth surface suits coded light of intensity b1, incident angle b2, and light distribution mode b3. Once the shape and surface roughness of the target object are known, the coded light suitable for it can be selected on the basis of this prior knowledge.
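The prior-knowledge selection just described is a table keyed by shape and roughness. In this sketch, a1/a2/a3 and b1/b2/b3 are the unspecified placeholder values from the text, not real measurements:

```python
# Hypothetical prior-knowledge table: (shape, surface roughness) -> light attributes.
PRIOR_KNOWLEDGE = {
    ("round", "rough"):  {"intensity": "a1", "incident_angle": "a2", "mode": "a3"},
    ("oval",  "smooth"): {"intensity": "b1", "incident_angle": "b2", "mode": "b3"},
}

def select_coded_light(shape, roughness):
    """Steps S201-S202: read shape/roughness, then look up the matched light."""
    return PRIOR_KNOWLEDGE[(shape, roughness)]
```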
When step S200 is composed of the first part and the second part, step S200 includes steps S203, S204, S205, S206 as follows:
s203, obtaining the predictive coding light corresponding to the parameter information according to the parameter information of the target object.
In this embodiment, a predictive coded light suitable for a target object is selected based on parameter information including the shape and surface roughness of the target object. At this time, the intensity, the incident angle and the distribution form of the predictive coded light are only an approximate range, and the required matching coded light can be obtained by adjusting the intensity, the incident angle and the distribution form of the predictive coded light in subsequent steps.
S204, the predicted coded light is emitted to the target object, and a test response result of the target object to the predicted coded light is obtained.
In the present embodiment, when the predictive coded light strikes the target object, it interacts with the object and forms diffusely reflected light, specularly reflected light, and refracted light. Together, these constitute the target object's test response to the predictive coded light.
And S205, obtaining a test image corresponding to the target object according to the test response result.
The test image is formed by collecting the diffuse reflected light, the specular reflected light, and the refracted light in step S204.
S206, obtaining the matching coded light adapted to the target object according to the test image, wherein the step S206 includes the following steps S2061, S2062, S2063, S2064, S2065, and S2066:
s2061, obtaining the pixel information corresponding to the test image according to the test image.
The test image in this example is obtained under coded-light illumination rather than white light. Coded light is chosen over white light for the following reason:
the target object of the embodiment is a featureless free-form surface, so that the distribution of the PID (pixel value distribution information) is approximate under normal illumination, while the free-form surface is irradiated by coded light, and the PIDs of images formed by the free-form surface under different coded light irradiation are different, so that even if the PID changes due to slight change of the coded light, the PID changes, and the more obvious PID change, the coded light can be adjusted according to the change of the PID to ensure that the adjusted coded light is suitable for obtaining a target image subsequently. Fig. 2 is PID information obtained under normal illumination, fig. 3 is PID information obtained by irradiating a target object with coded light, and as can be seen from fig. 2 and fig. 3, the PID information obtained by the coded light is richer, and the richer the PID information is, the better the accuracy and efficiency of information matching are improved, and further the accuracy of 3D reconstruction is improved.
S2062, according to the attribute information corresponding to the predictive coding light, obtaining the illumination intensity and/or the light distribution mode in the attribute information and/or the incident angle of the predictive coding light to the target object.
Different illumination intensities, light distribution modes, and incidence directions all affect the quality of the image ultimately obtained, so these attributes must be adjusted so that the adjusted coded light can produce an image of higher resolution.
In this embodiment, only one of the illumination intensity, the light distribution mode, and the incident direction in the attribute information may be adjusted, or all of the three may be adjusted.
S2063, according to the test response result, obtaining the diffuse reflection light and/or the specular reflection light and/or the refraction light of the target object to the predictive coding light in the test response result.
S2064, obtaining a test response image formed by the diffuse reflection light and/or the specular reflection light and/or the refraction light in the test image according to the test image.
When the predicted coded light irradiates on a target object, the predicted coded light can generate diffuse reflection, specular reflection and refraction, and the test response image is obtained by collecting the light. In this embodiment, only one or two of the diffuse reflected light, the specular reflected light, and the refracted light may be used to obtain the test response image.
S2065, according to the pixel information corresponding to the test response image, adjusting the illumination intensity and/or the light distribution mode corresponding to the prediction coding light and/or the incident angle of the prediction coding light to the target object until the pixel information corresponding to the test response image collected under the irradiation of the prediction coding light after the illumination intensity and/or the light distribution mode and/or the incident angle are adjusted meets the set condition, and obtaining the prediction coding light after the illumination intensity and/or the light distribution mode and/or the incident angle are adjusted.
The set condition in this embodiment is that homologous-point information matching is satisfied and that the requirements of three-dimensional reconstruction data processing are satisfied.
The illumination intensity, light distribution mode, and incidence angle of the predictive coded light all affect the pixel information (PID) of the image formed by the target object under its illumination; only predictive coded light whose PID satisfies the set condition is the required coded light.
S2066, obtaining the matching coded light adapted to the target object according to the predicted coded light after the attribute information is adjusted.
The coded light adjusted in step S2065 is the coded light suitable for constructing the image of the target object.
S300, directing the matched coded light to the target object, and constructing a target image corresponding to the target object under the irradiation of the coded light.
After the matching coded light is obtained in step S200, the matching coded light may be used to illuminate the target object, and then images formed by the target object under the illumination of the matching coded light are collected, and the images are used to construct the target image.
Step S300 includes steps S301 and S302 as follows:
s301, collecting each local image corresponding to the target object from each angle of the target object.
In this embodiment, each local image is a 4D light-field picture of the target object, obtained by shooting the object from a different viewing angle. The 4D light-field pictures (elemental images) acquired at different viewing angles differ slightly; the information carried by the target object is reflected in these slight differences, which are the carriers passed to the reconstruction process. The elemental-image points produced by a single target point are called homologous, or corresponding, points (CP). The disparity of a CP at a given depth is expressed quantitatively through the system's setup parameters and its depth-information variable; the disparity can be represented as the number of pixels multiplied by the size of a single pixel. This quantitative relationship is the foundational theory of machine-vision-based measurement systems. By matching CPs carrying different depth information, disparity information is extracted to represent the different depths, and eliminating the defocus information at each depth plane yields a tomographic three-dimensional reconstruction.
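The "disparity = pixel count x pixel size" relation above, combined with a depth relation, can be sketched as follows. The patent leaves the exact system-parameter relation unspecified; classic stereo triangulation (depth = focal length x baseline / disparity) is assumed here purely for illustration, and all numeric values are hypothetical.

```python
def disparity_mm(num_pixels, pixel_size_mm):
    """Disparity = number of pixels times the size of a single pixel."""
    return num_pixels * pixel_size_mm

def depth_from_disparity(focal_mm, baseline_mm, disp_mm):
    """Classic stereo triangulation, assumed as the quantitative relation
    through system setup parameters (not specified in the patent)."""
    return focal_mm * baseline_mm / disp_mm

d = disparity_mm(10, 0.005)              # 10-pixel CP shift on 5 um pixels
z = depth_from_disparity(50.0, 20.0, d)  # hypothetical 50 mm focal, 20 mm baseline
print(z)  # ~20000 mm
```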
S302, constructing a three-dimensional target depth image in the target image according to each local image.
The detailed procedure of step S302 is as follows:
and constructing a depth network model through the 4D light field picture, and outputting an initial parallax estimation image through the depth network model by performing convolution on the 4D light field picture. Meanwhile, multi-view light field stereo matching can be performed by using the 4D light field picture, in the stereo matching process, cost construction is performed by using pixel information and eight-direction gradient information in a light field picture, and a parallax estimation picture, namely a reference parallax picture, with the minimum cost can be obtained by a winner's general algorithm.
The A-KAZE feature-extraction operator is used to extract feature points from the different light-field pictures; depth matching of homologous points then yields the depth information of the matched feature points, giving the A-KAZE disparity-estimation map.
The reference disparity map and the A-KAZE disparity-estimation map obtained by matching are then fused with the depth network model's output by convolution to obtain the final disparity map.
During network training, the final disparity map is compared with its ground-truth value, and the network is trained using the distance between the two. The trained network can then be used directly for disparity estimation, after which the depth image of the target object is obtained from the optical parameters.
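"The distance between the disparity map and the true value" is not defined further; a mean-absolute (L1) distance is one plausible reading and is sketched here as an assumption.

```python
def l1_disparity_loss(pred, truth):
    """Mean absolute distance between the fused disparity map and its ground
    truth -- one plausible training loss for the network described above."""
    assert len(pred) == len(truth)
    return sum(abs(p - t) for p, t in zip(pred, truth)) / len(pred)

loss = l1_disparity_loss([1.0, 2.0, 3.5], [1.0, 2.5, 3.0])
print(loss)  # -> 0.333...
```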
The depth image of the target object obtained in this embodiment captures the object's fine detail; by analyzing the depth image, one can check whether the target object's parameters and performance meet requirements.
In summary, the target image is formed by illuminating the target object with coded light; because the response light formed after the coded light interacts with the target object carries a large amount of the object's characteristic information, using coded light improves the resolution of the corresponding target image. In addition, the invention obtains the matched coded light from the target object's parameter information, i.e. it determines the adapted coded light from prior knowledge of the object; only by illuminating the target object with coded light adapted to it can the resolution of the resulting target image be further improved.
Exemplary devices
The embodiment also provides a device based on the three-dimensional measurement method for constructing the encoded image projection, and the device comprises the following components:
the data acquisition module is used for acquiring parameter information of a target object;
the coded light generating module is used for obtaining matched coded light matched with the target object according to the parameter information of the target object;
and the image generation module is used for directing the matched coded light to the target object and constructing a target image corresponding to the target object under the irradiation of the matched coded light.
Based on the above embodiments, the present invention further provides a terminal device, and a schematic block diagram thereof may be as shown in fig. 4. The terminal equipment comprises a processor, a memory, a network interface, a display screen and a temperature sensor which are connected through a system bus. Wherein the processor of the terminal device is configured to provide computing and control capabilities. The memory of the terminal equipment comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the terminal device is used for connecting and communicating with an external terminal through a network. The computer program is executed by a processor to implement an image construction method. The display screen of the terminal equipment can be a liquid crystal display screen or an electronic ink display screen, and the temperature sensor of the terminal equipment is arranged in the terminal equipment in advance and used for detecting the operating temperature of the internal equipment.
It will be understood by those skilled in the art that the block diagram of fig. 4 is only a block diagram of a part of the structure related to the solution of the present invention, and does not constitute a limitation to the terminal device to which the solution of the present invention is applied, and a specific terminal device may include more or less components than those shown in the figure, or may combine some components, or have different arrangements of components.
In one embodiment, a terminal device is provided, where the terminal device includes a memory, a processor, and an image construction program stored in the memory and executable on the processor, and when the processor executes the image construction program, the following operations are implemented:
acquiring parameter information of a target object;
obtaining matched coded light adapted to the target object according to the parameter information of the target object;
and emitting the coded light to the target object to construct a target image corresponding to the target object under the irradiation of the coded light.
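As an illustrative sketch only (not part of the claimed method), the three operations above might be arranged as follows; `acquire_parameters`, the fringe-period matching rule, and the toy reflectance model are all hypothetical placeholders invented for this example:

```python
import numpy as np

def acquire_parameters(target):
    # Hypothetical stand-in for a sensor query: returns the surface
    # roughness and a shape label of the target object.
    return {"roughness": target["roughness"], "shape": target["shape"]}

def match_coded_light(params, width=256):
    # Assumed matching rule: rougher surfaces get a coarser fringe
    # period so the code survives surface scattering.
    period = 8 if params["roughness"] < 0.3 else 16
    x = np.arange(width)
    # Sinusoidal fringe pattern standing in for the coded light
    return 0.5 + 0.5 * np.cos(2 * np.pi * x / period)

def construct_target_image(pattern, reflectance=0.8):
    # Toy projection model: the captured image is the projected
    # pattern scaled by the surface reflectance.
    return pattern * reflectance

params = acquire_parameters({"roughness": 0.2, "shape": "sphere"})
pattern = match_coded_light(params)
image = construct_target_image(pattern)
```

The sketch compresses the whole pipeline into one pass; in practice the matching step would draw on a richer parameter set, as claims 2 to 6 below elaborate.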
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above may be implemented by a computer program instructing the relevant hardware; the computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, databases, or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
In summary, the present invention discloses a three-dimensional measurement method and apparatus based on constructing encoded image projection, the method comprising: acquiring parameter information of a target object; obtaining matched coded light adapted to the target object according to the parameter information of the target object; and emitting the coded light to the target object to construct a target image corresponding to the target object under the irradiation of the coded light. The method can improve the resolution of the target image.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A three-dimensional measurement method based on constructing a coded image projection, the three-dimensional measurement method comprising an image construction method, characterized in that the image construction method comprises:
acquiring parameter information of a target object;
obtaining matched coded light adapted to the target object according to the parameter information of the target object;
and directing the matched coded light to the target object to construct a target image corresponding to the target object under the irradiation of the matched coded light.
2. The three-dimensional measurement method based on constructing a coded image projection according to claim 1, wherein the obtaining of the matching coded light adapted to the target object according to the parameter information of the target object comprises:
obtaining the object shape in the parameter information according to the parameter information of the target object;
and obtaining matched coded light adapted to the target object according to the object shape.
3. The three-dimensional measurement method based on constructing a coded image projection according to claim 1, wherein the obtaining of matched coded light adapted to the target object according to the parameter information of the target object comprises:
obtaining the object surface roughness and the object shape in the parameter information according to the parameter information of the target object;
and obtaining matched coded light adapted to the target object according to the object surface roughness and the object shape.
4. The three-dimensional measurement method based on constructing a coded image projection according to claim 1, wherein the obtaining of the matching coded light adapted to the target object according to the parameter information of the target object comprises:
obtaining predicted coded light corresponding to the parameter information according to the parameter information of the target object;
emitting the predicted coded light to the target object, and obtaining a test response result of the target object to the predicted coded light;
obtaining a test image corresponding to the target object according to the test response result;
and obtaining matched coded light adapted to the target object according to the test image.
5. The three-dimensional measurement method based on constructing a coded image projection according to claim 4, wherein obtaining the matching coded light adapted to the target object according to the test image comprises:
obtaining pixel information corresponding to the test image according to the test image;
adjusting attribute information corresponding to the predicted coded light according to the pixel information corresponding to the test image, until the pixel information corresponding to the test image collected under the irradiation of the predicted coded light after the attribute information is adjusted satisfies a set condition, to obtain the predicted coded light after the attribute information is adjusted;
and obtaining matched coded light adapted to the target object according to the predicted coded light after the attribute information is adjusted.
6. The three-dimensional measurement method based on constructing a coded image projection according to claim 5, wherein the adjusting of the attribute information corresponding to the predicted coded light according to the pixel information corresponding to the test image, until the pixel information corresponding to the test image collected under the irradiation of the predicted coded light after the attribute information is adjusted satisfies a set condition, to obtain the predicted coded light after the attribute information is adjusted, comprises:
obtaining, according to the attribute information corresponding to the predicted coded light, the illumination intensity and/or the light distribution mode in the attribute information and/or the incident angle of the predicted coded light to the target object;
obtaining, according to the test response result, the diffuse reflection light and/or specular reflection light and/or refracted light of the target object in response to the predicted coded light;
obtaining, according to the test image, a test response image formed in the test image by the diffuse reflection light and/or the specular reflection light and/or the refracted light;
and adjusting the illumination intensity and/or the light distribution mode corresponding to the predicted coded light and/or the incident angle of the predicted coded light to the target object according to the pixel information corresponding to the test response image, until the pixel information corresponding to the test response image collected under the irradiation of the predicted coded light after the illumination intensity and/or the light distribution mode and/or the incident angle are adjusted satisfies a set condition, so as to obtain the predicted coded light after the illumination intensity and/or the light distribution mode and/or the incident angle are adjusted.
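The iterative adjustment described in claims 5 and 6 can be illustrated with a minimal closed-loop sketch. Everything here is assumed for illustration: the camera response model, the mean-brightness "set condition", and the proportional correction rule are hypothetical stand-ins, not the patented procedure:

```python
import numpy as np

TARGET_MEAN = 0.5  # assumed "set condition": mid-range mean pixel value
TOL = 0.02

def capture_test_image(intensity, surface_gain=0.6, seed=0):
    # Toy response model: pixel values scale with illumination
    # intensity, plus sensor noise, clipped at saturation.
    rng = np.random.default_rng(seed)
    img = intensity * surface_gain + rng.normal(0.0, 0.005, size=(64, 64))
    return np.clip(img, 0.0, 1.0)

def tune_predicted_light(intensity=1.0, max_iters=50):
    # Adjust the illumination-intensity attribute of the predicted
    # coded light until the test image's pixel information meets
    # the set condition.
    for _ in range(max_iters):
        mean = capture_test_image(intensity).mean()
        if abs(mean - TARGET_MEAN) < TOL:
            break
        intensity *= TARGET_MEAN / mean  # proportional correction
    return intensity

tuned = tune_predicted_light()
```

A real system would adjust the light distribution mode and incident angle the same way, driving pixel statistics of the separated diffuse, specular, and refracted components toward their own set conditions.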
7. The three-dimensional measurement method based on construction coded image projection as claimed in claim 1, wherein said directing the matching coded light to the target object, constructing a target image corresponding to the target object under the illumination of the matching coded light, comprises:
acquiring each local image corresponding to the target object from each angle of the target object;
and constructing a three-dimensional target depth image in the target image according to each local image.
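Claim 7's fusion of per-angle local images can be sketched as follows, under the simplifying assumptions (not stated in the patent) that each view has already been converted to a calibrated point cloud and that views differ only by a known rotation about the z-axis:

```python
import numpy as np

def rotation_z(deg):
    # Rotation matrix about the z-axis for a known viewing angle.
    t = np.radians(deg)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def fuse_views(local_clouds):
    # Transform each local image's points into the common frame and
    # concatenate them into one target point cloud (N x 3).
    merged = [(rotation_z(angle) @ pts.T).T for angle, pts in local_clouds]
    return np.vstack(merged)

# Two hypothetical local measurements of the same surface, 90 degrees apart
views = [(0.0, np.array([[1.0, 0.0, 0.0]])),
         (90.0, np.array([[1.0, 0.0, 0.0]]))]
cloud = fuse_views(views)
```

In practice the per-view transforms would come from system calibration or registration rather than a single known axis, and the merged cloud would then be resampled into the three-dimensional target depth image.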
8. A three-dimensional measurement apparatus based on constructing a coded image projection, characterized in that the apparatus comprises:
the data acquisition module is used for acquiring parameter information of a target object;
the coded light generation module is used for obtaining matched coded light adapted to the target object according to the parameter information of the target object;
and the image generation module is used for directing the matched coded light to the target object and constructing a target image corresponding to the target object under the irradiation of the matched coded light.
9. A terminal device, characterized in that the terminal device comprises a memory, a processor and an image construction program stored in the memory and executable on the processor, the processor implementing the steps of the image construction method according to any one of claims 1 to 7 when executing the image construction program.
10. A computer-readable storage medium, having stored thereon an image construction program which, when executed by a processor, implements the steps of the image construction method according to any one of claims 1 to 7.
CN202111468643.5A 2021-12-03 2021-12-03 Three-dimensional measurement method and device based on construction of coded image projection Active CN114166146B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111468643.5A CN114166146B (en) 2021-12-03 2021-12-03 Three-dimensional measurement method and device based on construction of coded image projection


Publications (2)

Publication Number Publication Date
CN114166146A true CN114166146A (en) 2022-03-11
CN114166146B CN114166146B (en) 2024-07-02

Family

ID=80482864

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111468643.5A Active CN114166146B (en) 2021-12-03 2021-12-03 Three-dimensional measurement method and device based on construction of coded image projection

Country Status (1)

Country Link
CN (1) CN114166146B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101466998A (en) * 2005-11-09 2009-06-24 几何信息学股份有限公司 Method and apparatus for absolute-coordinate three-dimensional surface imaging
CN101482398A (en) * 2009-03-06 2009-07-15 北京大学 Fast three-dimensional appearance measuring method and device
CN105931196A (en) * 2016-04-11 2016-09-07 天津大学 Fourier optical modeling-based coded aperture camera image restoration method
DE102016122515A1 (en) * 2015-11-27 2017-06-01 Ulrich Breitmeier Testing and / or calibration standard
CN107507135A (en) * 2017-07-11 2017-12-22 天津大学 Image reconstructing method based on coding aperture and target
CN108645353A (en) * 2018-05-14 2018-10-12 四川川大智胜软件股份有限公司 Three dimensional data collection system and method based on the random binary coding light field of multiframe
CN108986178A (en) * 2017-06-01 2018-12-11 宁波盈芯信息科技有限公司 A kind of random coded method for generating pattern and equipment for structure light coding
CN109540032A (en) * 2019-01-12 2019-03-29 吉林大学 A kind of non-contact laser detection revolving body cross section profile pattern error device
CN111637850A (en) * 2020-05-29 2020-09-08 南京航空航天大学 Self-splicing surface point cloud measuring method without active visual marker
TW202124938A (en) * 2019-12-19 2021-07-01 國立交通大學 Scattering detection apparatus
CN113587816A (en) * 2021-08-04 2021-11-02 天津微深联创科技有限公司 Array type large-scene structured light three-dimensional scanning measurement method and device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Da Li, Chi Fai Cheung, Bo Wang, Mingyu Liu: "A study of a priori knowledge-assisted multi-scopic metrology for freeform surface measurement", 15th CIRP Conference on Computer Aided Tolerancing (CIRP CAT 2018), vol. 75, pp. 337-342. *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant