CN114166146B - Three-dimensional measurement method and device based on construction of coded image projection - Google Patents

Info

Publication number: CN114166146B (granted); earlier publication CN114166146A
Application number: CN202111468643.5A
Authority: CN (China)
Legal status: Active
Prior art keywords: target object, light, image, matched, coded
Other languages: Chinese (zh)
Inventors: 黎达, 张志辉, 高三山
Assignee (current and original): Hong Kong Polytechnic University HKPU; Shenzhen Research Institute HKPU
Application filed by Hong Kong Polytechnic University HKPU and Shenzhen Research Institute HKPU; priority to CN202111468643.5A

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01B: MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00: Measuring arrangements characterised by the use of optical techniques
    • G01B11/24: Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention relates to the technical field of image processing, and in particular to a three-dimensional measurement method and device based on constructing coded image projection. The method acquires parameter information of a target object; obtains matched coded light adapted to the target object according to that parameter information; and directs the matched coded light at the target object to construct a target image corresponding to the target object under irradiation by the matched coded light. The invention irradiates the target object with coded light to form the target image; because the response light formed when the target object responds to the coded light carries a large amount of characteristic information about the object, the coded light can be used to improve the resolution of the target image corresponding to the target object.

Description

Three-dimensional measurement method and device based on construction of coded image projection
Technical Field
The invention relates to the technical field of image processing, in particular to a three-dimensional measurement method and device based on construction of coded image projection.
Background
When white light is irradiated onto an object, the object responds by diffuse reflection, specular reflection, refraction and so on; the diffusely reflected, specularly reflected and refracted light is then collected with an optical instrument to construct an image of the object. However, because the response light that an object produces under white light carries little characteristic information about the object, the resolution of images constructed from that response light is low.
In summary, images constructed by prior-art techniques have low resolution.
Accordingly, there is a need for improvement and advancement in the art.
Disclosure of Invention
In order to solve the above technical problem, the invention provides a three-dimensional measurement method and device based on constructing coded image projection, which address the low resolution of images constructed by the prior art.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
in a first aspect, the present invention provides a three-dimensional measurement method based on constructing a projection of an encoded image, the three-dimensional measurement method comprising an image construction method, wherein the image construction method comprises:
acquiring parameter information of a target object;
obtaining matched coded light adapted to the target object according to the parameter information of the target object;
and directing the matched coded light at the target object, and constructing a target image corresponding to the target object under the irradiation of the matched coded light.
In one implementation, obtaining the matched coded light adapted to the target object according to the parameter information of the target object includes:
obtaining the object shape in the parameter information according to the parameter information of the target object; and
obtaining the matched coded light adapted to the target object according to the object shape.
In one implementation, obtaining the matched coded light adapted to the target object according to the parameter information of the target object includes:
obtaining the object surface roughness and the object shape in the parameter information according to the parameter information of the target object; and
obtaining the matched coded light adapted to the target object according to the object surface roughness and the object shape.
In one implementation, obtaining the matched coded light adapted to the target object according to the parameter information of the target object includes:
obtaining predictive coded light corresponding to the parameter information according to the parameter information of the target object;
directing the predictive coded light at the target object, and obtaining a test response result of the target object to the predictive coded light;
obtaining a test image corresponding to the target object according to the test response result; and
obtaining the matched coded light adapted to the target object according to the test image.
In one implementation, obtaining the matched coded light adapted to the target object according to the test image includes:
obtaining pixel information corresponding to the test image according to the test image;
adjusting attribute information corresponding to the predictive coded light according to the pixel information corresponding to the test image, until the pixel information of the test image collected under irradiation by the adjusted predictive coded light meets a set condition, thereby obtaining the attribute-adjusted predictive coded light; and
obtaining the matched coded light adapted to the target object according to the attribute-adjusted predictive coded light.
In one implementation, adjusting the attribute information corresponding to the predictive coded light according to the pixel information corresponding to the test image, until the pixel information of the test image collected under irradiation by the adjusted predictive coded light meets the set condition, thereby obtaining the attribute-adjusted predictive coded light, includes:
obtaining the illumination intensity and/or the light distribution mode in the attribute information, and/or the angle of incidence of the predictive coded light on the target object, according to the attribute information corresponding to the predictive coded light;
obtaining, according to the test response result, the diffuse reflection light and/or specular reflection light and/or refracted light with which the target object responds to the predictive coded light;
obtaining, according to the test image, the test response image formed in the test image by the diffuse reflection light and/or specular reflection light and/or refracted light; and
adjusting the illumination intensity and/or light distribution mode of the predictive coded light, and/or its angle of incidence on the target object, according to the pixel information corresponding to the test response image, until the pixel information of the test response image collected under the adjusted predictive coded light meets the set condition, thereby obtaining the predictive coded light with adjusted illumination intensity and/or light distribution mode and/or angle of incidence.
In one implementation, directing the matched coded light at the target object and constructing a target image corresponding to the target object under the irradiation of the matched coded light includes:
collecting the local images of the target object from each viewing angle of the target object; and
constructing a three-dimensional target depth image in the target image from the local images.
In a second aspect, an embodiment of the present invention further provides a three-dimensional measurement apparatus based on constructing coded image projection, where the apparatus includes the following components:
a data acquisition module, used for acquiring parameter information of the target object;
a coded light generation module, used for obtaining matched coded light adapted to the target object according to the parameter information of the target object; and
an image generation module, used for directing the matched coded light at the target object and constructing a target image corresponding to the target object under the irradiation of the matched coded light.
In a third aspect, an embodiment of the present invention further provides a terminal device, where the terminal device includes a memory, a processor, and an image construction program stored in the memory and capable of running on the processor, and when the processor executes the image construction program, the steps of the image construction method described above are implemented.
In a fourth aspect, an embodiment of the present invention further provides a computer readable storage medium, where an image construction program is stored, where the image construction program, when executed by a processor, implements the steps of the image construction method described above.
The beneficial effects are as follows: the invention irradiates the target object with coded light to form the target image; because the response light formed when the target object responds to the coded light carries a large amount of characteristic information about the object, the coded light can be used to improve the resolution of the target image corresponding to the target object. In addition, the invention obtains coded light matched with the parameter information according to the parameter information of the target object, which amounts to determining, from prior knowledge of the target object, the coded light that is adapted to it; only when the target object is irradiated with coded light adapted to it can the resolution of the target image formed under that irradiation be further improved.
Drawings
FIG. 1 is an overall flow chart of the present invention;
FIG. 2 is a diagram of the PID (pixel value distribution information) obtained under normal light irradiation in the embodiment;
FIG. 3 is a diagram of the PID obtained under coded light irradiation in the embodiment;
FIG. 4 is a schematic block diagram of the internal structure of a terminal device according to an embodiment of the present invention.
Detailed Description
The technical scheme of the invention is clearly and completely described below with reference to the examples and the drawings. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Research shows that when white light is irradiated onto an object, the object responds by diffuse reflection, specular reflection, refraction and so on; the diffusely reflected, specularly reflected and refracted light is then collected with an optical instrument to construct an image of the object. However, because the response light that an object produces under white light carries little characteristic information about the object, the resolution of images constructed from that response light is low. In summary, images constructed by prior-art techniques have low resolution.
In order to solve the above technical problem, the invention provides a three-dimensional measurement method and device based on constructing coded image projection, which address the low resolution of images constructed by the prior art. In a specific implementation, the coded light matched with the target object is first determined according to the parameter information of the target object; the determined coded light is then directed at the target object; and finally the target image formed by the target object under irradiation by that coded light is collected. A target image obtained in this way has higher resolution.
For example, for a target object a' with parameter information a, the coded light matched with parameter information a is A; for a target object b' with parameter information b, the coded light matched with parameter information b is B. Different parameter information, compiled from prior knowledge, is matched with different coded light: when the target image corresponding to target object a' is to be collected, target object a' must be irradiated with coded light A to obtain a high-resolution target image; likewise, target object b' must be irradiated with coded light B. There is a one-to-one correspondence between parameter information and coded light.
Exemplary method
The three-dimensional measurement method based on the construction of the coded image projection of the embodiment can be applied to terminal equipment, and the terminal equipment can be a terminal product with a data processing function, such as a computer. In this embodiment, as shown in fig. 1, the image construction method specifically includes the following steps:
S100, acquiring parameter information of a target object.
In this embodiment, the parameter information is geometric parameter information. The target object is an optical device whose surface is a free-form surface; for example, the target object in this embodiment may be a lens to be inspected. Optical free-form surfaces are widely used in product development to realize specifically designed optical and mechanical functions. The complexity of optical free-form surface measurement, with its low data density and speed, poses considerable challenges for process control and quality assessment.
S200, obtaining the matched coded light adapted to the target object according to the parameter information of the target object.
Step S200 includes two parts. The first part searches for predictive coded light suitable for the target object according to the parameter information of the target object. The second part irradiates the target object with the predictive coded light, collects the characteristic information of the image formed by the target object under that irradiation, and adjusts the predictive coded light according to this characteristic information until the characteristic information of the image meets a set condition; the adjusted predictive coded light is the matched coded light, i.e. the coded light subsequently used to construct the target image corresponding to the target object. Of course, this embodiment may also obtain the matched coded light from the parameter information of the target object alone, that is, the second part may be omitted.
When step S200 includes only the first portion, step S200 includes steps S201 and S202 as follows:
S201, according to the parameter information of the target object, obtaining the object shape and/or the object surface roughness in the parameter information.
S202, obtaining the matched coded light adapted to the target object according to the object shape and/or surface roughness.
Steps S201 and S202 obtain the matched coded light directly from the parameter information, i.e. the second part is omitted. The matched coded light suitable for the target object is obtained through the correspondence, in prior knowledge, between object shape and surface roughness on one hand and coded light on the other. The matched coded light is characterised by its intensity, angle of incidence and distribution shape, all adapted to the target object; the distribution shape of the coded light may, for example, be a sector.
For example, suppose prior knowledge states that an object with a circular, rough surface suits coded light with intensity a1, incidence angle a2 and light distribution mode a3, while an object with an elliptical, smooth surface suits coded light with intensity b1, incidence angle b2 and light distribution mode b3. Once the shape and surface roughness of the target object are known, the coded light suitable for it can be selected from this prior knowledge.
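The prior-knowledge lookup described above can be sketched as a small table keyed on shape and roughness. All names and parameter values below are illustrative placeholders, not values from the patent; only the "sector" distribution shape is taken from the text.

```python
# Hypothetical prior-knowledge table mapping (object shape, surface
# roughness) -> coded-light parameters (intensity, incidence angle in
# degrees, light distribution mode). All values are placeholders.
PRIOR_KNOWLEDGE = {
    ("circular", "rough"): {"intensity": 0.8, "angle_deg": 30, "pattern": "sector"},
    ("elliptical", "smooth"): {"intensity": 0.5, "angle_deg": 45, "pattern": "sector"},
}

def match_coded_light(shape: str, roughness: str) -> dict:
    """Return the coded-light parameters adapted to the target object;
    raises KeyError when no prior knowledge covers this combination."""
    return PRIOR_KNOWLEDGE[(shape, roughness)]

print(match_coded_light("circular", "rough")["angle_deg"])  # -> 30
```

A real system would populate such a table from measured prior knowledge rather than hand-written constants.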
When step S200 consists of both the first part and the second part, step S200 includes the following steps S203, S204, S205 and S206:
S203, obtaining the predictive coded light corresponding to the parameter information according to the parameter information of the target object.
In this embodiment, the predictive coded light applied to the target object is selected according to the parameter information, which includes the shape and surface roughness of the target object. The intensity, angle of incidence and distribution form of the predictive coded light at this stage lie only within an approximate range; the predictive coded light is adjusted in subsequent steps to obtain the required matched coded light.
S204, directing the predictive coded light at the target object, and obtaining a test response result of the target object to the predictive coded light.
In this embodiment, after the target object is irradiated with the predictive coded light, the action of the target object on that light forms diffuse reflection light, specular reflection light and refracted light. Together these constitute the test response result of the target object to the predictive coded light.
S205, obtaining a test image corresponding to the target object according to the test response result.
The diffuse reflection light, specular reflection light and refracted light from step S204 are collected to form the test image.
S206, obtaining, according to the test image, the matched coded light adapted to the target object, where step S206 includes the following steps S2061 to S2066:
S2061, obtaining pixel information corresponding to the test image according to the test image.
The test image in this embodiment is obtained under irradiation by coded light, not white light. Coded light is chosen over white light for the following reason:
the target object in this embodiment is a feature-less free-form surface, so under normal illumination its PID (pixel value distribution information) is everywhere similar. When the free-form surface is irradiated with coded light, however, the PID of the image it forms differs under different coded light; even a slight change in the coded light produces a clearly visible change in the PID. The coded light is therefore adjusted according to these PID changes, so that the adjusted coded light is suitable for the subsequent acquisition of the target image. FIG. 2 shows the PID obtained under normal illumination and FIG. 3 the PID obtained by irradiating the target object with coded light; comparing the two shows that the PID obtained with coded light is richer. The richer the PID, the better the accuracy and efficiency of information matching, and hence the better the accuracy of the 3D reconstruction.
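As an illustration of why a richer PID helps, one can compare the Shannon entropy of the pixel-value distribution under uniform versus coded illumination. The entropy metric and the synthetic pixel data below are our stand-ins, not the patent's measure:

```python
import math
from collections import Counter

def pid_entropy(pixels):
    """Shannon entropy (in bits) of the pixel value distribution (PID).
    A richer spread of pixel values gives a higher entropy."""
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(pixels).values())

# A featureless free-form surface under normal light: nearly uniform values.
normal_light = [128] * 96 + [129] * 4
# The same surface under coded light: structured, widely spread values.
coded_light = list(range(100))

assert pid_entropy(coded_light) > pid_entropy(normal_light)
```

The coded-light distribution here has entropy near log2(100) bits, while the near-uniform one is close to zero, mirroring the contrast between FIG. 2 and FIG. 3.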
S2062, obtaining the illumination intensity and/or the light distribution mode in the attribute information, and/or the angle of incidence of the predictive coded light on the target object, according to the attribute information corresponding to the predictive coded light.
Different illumination intensities, light distribution modes and directions of incidence affect the quality of the finally obtained image, so the illumination intensity, light distribution mode and direction of incidence need to be adjusted so that the adjusted coded light yields an image of higher resolution.
In this embodiment, any one of the illumination intensity, light distribution mode and direction of incidence in the attribute information may be adjusted on its own, or all three may be adjusted.
S2063, obtaining, according to the test response result, the diffuse reflection light and/or specular reflection light and/or refracted light with which the target object responds to the predictive coded light.
S2064, obtaining, according to the test image, the test response image formed in the test image by the diffuse reflection light and/or specular reflection light and/or refracted light.
When the predictive coded light irradiates the target object it undergoes diffuse reflection, specular reflection and refraction, and the test response image is obtained by collecting these lights. In this embodiment, the test response image may also be obtained from only one, or any two, of the diffuse reflection light, specular reflection light and refracted light.
S2065, adjusting the illumination intensity and/or light distribution mode of the predictive coded light, and/or its angle of incidence on the target object, according to the pixel information corresponding to the test response image, until the pixel information of the test response image acquired under the adjusted predictive coded light meets the set condition, thereby obtaining the predictive coded light with adjusted illumination intensity and/or light distribution mode and/or angle of incidence.
The set condition in this embodiment is that the matching conditions for homonymy point information are satisfied and that the processing conditions for the three-dimensional reconstruction data are satisfied.
Different illumination intensities, light distribution modes and angles of incidence of the predictive coded light all affect the pixel information (PID) of the image of the target object under its irradiation; only predictive coded light whose pixel information (PID) meets the set condition is the required coded light.
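A minimal sketch of the adjust-until-satisfied loop of steps S2062 to S2065, restricted to a single attribute (illumination intensity). The response model `pid_metric`, its optimum at 0.7, and the threshold are invented stand-ins, since the patent leaves the set condition abstract:

```python
def pid_metric(intensity: float) -> float:
    """Stand-in for the pixel-information (PID) score of the test response
    image captured under predictive coded light of the given intensity.
    In the real system this value would come from the camera; the peak at
    a hypothetical optimum intensity of 0.7 is a purely synthetic model."""
    return 1.0 - abs(intensity - 0.7)

def tune_intensity(start: float, threshold: float = 0.95,
                   step: float = 0.05, max_iters: int = 100) -> float:
    """Adjust one attribute (illumination intensity) of the predictive coded
    light until the pixel information of the captured test response image
    meets the set condition (metric >= threshold)."""
    intensity = start
    for _ in range(max_iters):
        if pid_metric(intensity) >= threshold:
            break  # set condition met: this is the matched coded light
        # Move the attribute in whichever direction improves the metric.
        up, down = pid_metric(intensity + step), pid_metric(intensity - step)
        intensity += step if up >= down else -step
    return intensity

matched = tune_intensity(0.3)
assert pid_metric(matched) >= 0.95
```

The same loop structure would apply to the light distribution mode and the angle of incidence, or to all three attributes jointly.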
And S2066, obtaining the matched coded light adapted to the target object according to the predicted coded light after the attribute information is adjusted.
The coded light adjusted in step S2065 is the coded light suitable for constructing an image of the target object.
S300, directing the matched coded light at the target object, and constructing a target image corresponding to the target object under irradiation by the coded light.
After the matched coded light is obtained in step S200, the target object can be irradiated with it; images formed by the target object under this irradiation are then acquired, and the target image is constructed from these images.
Step S300 includes steps S301 and S302 as follows:
S301, collecting the local images of the target object from each viewing angle of the target object.
In this embodiment, each local image is a 4D light field picture of the target object, obtained by shooting the target object from a different viewing angle. The 4D light field pictures (elemental images) acquired at different viewing angles have subtle differences; the information carried by the target object is reflected in these differences, which are the carriers passed to the reconstruction process. The different elemental image points produced by a single target point are referred to as homonymy points (CPs). The difference between homonymy points at a given depth is expressed quantitatively through the system's setting parameters and its depth information variables; the parallax information can be represented as the number of pixels multiplied by the size of a single pixel. This quantitative relationship is the underlying theory of machine-vision-based measurement systems. Parallax information is extracted by matching CPs carrying different depth information, so as to represent that depth information, and the defocus information eliminated at each depth plane supports a tomographic three-dimensional reconstruction.
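The quantitative relation stated above, parallax as pixel count times single-pixel size, can be written out directly. The depth formula below is the standard pinhole-stereo relation Z = f * B / d, used as an illustrative stand-in rather than the patent's exact derivation, and the baseline, focal length and pixel size are hypothetical:

```python
def disparity_mm(pixel_shift: int, pixel_size_mm: float) -> float:
    """Parallax of a homonymy point (CP): number of pixels multiplied by
    the size of a single pixel, as stated in the text."""
    return pixel_shift * pixel_size_mm

def depth_from_disparity(baseline_mm: float, focal_mm: float,
                         disparity: float) -> float:
    """Illustrative pinhole-stereo depth relation Z = f * B / d."""
    return focal_mm * baseline_mm / disparity

# A 12-pixel shift with 5-micron pixels gives 0.06 mm of parallax.
d = disparity_mm(12, 0.005)
z = depth_from_disparity(50.0, 25.0, d)  # hypothetical baseline and focal length
print(round(z, 1))  # -> 20833.3 (mm)
```

In a light-field system the effective baseline between elemental images is set by the system parameters, which is why the text calls this relation the basis of the measurement.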
S302, constructing a three-dimensional target depth image in the target image according to each local image.
The detailed procedure of step S302 is as follows:
A depth network model is constructed from the 4D light field pictures, and an initial parallax estimation map is output by convolving the 4D light field pictures with the depth network model. At the same time, multi-view light field stereo matching can be performed using the 4D light field pictures: in the stereo matching process, the matching cost is constructed from the pixel information and the eight-direction gradient information in the light field images, and the parallax estimation map with the minimum cost, i.e. the reference parallax map, is obtained with a winner-takes-all algorithm.
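The winner-takes-all step can be sketched as follows, with a 1-D cost combining a pixel difference and a gradient difference in place of the eight-direction 2-D gradients used in the text; the gradient weight and the sample data are invented:

```python
def gradient(signal, i):
    """Forward-difference intensity gradient in 1-D (the text uses
    eight directions in 2-D; one direction suffices for a sketch)."""
    return signal[min(i + 1, len(signal) - 1)] - signal[i]

def wta_disparity(left, right, x, max_disp, w_grad=0.5):
    """Winner-takes-all: return the candidate disparity with the lowest
    matching cost, where cost = |pixel difference| plus a weighted
    |gradient difference|. The weight w_grad is an invented choice."""
    best_d, best_cost = 0, float("inf")
    for d in range(min(max_disp, x) + 1):
        cost = (abs(left[x] - right[x - d])
                + w_grad * abs(gradient(left, x) - gradient(right, x - d)))
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

# Right view = left view shifted by 2 pixels, so the true disparity is 2.
left = [10, 20, 80, 90, 30, 15, 10, 10]
right = left[2:] + [10, 10]
print(wta_disparity(left, right, x=4, max_disp=3))  # -> 2
```

A full implementation would evaluate this cost per pixel over a 2-D cost volume, exactly as the winner-takes-all pass in conventional stereo matching.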
Feature points in the different light field pictures are extracted with the A-KAZE feature extraction operator, and homonymy point depth matching is then carried out to obtain the depth information of the homonymous feature points, yielding an A-KAZE parallax estimation map.
The reference parallax map obtained by matching and the A-KAZE parallax estimation map are fused with the depth network model by convolution, giving the final parallax map.
During network training, the final parallax map is compared with the corresponding ground truth, and the distance between them is used to train the network. The trained network can be used directly for parallax estimation, and a depth image of the target object is then obtained from the optical parameters.
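The training signal described above, the distance between the final parallax map and its ground truth, can be sketched as a mean-absolute-error loss over flattened maps. The L1 form is our simplification, since the patent does not specify the distance:

```python
def l1_disparity_loss(predicted, ground_truth):
    """Mean absolute distance between a predicted disparity map and its
    ground truth (both flattened to 1-D lists for this sketch)."""
    assert len(predicted) == len(ground_truth)
    return sum(abs(p - g) for p, g in zip(predicted, ground_truth)) / len(predicted)

pred = [2.0, 2.5, 3.0, 3.5]   # fused disparity map (flattened)
truth = [2.0, 2.0, 3.0, 4.0]  # corresponding ground-truth values
print(l1_disparity_loss(pred, truth))  # -> 0.25
```

In the actual pipeline this scalar would drive backpropagation through the depth network, after which the trained network performs parallax estimation directly.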
The depth image of the target object obtained in this embodiment captures the object's fine detail; by analysing the depth image it is possible to check whether the parameters and performance of the target object meet requirements. The depth image is therefore constructed in this embodiment to inspect whether the machining of the target object is acceptable.
In summary, the invention irradiates the target object with coded light to form the target image; because the response light formed when the target object responds to the coded light carries a large amount of characteristic information about the object, using coded light improves the resolution of the target image corresponding to the target object. In addition, the invention obtains coded light matched with the parameter information according to the parameter information of the target object, which amounts to determining, from prior knowledge of the target object, the coded light that is adapted to it; only when the target object is irradiated with coded light adapted to it can the resolution of the target image formed under that irradiation be further improved.
Exemplary apparatus
This embodiment also provides an apparatus based on the three-dimensional measurement method of constructing coded image projection, which includes the following components:
a data acquisition module, used for acquiring parameter information of the target object;
a coded light generation module, used for obtaining matched coded light adapted to the target object according to the parameter information of the target object; and
an image generation module, used for directing the coded light at the target object and constructing a target image corresponding to the target object under irradiation by the coded light.
Based on the above embodiment, the present invention also provides a terminal device, whose functional block diagram may be as shown in FIG. 4. The terminal device comprises a processor, a memory, a network interface, a display screen and a temperature sensor connected through a system bus. The processor of the terminal device provides computing and control capabilities. The memory of the terminal device comprises a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running the operating system and the computer program from the non-volatile storage medium. The network interface of the terminal device is used to communicate with external terminals over a network connection. The computer program, when executed by the processor, implements the image construction method. The display screen of the terminal device may be a liquid crystal display or an electronic-ink display, and the temperature sensor is preset inside the terminal device to detect the operating temperature of the internal components.
It will be appreciated by persons skilled in the art that the functional block diagram shown in fig. 4 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the terminal device to which the present inventive arrangements are applied, and that a particular terminal device may include more or fewer components than shown, or may combine some of the components, or may have a different arrangement of components.
In one embodiment, a terminal device is provided, which includes a memory, a processor, and an image construction program stored in the memory and executable on the processor; the processor implements the following operations when executing the image construction program:
acquiring parameter information of a target object;
obtaining matched coded light adapted to the target object according to the parameter information of the target object;
and projecting the matched coded light onto the target object, and constructing a target image corresponding to the target object under the irradiation of the matched coded light.
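The three operations above can be sketched as a minimal pipeline. This is only an illustrative sketch: the class, the matching heuristic (brighter light and a steeper incidence angle for rougher or freeform surfaces), and all parameter names are assumptions for demonstration, not the patent's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class CodedLight:
    """Illustrative coded-light description: intensity, incident angle, pattern."""
    intensity: float           # relative illumination intensity in [0, 1]
    incident_angle_deg: float  # angle between the projection axis and the surface normal
    pattern: str               # light distribution shape, e.g. "stripes"

def match_coded_light(roughness: float, shape: str) -> CodedLight:
    """Toy heuristic for 'matched coded light adapted to the target object':
    rougher surfaces scatter more light, so use a higher intensity; non-planar
    objects are assumed to need a steeper incidence angle."""
    intensity = min(1.0, 0.4 + roughness)        # brighter for rough surfaces
    angle = 30.0 if shape == "planar" else 45.0  # steeper for curved/freeform objects
    return CodedLight(intensity, angle, "stripes")

def construct_target_image(light: CodedLight, n_views: int = 4) -> list[str]:
    """Stand-in for projecting the light and collecting one local image per view."""
    return [f"view{i}:{light.pattern}@{light.intensity:.2f}" for i in range(n_views)]

# Step 1: parameter information of the target object (assumed values)
roughness, shape = 0.3, "freeform"
# Step 2: obtain matched coded light
light = match_coded_light(roughness, shape)
# Step 3: project and construct the target image from local views
images = construct_target_image(light)
```

In a real system the second step would drive a programmable projector (e.g. a DMD) and the third would fuse camera frames; here both are reduced to plain data so the control flow of the three operations is visible.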
Those skilled in the art will appreciate that all or part of the above methods may be implemented by a computer program stored on a non-transitory computer-readable storage medium; when executed, the program may perform the steps of the method embodiments described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), and direct Rambus dynamic RAM (DRDRAM).
In summary, the invention discloses a three-dimensional measurement method and device based on constructing coded image projection. The method comprises: acquiring parameter information of a target object; obtaining matched coded light adapted to the target object according to the parameter information; and projecting the matched coded light onto the target object to construct a target image corresponding to the target object under the irradiation of the matched coded light. The method can improve the resolution of the target image.
Finally, it should be noted that the above embodiments are intended only to illustrate, not to limit, the technical solutions of the present invention. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (3)

1. A three-dimensional measurement method based on constructing coded image projections, the three-dimensional measurement method comprising an image construction method, characterized in that the image construction method comprises:
acquiring parameter information of a target object;
obtaining matched coded light adapted to the target object according to the parameter information of the target object, wherein the matched coded light comprises an intensity, an incidence angle and a distribution shape of the coded light matched to the target object;
projecting the matched coded light onto the target object, and constructing a target image corresponding to the target object under the irradiation of the matched coded light;
wherein the obtaining the matched coded light adapted to the target object according to the parameter information of the target object comprises the following steps:
obtaining predictive coded light corresponding to the parameter information according to the parameter information of the target object, wherein the parameter information comprises the surface roughness and the shape of the target object;
projecting the predictive coded light onto the target object, and obtaining a test response result of the target object to the predictive coded light;
obtaining a test image corresponding to the target object according to the test response result;
obtaining the matched coded light adapted to the target object according to the test image;
wherein the projecting the matched coded light onto the target object and constructing the target image corresponding to the target object under the irradiation of the matched coded light comprises the following steps:
collecting local images corresponding to the target object from various angles of the target object;
constructing a three-dimensional target depth image in the target image according to the local images, wherein the depth image contains details of the target object, and the parameter performance of the target object is detected through analysis of the depth image;
wherein the obtaining the matched coded light adapted to the target object according to the test image comprises the following steps:
obtaining pixel information corresponding to the test image according to the test image;
obtaining, according to attribute information corresponding to the predictive coded light, an illumination intensity and/or a light distribution mode in the attribute information and/or an incident angle of the predictive coded light onto the target object;
obtaining, according to the test response result, diffuse reflection light and/or specular reflection light and/or refracted light of the target object in response to the predictive coded light;
obtaining, according to the test image, a test response image formed in the test image by the diffuse reflection light and/or the specular reflection light and/or the refracted light;
adjusting, according to pixel information corresponding to the test response image, the illumination intensity and/or the light distribution mode corresponding to the predictive coded light and/or the incident angle of the predictive coded light onto the target object, until the pixel information corresponding to the test response image collected under the adjusted predictive coded light satisfies a set condition, so as to obtain the predictive coded light with the adjusted illumination intensity and/or light distribution mode and/or incident angle, wherein the set condition is that a matching condition of homonymy point information is satisfied;
and obtaining the matched coded light adapted to the target object according to the predictive coded light with the adjusted attribute information.
2. A terminal device, comprising a memory, a processor, and an image construction program stored in the memory and executable on the processor, wherein the processor implements the steps of the image construction method according to claim 1 when executing the image construction program.
3. A computer-readable storage medium, on which an image construction program is stored, which, when being executed by a processor, implements the steps of the image construction method according to claim 1.
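The iterative adjustment at the heart of claim 1 — tuning the illumination intensity, light distribution, and incident angle of the predictive coded light until the captured test-response image satisfies the set matching condition — can be sketched as a simple feedback loop. Everything below is an illustrative assumption: `capture_response` stands in for the real camera measurement, its optimum (intensity 0.6, angle 40°), the greedy coordinate search, and the 0.95 matching threshold are invented for demonstration.

```python
def capture_response(intensity: float, angle_deg: float) -> float:
    """Toy stand-in for imaging under the predictive coded light: returns the
    fraction of homonymy (corresponding) points successfully matched.
    The best response is assumed at intensity 0.6 and incident angle 40 degrees."""
    return max(0.0, 1.0 - abs(intensity - 0.6) - 0.01 * abs(angle_deg - 40.0))

def adjust_coded_light(intensity=0.2, angle_deg=20.0, target=0.95, step=0.05):
    """Greedy coordinate search: nudge each attribute while it improves the
    match score, stopping once the set condition (score >= target) holds."""
    score = capture_response(intensity, angle_deg)
    for _ in range(200):                      # cap the number of test exposures
        if score >= target:                   # set condition met: stop adjusting
            break
        # Try small moves in intensity, then in incident angle; keep the first improvement.
        for d_int, d_ang in ((step, 0.0), (-step, 0.0), (0.0, 5.0), (0.0, -5.0)):
            trial = capture_response(intensity + d_int, angle_deg + d_ang)
            if trial > score:
                intensity, angle_deg, score = intensity + d_int, angle_deg + d_ang, trial
                break
    return intensity, angle_deg, score

intensity, angle_deg, score = adjust_coded_light()
```

A real system would replace the greedy search with whatever optimization the hardware supports, and the score would come from actual correspondence matching between views; the sketch only shows the adjust-until-condition-met control flow the claim describes.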
CN202111468643.5A 2021-12-03 2021-12-03 Three-dimensional measurement method and device based on construction of coded image projection Active CN114166146B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111468643.5A CN114166146B (en) 2021-12-03 2021-12-03 Three-dimensional measurement method and device based on construction of coded image projection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111468643.5A CN114166146B (en) 2021-12-03 2021-12-03 Three-dimensional measurement method and device based on construction of coded image projection

Publications (2)

Publication Number Publication Date
CN114166146A CN114166146A (en) 2022-03-11
CN114166146B true CN114166146B (en) 2024-07-02

Family

ID=80482864

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111468643.5A Active CN114166146B (en) 2021-12-03 2021-12-03 Three-dimensional measurement method and device based on construction of coded image projection

Country Status (1)

Country Link
CN (1) CN114166146B (en)

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7929751B2 (en) * 2005-11-09 2011-04-19 Gi, Llc Method and apparatus for absolute-coordinate three-dimensional surface imaging
CN101482398B (en) * 2009-03-06 2011-03-30 北京大学 Fast three-dimensional appearance measuring method and device
DE102016122515A1 (en) * 2015-11-27 2017-06-01 Ulrich Breitmeier Testing and / or calibration standard
CN105931196B (en) * 2016-04-11 2018-10-19 天津大学 Coding aperture camera image restoration methods based on Fourier Optics modeling
US20180347967A1 (en) * 2017-06-01 2018-12-06 RGBDsense Information Technology Ltd. Method and apparatus for generating a random coding pattern for coding structured light
CN107507135B (en) * 2017-07-11 2020-04-24 天津大学 Image reconstruction method based on coding aperture and target
CN108645353B (en) * 2018-05-14 2020-09-01 四川川大智胜软件股份有限公司 Three-dimensional data acquisition system and method based on multi-frame random binary coding light field
CN109540032B (en) * 2019-01-12 2024-04-19 吉林大学 Non-contact laser detection revolution body section profile morphology error device
TW202124938A (en) * 2019-12-19 2021-07-01 國立交通大學 Scattering detection apparatus
CN111637850B (en) * 2020-05-29 2021-10-26 南京航空航天大学 Self-splicing surface point cloud measuring method without active visual marker
CN113587816B (en) * 2021-08-04 2024-07-26 天津微深联创科技有限公司 Array type large scene structured light three-dimensional scanning measurement method and device thereof

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Da Li, Chi Fai Cheung, Bo Wang, Mingyu Liu. "A study of a priori knowledge-assisted multi-scopic metrology for freeform surface measurement." 15th CIRP Conference on Computer Aided Tolerancing – CIRP CAT 2018, vol. 75, pp. 337–342. *

Also Published As

Publication number Publication date
CN114166146A (en) 2022-03-11

Similar Documents

Publication Publication Date Title
Meyer et al. Lasernet: An efficient probabilistic 3d object detector for autonomous driving
CN109737874B (en) Object size measuring method and device based on three-dimensional vision technology
CN107635129B (en) Three-dimensional trinocular camera device and depth fusion method
CN109377551B (en) Three-dimensional face reconstruction method and device and storage medium thereof
CN111091063A (en) Living body detection method, device and system
KR102122893B1 (en) System and method for autonomous crack evaluation of structure based on uav mounted-hybrid image scanning
CN110555811A (en) SAR image data enhancement method and device and storage medium
KR101624120B1 (en) System and method for illuminating pattern light of structured light for measuring 3d forming
CN113989758B (en) Anchor guide 3D target detection method and device for automatic driving
CN104079827A (en) Light field imaging automatic refocusing method
Yang et al. S $^ 3$-NeRF: Neural reflectance field from shading and shadow under a single viewpoint
Hu et al. Deep-learning assisted high-resolution binocular stereo depth reconstruction
CN110264527A (en) Real-time binocular stereo vision output method based on ZYNQ
CN111160233B (en) Human face in-vivo detection method, medium and system based on three-dimensional imaging assistance
CN114923665A (en) Image reconstruction method and image reconstruction test system for wave three-dimensional height field
CN110363734B (en) Thick sample microscopic fluorescence image reconstruction method and system
CN116543247A (en) Data set manufacturing method and verification system based on photometric stereo surface reconstruction
CN114166146B (en) Three-dimensional measurement method and device based on construction of coded image projection
JP5336325B2 (en) Image processing method
CN116977341A (en) Dimension measurement method and related device
CN114140659A (en) Social distance monitoring method based on human body detection under view angle of unmanned aerial vehicle
Pintus et al. Practical free-form RTI acquisition with local spot lights
CN117994504B (en) Target detection method and target detection device
CN117422750B (en) Scene distance real-time sensing method and device, electronic equipment and storage medium
CN118485702B (en) High-precision binocular vision ranging method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant