CN114820921A - Three-dimensional imaging method, device and equipment for dynamic object and storage medium - Google Patents

Three-dimensional imaging method, device and equipment for dynamic object and storage medium

Info

Publication number
CN114820921A
Authority
CN
China
Prior art keywords
pixel
dynamic object
feature extraction
pixel points
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210270991.XA
Other languages
Chinese (zh)
Inventor
李东
陈铭鑫
田劲东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen University
Original Assignee
Shenzhen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen University filed Critical Shenzhen University
Priority to CN202210270991.XA priority Critical patent/CN114820921A/en
Publication of CN114820921A publication Critical patent/CN114820921A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 - Indexing scheme for image data processing or generation, in general
    • G06T2200/08 - Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The application provides a three-dimensional imaging method, apparatus, device and storage medium for a dynamic object. The method includes: performing fringe projection on the dynamic object according to two preset fringe frequencies and shooting fringe images; inputting the fringe images into a preset image feature extraction model for feature extraction processing to respectively obtain a pixel real part distribution map and a pixel imaginary part distribution map of the dynamic object; performing phase calculation on the pixel points of the dynamic object according to the pixel real part distribution map and the pixel imaginary part distribution map to obtain a folded phase map of the dynamic object, and performing phase unfolding processing on the folded phase map to obtain an absolute phase map of the dynamic object; and performing binocular matching and depth calculation on all pixel points of the dynamic object according to the absolute phase map to obtain three-dimensional coordinate values corresponding to the pixel points, and constructing a three-dimensional image of the dynamic object according to the three-dimensional coordinate values corresponding to the pixel points. The method requires only a small number of fringe patterns and achieves high-precision three-dimensional imaging of the dynamic object.

Description

Three-dimensional imaging method, device and equipment for dynamic object and storage medium
Technical Field
The present disclosure relates to the field of optical three-dimensional measurement technologies, and in particular, to a method, an apparatus, a device, and a storage medium for three-dimensional imaging of a dynamic object.
Background
Fringe Projection Profilometry (FPP) is widely applied in the field of object three-dimensional measurement owing to its active, non-contact measurement characteristics. Fringe analysis methods for static objects have been studied in depth and widely applied, and with the development of high-speed projection and camera technologies, three-dimensional surface shape measurement of dynamic objects has also become a research hotspot. In existing three-dimensional imaging methods, to ensure high-precision three-dimensional measurement of an object, a phase shift method combined with temporal phase unwrapping is generally adopted. However, this approach needs to project multiple frames of fringe patterns with different phase shift amounts, and the phase unwrapping also requires encoding multiple fringe frequencies; the processing is therefore time-consuming, and although the precision is high, it cannot meet the requirements of dynamic objects in high-speed changing scenes.
Disclosure of Invention
In view of this, embodiments of the present application provide a method, an apparatus, a device, and a storage medium for three-dimensional imaging of a dynamic object, which can implement high-precision dynamic three-dimensional imaging of the object by projecting two frames of fringe patterns, so that the number of fringe patterns required in the implementation process of three-dimensional imaging is small and the precision of three-dimensional imaging of the dynamic object is high.
A first aspect of an embodiment of the present application provides a method for three-dimensional imaging of a dynamic object, including:
performing fringe projection on the dynamic object according to two preset fringe frequencies and shooting a fringe pattern;
inputting the fringe pattern into a preset image feature extraction model for feature extraction processing, and respectively obtaining a pixel real part distribution map and a pixel imaginary part distribution map of the dynamic object;
according to the pixel real part distribution map and the pixel imaginary part distribution map, performing phase calculation on the pixels of the dynamic object to obtain a folded phase map of the dynamic object, and performing phase unfolding processing on the folded phase map to obtain an absolute phase map of the dynamic object;
and performing binocular matching and depth calculation on all pixel points of the dynamic object according to the absolute phase diagram to obtain three-dimensional coordinate values corresponding to the pixel points, and constructing a three-dimensional image of the dynamic object according to the three-dimensional coordinate values corresponding to the pixel points.
With reference to the first aspect, in a first possible implementation manner of the first aspect, before the step of inputting the fringe pattern into a preset image feature extraction model for feature extraction processing, and obtaining a pixel real part distribution map and a pixel imaginary part distribution map of the dynamic object respectively, the method further includes:
and determining a denoising threshold value of the fringe image according to the background intensity distribution in the fringe image, and denoising the fringe image according to the denoising threshold value.
With reference to the first aspect, in a second possible implementation manner of the first aspect, before the step of inputting the fringe pattern into a preset image feature extraction model for feature extraction processing, and obtaining a pixel real part distribution map and a pixel imaginary part distribution map of the dynamic object respectively, the method further includes:
collecting object fringe pattern samples under a plurality of different frequencies by adopting a pre-established sine fringe projection measuring system;
calculating real part data values and imaginary part data values of all pixel points representing the object in the object fringe pattern samples by adopting a phase shift method;
and taking the object fringe pattern samples as model input and the real part data values and imaginary part data values of all pixel points representing the object in the object fringe pattern samples as model output, training a preset convolutional neural network model to a convergence state, and generating an image feature extraction model.
With reference to the second possible implementation manner of the first aspect, in a third possible implementation manner of the first aspect, the training of a preset convolutional neural network model to a convergence state by taking the object fringe pattern samples as model input and the real part data values and imaginary part data values of all pixel points representing the object in the object fringe pattern samples as model output, and generating an image feature extraction model, includes:
randomly distributing all collected object stripe patterns according to a preset distribution proportion to obtain a training sample set and a verification sample set;
for a first object fringe pattern sample distributed to the training sample set, training the preset convolutional neural network model by taking the first object fringe pattern sample as model input and taking real part data values and imaginary part data values of all pixel points representing an object in the first object fringe pattern sample as model output, and obtaining the trained convolutional neural network model;
inputting the second object fringe pattern sample to the trained convolutional neural network model for feature extraction processing aiming at the second object fringe pattern sample distributed to the verification sample set, and acquiring first real part data values and first imaginary part data values of all pixel points representing the object and output by the trained convolutional neural network model;
comparing the similarity between the first real part data values and first imaginary part data values of all pixel points representing the object output by the trained convolutional neural network model and the real part data values and imaginary part data values of all pixel points representing the object obtained by calculating the second object fringe pattern sample through the phase shift method, to obtain a similarity value;
and comparing the similarity value with the similarity value of the previous iteration training, if the increase amplitude of the similarity is smaller than a preset threshold value, judging that the trained convolutional neural network model is trained to a convergence state, and generating the trained convolutional neural network model into an image feature extraction model.
With reference to the third possible implementation manner of the first aspect, in a fourth possible implementation manner of the first aspect, after the step of comparing the similarity value with the similarity value of the previous iteration of training and, if the increase in similarity is smaller than the preset threshold, determining that the trained convolutional neural network model has been trained to a convergence state and generating the trained convolutional neural network model as an image feature extraction model, the method further includes:
constructing a first loss function of the image feature extraction model according to real part data values and imaginary part data values of all pixel points representing the object output by the image feature extraction model and real part data values and imaginary part data values of all pixel points representing the object, which are obtained by calculating the object fringe pattern through a phase shift method;
obtaining the modulation distribution of the object fringe pattern sample, and constructing a second loss function of the image feature extraction model;
and performing weighted integration on the first loss function and the second loss function to generate a total loss function of the image feature extraction model, and performing model optimization training on the image feature extraction model by using the total loss function.
With reference to the second possible implementation manner of the first aspect, in a fifth possible implementation manner of the first aspect, in the step of training a preset convolutional neural network model to a convergence state by taking the object fringe pattern samples as model input and the real part data values and imaginary part data values of all pixel points representing the object in the object fringe pattern samples as model output, and generating an image feature extraction model, the preset convolutional neural network model includes a four-layer U-Net convolutional layer structure, where each U-Net convolutional layer is configured with a residual dense network block, and the residual dense network block is used for implementing the feature extraction processing operations.
With reference to the first aspect, in a sixth possible implementation manner of the first aspect, the residual dense network block includes a plurality of convolutional layers, where each convolutional layer is provided with a corresponding expansion rate.
A second aspect of embodiments of the present application provides a three-dimensional imaging apparatus of a dynamic object, including:
the image shooting module is used for performing fringe projection on the dynamic object according to two preset fringe frequencies and shooting fringe images;
the characteristic extraction module is used for inputting the fringe pattern into a preset image characteristic extraction model for characteristic extraction processing to respectively obtain a pixel real part distribution map and a pixel imaginary part distribution map of the dynamic object;
the phase image acquisition module is used for performing phase calculation on the pixel points of the dynamic object according to the pixel point real part distribution map and the pixel point imaginary part distribution map to obtain a folded phase image of the dynamic object, and performing phase unfolding processing on the folded phase image to obtain an absolute phase image of the dynamic object;
and the three-dimensional imaging module is used for performing binocular matching and depth calculation on all pixel points of the dynamic object according to the absolute phase diagram to obtain three-dimensional coordinate values corresponding to the pixel points so as to construct a three-dimensional image of the dynamic object according to the three-dimensional coordinate values corresponding to the pixel points.
A third aspect of embodiments of the present application provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method for three-dimensional imaging of a dynamic object provided in the first aspect when executing the computer program.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method for three-dimensional imaging of a dynamic object provided in the first aspect.
The three-dimensional imaging method and device, the electronic device and the storage medium for the dynamic object have the following beneficial effects:
the method comprises the steps of obtaining a fringe pattern of the dynamic object according to two preset fringe frequencies, inputting the obtained fringe pattern into an image feature extraction model trained in advance through a convolutional neural network for feature extraction processing, and obtaining a pixel real part distribution map and a pixel imaginary part distribution map of the dynamic object respectively through neural network learning. And performing phase calculation on the pixels of the dynamic object according to the real part distribution diagram and the virtual part distribution diagram of the pixels to obtain a folded phase diagram of the dynamic object, and performing phase unfolding processing on the folded phase diagram to obtain an absolute phase diagram of the dynamic object. And performing binocular matching and depth calculation processing on all pixel points of the dynamic object according to the absolute phase diagram to obtain three-dimensional coordinate values corresponding to the pixel points, and constructing and obtaining a three-dimensional image of the dynamic object through the three-dimensional coordinate values corresponding to the pixel points. According to the method, the analysis precision of a single-frame fringe frequency spectrum is improved by utilizing neural network training, and meanwhile, geometric constraint and virtual plane position presetting are performed by combining with the device structure of a sine fringe projection measurement system, so that double-frequency phase expansion is realized, and high-precision three-dimensional imaging of a dynamic object can be performed only by two frames of fringe images.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a flowchart of an implementation of a method for three-dimensional imaging of a dynamic object according to an embodiment of the present application;
fig. 2 is a flowchart of a first implementation of training an image feature extraction model in a three-dimensional imaging method of a dynamic object according to an embodiment of the present disclosure;
fig. 3 is a flowchart of a second implementation of training an image feature extraction model in a three-dimensional imaging method of a dynamic object according to an embodiment of the present disclosure;
fig. 4 is a flowchart of an implementation of model optimization training on an image feature extraction model in the three-dimensional imaging method of a dynamic object according to the embodiment of the present disclosure;
fig. 5 is a convolutional neural network structure diagram of an image feature extraction model in the three-dimensional imaging method of a dynamic object according to the embodiment of the present application;
fig. 6 is a block diagram of an infrastructure of a three-dimensional imaging apparatus for a dynamic object according to an embodiment of the present application;
fig. 7 is a block diagram of a basic structure of an electronic device according to an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Referring to fig. 1, fig. 1 is a flowchart illustrating an implementation of a three-dimensional imaging method for a dynamic object according to an embodiment of the present disclosure. The details are as follows:
s11: and carrying out fringe projection on the dynamic object according to two preset fringe frequencies and shooting a fringe image.
In this embodiment, a pre-established sinusoidal fringe projection measurement system with dual cameras and a single projector is used to perform fringe projection and image shooting on the dynamic object, so as to obtain fringe patterns of the dynamic object. Specifically, in the pre-established sinusoidal fringe projection measurement system, a plurality of selectable sinusoidal projection fringe frequencies are preset in combination with the system structure of the dual cameras and the single projector and the phase demodulation workflow required by the subsequent three-dimensional imaging of the dynamic object. The sinusoidal fringe projection measurement system can generate fringe sequences with different pixel periods from the different frequencies and load them into the projector, and the fringe projection actions of the projector and the shooting actions of the cameras for different scenes are triggered synchronously. In this embodiment, when three-dimensional imaging is performed on a dynamic object, the user may perform fringe projection and image capturing on the dynamic object at two frequencies preset in the sinusoidal fringe projection measurement system, so as to obtain two frames of fringe images of the dynamic object.
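As a rough illustration only, the projector fringe sequences might be generated as sketched below; the projector width of 1280 pixels, the two frequencies 16 and 64, and the 10-step sequence follow example values given later in this description, while the image height, 8-bit quantization, and the use of NumPy are assumptions.

```python
import numpy as np

def make_fringe_sequence(width=1280, height=800, frequency=64, n_steps=10):
    """Generate an n_steps phase-shifted sinusoidal fringe sequence.

    `frequency` is the number of fringe periods across the projector width,
    so the pixel period is width / frequency (e.g. 1280 / 64 = 20 pixels).
    """
    x = np.arange(width)
    patterns = []
    for n in range(n_steps):
        delta_n = 2 * np.pi * n / n_steps  # phase shift of the n-th frame
        row = 127.5 + 127.5 * np.cos(2 * np.pi * frequency * x / width + delta_n)
        patterns.append(np.tile(row, (height, 1)).astype(np.uint8))
    return patterns

# Two fringe frequencies matching the dual-frequency example used below
# (frequencies 16 and 64, i.e. pixel periods 80 and 20 on a 1280-pixel projector).
low_freq_sequence = make_fringe_sequence(frequency=16)
high_freq_sequence = make_fringe_sequence(frequency=64)
```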
S12: inputting the fringe pattern into a preset image feature extraction model for feature extraction processing, and respectively obtaining a pixel real part distribution map and a pixel imaginary part distribution map of the dynamic object.
In this embodiment, the preset image feature extraction model is a convolutional neural network model constructed by using a U-Net network structure and a residual dense network block, the convolutional neural network model performs model training by using a large number of fringe patterns of objects photographed at different frequencies as training samples and trains to a convergence state to obtain the capability of extracting the real part and the imaginary part of the pixel point of the first-order frequency spectrum of the single-frame fringe pattern, and the convolutional neural network model trained to the convergence state is an image feature extraction model for performing feature extraction processing. In this embodiment, after obtaining the fringe pattern of the dynamic object, the fringe pattern of the dynamic object is input into the image feature extraction model to perform feature extraction processing, a pixel real part distribution map of the dynamic object is extracted from the fringe pattern of the dynamic object by using a channel for extracting a pixel real part of a single-frame fringe pattern first-order frequency spectrum in the image feature extraction model, and a pixel imaginary part distribution map of the dynamic object is extracted from the fringe pattern of the dynamic object by using a channel for extracting a pixel imaginary part of the single-frame fringe pattern first-order frequency spectrum in the image feature extraction model.
S13: and performing phase calculation on the pixel points of the dynamic object according to the pixel point real part distribution diagram and the pixel point imaginary part distribution diagram to obtain a folded phase diagram of the dynamic object, and performing phase unfolding processing on the folded phase diagram to obtain an absolute phase diagram of the dynamic object.
In this embodiment, the dynamic object is represented by a combination of a plurality of pixels in the fringe pattern, the real part distribution diagram of the pixels of the dynamic object includes real part item data values for representing all the pixels of the dynamic object, and the virtual part distribution diagram of the pixels of the dynamic object includes virtual part item data values for representing all the pixels of the dynamic object. And aiming at each pixel point, performing phase calculation by adopting an arc tangent operation method according to the real part item data value and the imaginary part item data value of the pixel point to obtain the folding phase of the pixel point, wherein the phase values of all the pixel points of the dynamic object are folded between (-pi, pi) through the arc tangent operation, so that the folding phase diagram of the dynamic object can be obtained by drawing the folding phases of all the pixel points of the dynamic object into one diagram. Specifically, the formula for calculating the folding phase of the pixel point by using the arc tangent operation method is as follows:
θ = arctan(b / a)
wherein, θ represents the folding phase value of the pixel, a represents the real part data value of the pixel, and b represents the imaginary part data value of the pixel.
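A minimal sketch of this per-pixel phase calculation is given below, assuming the real-part and imaginary-part distribution maps are available as NumPy arrays; the four-quadrant arctangent is used so that the result falls within (-π, π], consistent with the folding range described above.

```python
import numpy as np

def folded_phase(real_map: np.ndarray, imag_map: np.ndarray) -> np.ndarray:
    """Compute the folded (wrapped) phase map theta = arctan(b / a) per pixel,
    where `real_map` holds a (real part) and `imag_map` holds b (imaginary part)."""
    return np.arctan2(imag_map, real_map)  # values wrapped to (-pi, pi]
```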
In this embodiment, after the folded phase map is obtained, the geometric constraint relationship and the virtual plane position of the view are determined by the structure of the dual cameras and the single projector in the sinusoidal fringe projection measurement system, and the folded phase map is unfolded according to the geometric constraint relationship and the virtual plane position to obtain the absolute phase map of the dynamic object. For example, in this embodiment, because the phase values of all pixel points of the dynamic object in the folded phase map are folded between (-π, π) and some pixel points may exhibit phase jumps in the folded phase map, a measurement plane with a known depth may be defined; according to the geometric constraint relationship, the pixel points of this plane imaged on the camera can be mapped to a region on the projector sensor plane, and the absolute phase values of the virtual plane can be obtained. The phase map of this virtual plane can be used for absolute phase unwrapping of the low-frequency folded phase map: the fringe order of a pixel point is determined through the relationship between the current pixel point of the folded phase map and the pixel point at the corresponding position on the virtual-plane absolute phase map, and the folded phase value of the current pixel point is increased by the product of the fringe order and 2π. All pixel points in the folded phase map are traversed in this manner, so that the unfolding of the low-frequency folded phase map can be achieved. Specifically, the calculation formula for determining the fringe order is:
k = ceil[(Φ_min - φ) / (2π)]
wherein k is the fringe order, Φ_min is the minimum phase map of the virtual plane, φ is the folded phase map, and ceil[·] is the ceiling operation.
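The following is a brief sketch of this unwrapping step, assuming the virtual-plane minimum phase map and the low-frequency folded phase map are given as NumPy arrays of the same size (a hypothetical helper for illustration, not code from the patent):

```python
import numpy as np

def unwrap_with_virtual_plane(phi_folded: np.ndarray, phi_min: np.ndarray) -> np.ndarray:
    """Unwrap a low-frequency folded phase map using the absolute phase map of a
    virtual plane at a known depth: the fringe order is
    k = ceil((phi_min - phi) / (2*pi)) and the absolute phase is phi + 2*pi*k."""
    k = np.ceil((phi_min - phi_folded) / (2 * np.pi))
    return phi_folded + 2 * np.pi * k
```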
For the high-frequency fringe pattern, the fringe order of each pixel point is determined through the relationship between the current pixel point of its folded phase map and the pixel point at the corresponding position on the low-frequency absolute phase map, the folded phase value of the current pixel point is increased by the product of the fringe order and 2π, and all pixel points in the folded phase map are traversed in this manner, so that the unfolding of the high-frequency folded phase map is realized. Specifically, the calculation formula for determining the fringe order is:
k = round[((t_2 / t_1) · φ_u - φ_w) / (2π)]
wherein t_1 and t_2 represent the fringe periods of the two fringe sequences, φ_w and φ_u respectively represent the high-frequency folded phase and the low-frequency unfolded phase, and round[·] denotes rounding to the nearest integer. In this manner, the unfolding of the folded phase map can be realized and the corresponding absolute phase map obtained.
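A sketch of this dual-frequency unwrapping step follows; the assignment of the high-frequency and low-frequency periods, and the example values 20 and 80, are assumptions consistent with the frequencies 16 and 64 mentioned elsewhere in this description:

```python
import numpy as np

def unwrap_high_frequency(phi_w: np.ndarray, phi_u: np.ndarray,
                          t_high: float = 20.0, t_low: float = 80.0) -> np.ndarray:
    """Unwrap the high-frequency folded phase `phi_w` using the already unwrapped
    low-frequency phase `phi_u`. The low-frequency phase scaled by the period
    ratio predicts the high-frequency absolute phase, from which the fringe
    order follows."""
    k = np.round(((t_low / t_high) * phi_u - phi_w) / (2 * np.pi))
    return phi_w + 2 * np.pi * k
```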
S14: and performing binocular matching and depth calculation on all pixel points of the dynamic object according to the absolute phase diagram to obtain three-dimensional coordinate values corresponding to the pixel points, and constructing a three-dimensional image of the dynamic object according to the three-dimensional coordinate values corresponding to the pixel points.
In this embodiment, the absolute phase map includes the absolute phase data values of all pixel points of the dynamic object, and the absolute phase value corresponding to each pixel point can be obtained from the absolute phase map. Illustratively, for all pixel points in the absolute phase map, binocular matching is performed and a disparity value and a depth value are calculated for each successfully matched pixel point, so that the three-dimensional positional relationship between the pixel points is determined. A three-dimensional space coordinate system is then constructed in combination with the calibration parameters of the sinusoidal fringe projection measurement system used when the fringe patterns of the dynamic object were acquired, where the calibration parameters include the calibration parameters of the dual cameras and the calibration parameters of the projector. Based on this three-dimensional coordinate system, the three-dimensional coordinate value corresponding to each pixel point is obtained, the three-dimensional coordinate values corresponding to all pixel points are plotted in the three-dimensional space to form a three-dimensional point cloud, and a three-dimensional image of the dynamic object is constructed from the three-dimensional point cloud. For example, in this embodiment, the binocular matching process is based on a pinhole imaging model: the projection points of an object point on the imaging planes of the two cameras of the binocular camera, together with the projection centers of the two cameras, are acquired; after the cameras are calibrated, a ray equation is established from the projection point of the object point on the imaging plane of one camera and the projection center of that camera, and another ray equation is established from the projection point of the object point on the imaging plane of the other camera and the projection center of the other camera. The two ray equations correspond to two straight lines; solving the two ray equations jointly yields the intersection point of the two straight lines, and the three-dimensional coordinate value of this intersection point is taken as the three-dimensional coordinate value of the object point.
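As an illustration, a common least-squares formulation of this joint solution of the two back-projected rays (the direct linear transform) is sketched below, assuming both cameras have been calibrated to 3×4 projection matrices; it is an equivalent alternative to explicitly intersecting the two ray equations, not a verbatim transcription of the procedure above.

```python
import numpy as np

def triangulate_pixel(pt1, pt2, P1, P2):
    """Triangulate one matched pixel pair.

    pt1, pt2 : (u, v) pixel coordinates of the same object point in camera 1 and 2.
    P1, P2   : 3x4 projection matrices of the calibrated cameras.
    Returns the 3D point that best satisfies both back-projected rays in the
    least-squares sense (homogeneous DLT triangulation).
    """
    u1, v1 = pt1
    u2, v2 = pt2
    A = np.stack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```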
As can be seen from the above, the three-dimensional imaging method for a dynamic object provided in this embodiment first obtains fringe patterns of the dynamic object according to two preset fringe frequencies, inputs the obtained fringe patterns into an image feature extraction model trained in advance through a convolutional neural network for feature extraction processing, and obtains, through neural network learning, a pixel real part distribution map and a pixel imaginary part distribution map of the dynamic object respectively. Then, according to the pixel real part distribution map and the pixel imaginary part distribution map, phase calculation is performed on the pixel points of the dynamic object to obtain a folded phase map of the dynamic object, and phase unfolding processing is performed on the folded phase map to obtain an absolute phase map of the dynamic object. Finally, binocular matching and depth calculation are performed on all pixel points of the dynamic object according to the absolute phase map to obtain three-dimensional coordinate values corresponding to the pixel points, and a three-dimensional image of the dynamic object is constructed from the three-dimensional coordinate values corresponding to the pixel points. The method improves the analysis precision of a single-frame fringe spectrum by means of neural network training, and at the same time performs geometric constraint and virtual plane position presetting in combination with the device structure of the sinusoidal fringe projection measurement system, thereby realizing dual-frequency phase unwrapping; high-precision three-dimensional imaging of a dynamic object can thus be performed with only two frames of fringe images.
In some embodiments of the present application, the background intensity distribution may be represented as a mean value of pixel color values presented by all background pixel points in the fringe image. In this embodiment, the calculation formula of the background intensity distribution may be set as:
A = (1 / N) · Σ_{n=1}^{N} I_n
wherein A denotes the background intensity distribution, I_n denotes the intensity distribution of the n-th deformed fringe pattern with the same frequency and different phase shifts, and N is the number of phase shift steps.
In this embodiment, a denoising threshold for denoising the fringe pattern may be determined according to the background intensity distribution in the fringe pattern; for example, the denoising threshold may be set as the mean of the pixel color values presented by all background pixel points in the fringe pattern, so as to remove the background pixel points from the fringe pattern. After the fringe pattern of the dynamic object is obtained, the pixel color values of all pixels in the fringe pattern are obtained by scanning the fringe pattern; based on the difference between the pixel color values reflected by the target object and those reflected by the background, the pixel points representing the background and the pixel points representing the target object can be separated in the fringe pattern. The fringe pattern can then be denoised according to the denoising threshold, and the pixel points meeting the denoising threshold condition can be eliminated, so that the pixel points representing the background are removed as noise and only the pixel points representing the dynamic object are retained in the fringe pattern. It can be understood that, in this embodiment, the pixel color values reflected by the target object generally lie within a certain color value interval and differ sharply from the pixel color values of the background; pixel points within the object region whose values jump outside this interval can be regarded as noise points, so a denoising threshold can also be set based on the color value interval of the object pixels obtained by scanning, in order to remove noise points within the pixel region representing the dynamic object, for example overexposed pixel points in the fringe pattern.
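A minimal sketch of this pre-processing step is given below; the saturation value and the decision to zero out rejected pixels are assumptions used only for illustration:

```python
import numpy as np

def denoise_fringe(fringe: np.ndarray, background_mean: float,
                   saturation: int = 250) -> np.ndarray:
    """Suppress background and overexposed pixels in a fringe image.

    Pixels at or below the estimated background mean are treated as background
    noise; pixels at or above `saturation` are treated as overexposed noise.
    Both are zeroed out, keeping only pixels that plausibly represent the object.
    """
    fringe = fringe.astype(np.float64)
    keep = (fringe > background_mean) & (fringe < saturation)
    return np.where(keep, fringe, 0.0)
```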
In some embodiments of the present application, please refer to fig. 2, and fig. 2 is a flowchart illustrating a first implementation of training an image feature extraction model in a three-dimensional imaging method of a dynamic object according to an embodiment of the present application. The details are as follows:
s21: and collecting object fringe pattern samples at a plurality of different frequencies by adopting a pre-built sine fringe projection measuring system.
In this embodiment, a plurality of different sinusoidal fringe projection frequencies are set in the pre-established sinusoidal fringe projection measurement system. For example, assuming the horizontal resolution of the projector in the sinusoidal fringe projection measurement system is 1280 pixels, the fringe frequencies used by the dual-frequency phase unwrapping workflow required for phase demodulation in the subsequent three-dimensional imaging process are 16 and 64 respectively, that is, the fringe pixel periods are 80 and 20 respectively. To ensure the accuracy of phase demodulation, the pixel period of the coding fringes in the phase demodulation process should be an integral multiple of the number of phase shift steps; taking the common factors of the two pixel periods into account together with the phase shift requirement, the number of phase shift steps can be selected as 10 or 20. In this embodiment, the pre-established sinusoidal fringe projection measurement system is used to perform fringe projection and shooting on a plurality of different dynamic objects at each built-in frequency, so that a large number of object fringe patterns at a plurality of different frequencies can be collected, and all the collected object fringe patterns are used as object fringe pattern samples.
S22: and calculating real part data values and imaginary part data values of all pixel points representing the object in the object fringe pattern book by adopting a phase shift method.
In this embodiment, after the object fringe pattern samples are obtained, for each object fringe pattern sample, a phase shift method is used to calculate the real part data values and imaginary part data values of all pixel points representing the object in the object fringe pattern sample. In this embodiment, harmonic error is suppressed through the choice of the number of phase shift steps: the Nth-order harmonic error can be suppressed by an (N+2)-step phase shift method, which is used to calculate the real part data values and imaginary part data values of all pixel points representing the object in the object fringe pattern sample. Specifically, the collected object fringe pattern samples are processed by a ten-step phase shift method, wherein the calculation formulas may be set as:
Re = (2 / N) · Σ_{n=1}^{N} I_n · cos(δ_n) = B · cos(φ)
Im = (2 / N) · Σ_{n=1}^{N} I_n · sin(δ_n) = B · sin(φ)
wherein Re denotes the real part data values, Im denotes the imaginary part data values, I_n denotes the n-th phase-shifted fringe pattern, B denotes the modulation distribution of the object fringe pattern, φ denotes the phase of the object fringe pattern, and δ_n denotes the phase shift amount.
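For illustration, these ground-truth quantities might be computed from an N-step phase-shifted sequence as sketched below; the equal spacing of the phase shifts and the exact scaling and sign conventions are assumptions, since they depend on the fringe model used.

```python
import numpy as np

def phase_shift_ground_truth(frames):
    """Compute real part, imaginary part and modulation maps from an N-step
    phase-shifted fringe sequence (e.g. N = 10), assuming equally spaced
    phase shifts delta_n = 2*pi*n/N."""
    frames = np.asarray(frames, dtype=np.float64)   # shape (N, H, W)
    N = frames.shape[0]
    deltas = 2 * np.pi * np.arange(N) / N
    re = (2.0 / N) * np.tensordot(np.cos(deltas), frames, axes=(0, 0))  # ~ B*cos(phi)
    im = (2.0 / N) * np.tensordot(np.sin(deltas), frames, axes=(0, 0))  # ~ B*sin(phi)
    modulation = np.sqrt(re ** 2 + im ** 2)                             # B
    return re, im, modulation
```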
S23: and taking the object stripe pattern book as model input, and taking real part data values and pixel imaginary part data values of all pixel points representing the object in the object stripe pattern book as models to output and train a preset convolutional neural network model to a convergence state, thereby generating an image feature extraction model.
In this embodiment, a preset convolutional neural network model is trained by using an object fringe pattern sample as a model input and using real part data values and imaginary part data values of all pixels representing an object in the object fringe pattern sample calculated by a phase shift method as a model output until the convolutional neural network model is trained to a convergence state, and the convolutional neural network model trained to the convergence state is generated as an image feature extraction model.
In some embodiments of the present application, please refer to fig. 3, and fig. 3 is a flowchart illustrating a second implementation of training an image feature extraction model in a three-dimensional imaging method of a dynamic object according to an embodiment of the present application. The details are as follows:
s31: randomly distributing all collected object stripe patterns according to a preset distribution proportion to obtain a training sample set and a verification sample set;
s32: for a first object fringe pattern sample distributed to the training sample set, training the preset convolutional neural network model by taking the first object fringe pattern sample as model input and taking real part data values and imaginary part data values of all pixel points representing an object in the first object fringe pattern sample as model output, and obtaining the trained convolutional neural network model;
s33: inputting the second object fringe pattern sample to the trained convolutional neural network model for feature extraction processing aiming at the second object fringe pattern sample distributed to the verification sample set, and acquiring first real part data values and first imaginary part data values of all pixel points representing the object and output by the trained convolutional neural network model;
s34: comparing the similarity between the first real part data value and the first imaginary part data value of all pixel points of the representation object output by the trained convolutional neural network model and the real part data value and the imaginary part data value of all pixel points of the representation object obtained by calculating the second object fringe pattern through a phase shift method to obtain a similarity value;
s35: and comparing the similarity value with the similarity value of the previous iteration training, if the increase amplitude of the similarity is smaller than a preset threshold value, judging that the trained convolutional neural network model is trained to a convergence state, and generating the trained convolutional neural network model into an image feature extraction model.
In this embodiment, all collected object fringe pattern samples may be randomly distributed according to an allocation ratio of 8:2 to obtain the training sample set and the verification sample set of the convolutional neural network model. Illustratively, the object fringe pattern samples assigned to the training sample set are referred to as first object fringe pattern samples, and the object fringe pattern samples assigned to the verification sample set are referred to as second object fringe pattern samples. For a first object fringe pattern sample allocated to the training sample set, the first object fringe pattern sample is used as model input, and the real part data values and imaginary part data values of all pixel points representing the object in the first object fringe pattern sample, calculated in advance by the phase shift method, are used as model output to train the convolutional neural network model and obtain the trained convolutional neural network model. After the trained convolutional neural network model is obtained, convergence verification is performed on the model using the second object fringe pattern samples allocated to the verification sample set. Specifically, a second object fringe pattern sample in the verification sample set is input into the trained convolutional neural network model for feature extraction processing, and the first real part data values and first imaginary part data values of all pixel points representing the object output by the trained convolutional neural network model are obtained. The first real part data values and first imaginary part data values output by the trained convolutional neural network model serve as the prediction result of the model, while the real part data values and imaginary part data values of all pixel points representing the object obtained from the second object fringe pattern sample by phase shift calculation serve as the expected result. The similarity between the prediction result and the expected result is then obtained by comparing the first real part data values and first imaginary part data values output by the trained convolutional neural network model with the real part data values and imaginary part data values obtained by the phase shift method. Finally, the similarity value obtained by this comparison is compared with the similarity value of the previous iteration of training; if the increase in similarity is smaller than the preset threshold, indicating that the prediction result is already close to the expected result and is no longer improving significantly, the trained convolutional neural network model is judged to have been trained to a convergence state, and the trained convolutional neural network model is generated as the image feature extraction model.
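The sample split and the convergence test described above might look roughly as follows; the random seed and the numerical threshold are assumptions for illustration only:

```python
import random

def split_samples(samples, train_ratio=0.8, seed=0):
    """Randomly split the collected fringe-pattern samples into a training set
    and a verification set at the 8:2 ratio used in this embodiment."""
    samples = list(samples)
    random.Random(seed).shuffle(samples)
    cut = int(train_ratio * len(samples))
    return samples[:cut], samples[cut:]

def has_converged(similarity_history, min_increase=1e-4):
    """Convergence test: training is considered converged when the similarity on
    the verification set improves by less than `min_increase` compared with the
    previous iteration."""
    if len(similarity_history) < 2:
        return False
    return (similarity_history[-1] - similarity_history[-2]) < min_increase
```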
In some embodiments of the present application, please refer to fig. 4, and fig. 4 is a flowchart illustrating an implementation of a model optimization training for an image feature extraction model in a three-dimensional imaging method of a dynamic object according to an embodiment of the present application. The details are as follows:
s41: and constructing a first loss function of the image feature extraction model according to the real part data values and the imaginary part data values of all the pixel points representing the object output by the image feature extraction model and the real part data values and the imaginary part data values of all the pixel points representing the object obtained by calculating the object fringe pattern through a phase shift method.
In this embodiment, the calculation formula of the first loss function is set as:
L_1 = ||Im_output - Im_gt||_1 + ||Re_output - Re_gt||_1
wherein Re_output denotes the real part data values of all pixel points representing the object output by the trained convolutional neural network model, Im_output denotes the imaginary part data values of all pixel points representing the object output by the trained convolutional neural network model, Re_gt denotes the real part data values of all pixel points representing the object obtained from the object fringe pattern samples by the phase shift method, and Im_gt denotes the imaginary part data values of all pixel points representing the object obtained from the object fringe pattern samples by the phase shift method.
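A direct rendering of this data-fidelity term is sketched below, assuming the predicted and ground-truth maps are NumPy arrays; whether the L1 norm is summed or averaged over pixels is an implementation detail assumed here.

```python
import numpy as np

def first_loss(im_out, re_out, im_gt, re_gt):
    """L1 = ||Im_output - Im_gt||_1 + ||Re_output - Re_gt||_1."""
    return np.abs(im_out - im_gt).sum() + np.abs(re_out - re_gt).sum()
```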
S42: and obtaining the modulation distribution of the object in the object fringe pattern sample, and constructing a second loss function of the image feature extraction model.
In this embodiment, the calculation formula of the second loss function is set as:
L_feat = Σ_{i=1}^{5} λ_i · ||φ_i(B̂) - φ_i(B)||_1
wherein the sum runs over 5 intermediate layers selected from the neural network of the image feature extraction model, λ_i denotes the weight parameter of the i-th intermediate layer, φ_i denotes the output of the i-th intermediate layer, B̂ denotes the modulation distribution of the object calculated from the real part data values and imaginary part data values of all pixel points of the object output by the image feature extraction model, and B denotes the modulation distribution of the object calculated from the real part data values and imaginary part data values of all pixel points of the object obtained by the phase shift method, wherein the calculation formula of the modulation distribution of the object is
B = sqrt(Re² + Im²)
S43: and performing weighted integration on the first loss function and the second loss function to generate a total loss function of the image feature extraction model, and performing model optimization training on the image feature extraction model by using the total loss function.
In this embodiment, the total loss function is expressed as:
Loss = L_1 + λ · L_feat
where λ is represented as a weight parameter.
In this embodiment, after the total loss function is obtained, model optimization training is performed on the image feature extraction model by minimizing the loss obtained by calculating the total loss function.
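For illustration, the modulation-based feature loss and the weighted total loss might be assembled roughly as sketched below; `feature_layers` standing for the five selected intermediate-layer activations, the L1 distance between activations, and the value of λ are all assumptions rather than values fixed by this description.

```python
import numpy as np

def modulation(re_map, im_map):
    """Modulation distribution B = sqrt(Re^2 + Im^2)."""
    return np.sqrt(re_map ** 2 + im_map ** 2)

def second_loss(b_pred, b_gt, feature_layers, layer_weights):
    """Feature-space loss between the modulation map computed from the network
    output and the one computed from the phase-shift ground truth, accumulated
    over five selected intermediate layers of the network."""
    return sum(w * np.abs(f(b_pred) - f(b_gt)).sum()
               for f, w in zip(feature_layers, layer_weights))

def total_loss(l_1, l_feat, lam=0.1):
    """Total loss = L_1 + lambda * L_feat, with lambda an assumed weight."""
    return l_1 + lam * l_feat
```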
Referring to fig. 5, fig. 5 is a schematic diagram of a convolutional neural network structure of an image feature extraction model in a three-dimensional imaging method of a dynamic object according to an embodiment of the present disclosure. As shown in fig. 5, a preset convolutional neural network model is provided with four layers of U-Net convolutional layer structures, the feature processing of each layer of U-Net convolutional layer structure is realized by using a residual dense network block (RDB), and features of different scales in a fringe pattern are extracted through different U-Net network structure layers.
In some embodiments of the present application, the residual dense network block includes 5 consecutive convolutional layers, and a corresponding expansion rate (dilation rate) may be set for each convolutional layer in the residual dense network block. Expansion (dilated) convolution is performed in each convolutional layer of the residual dense network block at its expansion rate, so as to enlarge the receptive field of each U-Net convolutional layer during feature processing and make the features extracted by each U-Net convolutional layer more accurate and effective. Because the convolution kernels of expansion convolutions are discontinuous, the expansion rates of consecutively stacked expansion convolutions should not have a common divisor greater than 1; to avoid the gridding effect, the expansion rates of the convolutional layers in the residual dense network block are set to 1, 2, 3, 2 and 1 in sequence.
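A schematic sketch of such a residual dense block with the 1, 2, 3, 2, 1 dilation pattern is given below; PyTorch is used only as an example framework, and the channel counts, activation choice, and 1×1 fusion convolution are assumptions not specified in this description.

```python
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    """Residual dense block with five dilated 3x3 convolutions (rates 1, 2, 3, 2, 1),
    dense concatenation of intermediate features, 1x1 local feature fusion, and a
    local residual connection."""

    def __init__(self, channels: int = 64, growth: int = 32):
        super().__init__()
        dilations = [1, 2, 3, 2, 1]
        self.convs = nn.ModuleList()
        in_ch = channels
        for d in dilations:
            self.convs.append(nn.Sequential(
                nn.Conv2d(in_ch, growth, kernel_size=3, padding=d, dilation=d),
                nn.ReLU(inplace=True)))
            in_ch += growth  # dense connections: each layer sees all previous features
        self.fuse = nn.Conv2d(in_ch, channels, kernel_size=1)  # local feature fusion

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))
        return x + self.fuse(torch.cat(feats, dim=1))  # local residual learning
```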
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
In some embodiments of the present application, please refer to fig. 6, and fig. 6 is a block diagram of an infrastructure of a three-dimensional imaging apparatus for a dynamic object according to an embodiment of the present application. The apparatus in this embodiment comprises means for performing the steps of the method embodiments described above. The following description refers to the embodiments of the method. For convenience of explanation, only the portions related to the present embodiment are shown. As shown in fig. 6, the three-dimensional imaging apparatus of a dynamic object includes: an image capturing module 61, a feature extraction module 62, a phase map acquisition module 63, and a three-dimensional imaging module 64. Wherein: the image shooting module 61 is configured to perform fringe projection on the dynamic object according to two preset fringe frequencies and shoot a fringe image. The feature extraction module 62 is configured to input the fringe pattern into a preset image feature extraction model for feature extraction processing, so as to obtain a pixel real part distribution map and a pixel imaginary part distribution map of the dynamic object respectively. The phase diagram obtaining module 63 is configured to perform phase calculation on the pixels of the dynamic object according to the pixel real part distribution diagram and the pixel imaginary part distribution diagram to obtain a folded phase diagram of the dynamic object, and perform phase unfolding processing on the folded phase diagram to obtain an absolute phase diagram of the dynamic object. The three-dimensional imaging module 64 is configured to perform binocular matching and depth calculation on all pixel points of the dynamic object according to the absolute phase map, to obtain a three-dimensional coordinate value corresponding to each pixel point, and to construct a three-dimensional image of the dynamic object according to the three-dimensional coordinate value corresponding to each pixel point.
It should be understood that the above-mentioned three-dimensional imaging device for a dynamic object corresponds to the above-mentioned three-dimensional imaging method for a dynamic object one by one, and the details are not repeated here.
In some embodiments of the present application, please refer to fig. 7, and fig. 7 is a basic structural block diagram of an electronic device according to an embodiment of the present application. As shown in fig. 7, the electronic apparatus 7 of this embodiment includes: a processor 71, a memory 72 and a computer program 73, for example a program of a method for three-dimensional imaging of a dynamic object, stored in said memory 72 and executable on said processor 71. The processor 71, when executing said computer program 73, carries out the steps in the various embodiments of the method for three-dimensional imaging of a dynamic object described above. Alternatively, the processor 71 implements the functions of the modules in the embodiment corresponding to the three-dimensional imaging device for a dynamic object when executing the computer program 73. Please refer to the description related to the embodiment, which is not repeated herein.
Illustratively, the computer program 73 may be divided into one or more modules (units) that are stored in the memory 72 and executed by the processor 71 to accomplish the present application. The one or more modules may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 73 in the electronic device 7. For example, the computer program 73 may be divided into an image capturing module, a feature extraction module, a phase map acquisition module, and a three-dimensional imaging module, each of which functions specifically as described above.
The electronic device may include, but is not limited to, a processor 71, a memory 72. It will be appreciated by those skilled in the art that fig. 7 is merely an example of the electronic device 7, and does not constitute a limitation of the electronic device 7, and may include more or less components than those shown, or combine certain components, or different components, for example, the electronic device may also include input output devices, network access devices, buses, etc.
The Processor 71 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), an off-the-shelf Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 72 may be an internal storage unit of the electronic device 7, such as a hard disk or a memory of the electronic device 7. The memory 72 may also be an external storage device of the electronic device 7, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the electronic device 7. Further, the memory 72 may also include both an internal storage unit and an external storage device of the electronic device 7. The memory 72 is used for storing the computer program and other programs and data required by the electronic device. The memory 72 may also be used to temporarily store data that has been output or is to be output.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the above-mentioned method embodiments. In this embodiment, the computer-readable storage medium may be nonvolatile or volatile.
The embodiments of the present application provide a computer program product, which when running on a mobile terminal, enables the mobile terminal to implement the steps in the above method embodiments when executed.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of functional units and modules is illustrated; in practical applications, the above functions may be allocated to different functional units and modules as needed, that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit, and the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for convenience of distinguishing them from each other and are not used to limit the protection scope of the present application. For the specific working processes of the units and modules in the system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not described again here.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow in the methods of the embodiments described above may be realized by a computer program, which may be stored in a computer-readable storage medium and which, when executed by a processor, realizes the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in each jurisdiction; for example, in some jurisdictions, in accordance with legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunications signals.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A method of three-dimensional imaging of a dynamic object, comprising:
performing fringe projection on the dynamic object according to two preset fringe frequencies and shooting a fringe pattern;
inputting the fringe pattern into a preset image feature extraction model for feature extraction processing, and respectively obtaining a pixel real part distribution map and a pixel imaginary part distribution map of the dynamic object;
according to the pixel real part distribution map and the pixel imaginary part distribution map, performing phase calculation on the pixels of the dynamic object to obtain a folded phase map of the dynamic object, and performing phase unfolding processing on the folded phase map to obtain an absolute phase map of the dynamic object;
and performing binocular matching and depth calculation on all pixel points of the dynamic object according to the absolute phase map to obtain three-dimensional coordinate values corresponding to the pixel points, and constructing a three-dimensional image of the dynamic object according to the three-dimensional coordinate values corresponding to the pixel points.
2. The method as claimed in claim 1, wherein before the step of inputting the fringe pattern into a preset image feature extraction model for feature extraction processing to obtain a pixel real part distribution map and a pixel imaginary part distribution map of the dynamic object, respectively, the method further comprises:
and determining a denoising threshold value of the fringe image according to the background intensity distribution in the fringe image, and denoising the fringe image according to the denoising threshold value.
3. The method as claimed in claim 1, wherein before the step of inputting the fringe pattern into a preset image feature extraction model for feature extraction processing to obtain a pixel real part distribution map and a pixel imaginary part distribution map of the dynamic object, respectively, the method further comprises:
collecting object fringe pattern samples under a plurality of different frequencies by adopting a pre-established sine fringe projection measuring system;
calculating real part data values and imaginary part data values of all pixel points representing the object in the object fringe pattern samples by adopting a phase shift method;
and taking the object fringe pattern samples as model input and taking the real part data values and imaginary part data values of all pixel points representing the object in the object fringe pattern samples as model output, training a preset convolutional neural network model to a convergence state, thereby generating an image feature extraction model.
4. The method according to claim 3, wherein the step of training the preset convolutional neural network model to a convergence state by taking the object fringe pattern samples as model input and taking real part data values and imaginary part data values of all pixel points representing the object in the object fringe pattern samples as model output to generate the image feature extraction model comprises:
randomly distributing all collected object fringe pattern samples according to a preset distribution proportion to obtain a training sample set and a verification sample set;
for a first object fringe pattern sample distributed to the training sample set, training the preset convolutional neural network model by taking the first object fringe pattern sample as model input and taking real part data values and imaginary part data values of all pixel points representing an object in the first object fringe pattern sample as model output, and obtaining the trained convolutional neural network model;
for a second object fringe pattern sample distributed to the verification sample set, inputting the second object fringe pattern sample into the trained convolutional neural network model for feature extraction processing, and acquiring first real part data values and first imaginary part data values of all pixel points representing the object output by the trained convolutional neural network model;
comparing the similarity between the first real part data values and first imaginary part data values of all pixel points representing the object output by the trained convolutional neural network model and the real part data values and imaginary part data values of all pixel points representing the object calculated from the second object fringe pattern sample by the phase shift method, so as to obtain a similarity value;
and comparing the similarity value with the similarity value of the previous iteration of training; if the increase in similarity is smaller than a preset threshold value, judging that the trained convolutional neural network model has been trained to a convergence state, and generating the trained convolutional neural network model as an image feature extraction model.
5. The method according to claim 4, wherein the step of comparing the similarity value with a similarity value of a previous iteration training, determining that the trained convolutional neural network model has been trained to a convergence state if the increase of the similarity is smaller than a preset threshold, and generating the trained convolutional neural network model as an image feature extraction model further includes:
constructing a first loss function of the image feature extraction model according to real part data values and imaginary part data values of all pixel points representing the object output by the image feature extraction model and real part data values and imaginary part data values of all pixel points representing the object, which are obtained by calculating the object fringe pattern through a phase shift method;
obtaining the modulation distribution of the object fringe pattern sample, and constructing a second loss function of the image feature extraction model;
and performing weighted integration on the first loss function and the second loss function to generate a total loss function of the image feature extraction model, and performing model optimization training on the image feature extraction model by using the total loss function.
6. The method according to claim 3, wherein in the step of training a preset convolutional neural network model to a convergence state to generate the image feature extraction model, the preset convolutional neural network model comprises four U-Net convolutional layers, wherein each U-Net convolutional layer is configured with a residual dense network block, and the residual dense network block is used for implementing the feature extraction processing operation.
7. The method of claim 6, wherein the residual dense network block comprises a plurality of convolutional layers, wherein each convolutional layer has a corresponding expansion ratio.
8. An apparatus for three-dimensional imaging of a dynamic object, comprising:
the image shooting module is used for performing fringe projection on the dynamic object according to the preset fringe frequency and shooting a fringe pattern;
the characteristic extraction module is used for inputting the fringe pattern into a preset image characteristic extraction model for characteristic extraction processing to respectively obtain a pixel real part distribution map and a pixel imaginary part distribution map of the dynamic object;
the phase image acquisition module is used for performing phase calculation on the pixel points of the dynamic object according to the pixel point real part distribution map and the pixel point imaginary part distribution map to obtain a folded phase image of the dynamic object, and performing phase unfolding processing on the folded phase image to obtain an absolute phase image of the dynamic object;
and the three-dimensional imaging module is used for performing binocular matching and depth calculation on all pixel points of the dynamic object according to the absolute phase map to obtain three-dimensional coordinate values corresponding to the pixel points, so as to construct a three-dimensional image of the dynamic object according to the three-dimensional coordinate values corresponding to the pixel points.
9. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the steps of the method according to any of claims 1 to 7 are implemented when the computer program is executed by the processor.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of a method according to any one of claims 1 to 7.
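To make the phase calculation and phase unwrapping steps of claim 1 concrete, the following sketch computes the folded (wrapped) phase from the pixel real part and imaginary part maps and then unwraps it. The heterodyne-style dual-frequency unwrapping rule, the assumption that the low-frequency phase is already unambiguous over the field of view, and all function and variable names are illustrative choices, not details fixed by the claims.

```python
import numpy as np

def folded_phase(real_map, imag_map):
    # Folded (wrapped) phase map: per-pixel arctangent of imaginary over real part.
    return np.arctan2(imag_map, real_map)

def unwrap_dual_frequency(phi_high, phi_low, f_high, f_low):
    # Assumed temporal (heterodyne-style) unwrapping with two fringe frequencies.
    # phi_high: wrapped phase of the high-frequency fringes (radians).
    # phi_low:  phase of the low-frequency fringes, assumed unambiguous over the scene.
    ratio = f_high / f_low
    fringe_order = np.round((ratio * phi_low - phi_high) / (2 * np.pi))
    return phi_high + 2 * np.pi * fringe_order  # absolute phase map
```

With left and right absolute phase maps, binocular matching can then search each row of the other view for the pixel whose absolute phase matches, and depth follows from the calibrated triangulation geometry; those steps are not sketched here.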
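For the phase shift method that claim 3 uses to produce ground-truth labels, the sketch below applies the textbook N-step phase-shifting sums to obtain per-pixel real part and imaginary part maps; the exact sign and shift conventions used in the application are not specified in the claims, so this is only one common formulation.

```python
import numpy as np

def phase_shift_real_imag(images):
    # images: N phase-shifted fringe images of one scene, shifts assumed to be 2*pi*n/N.
    imgs = np.asarray(images, dtype=np.float64)   # shape (N, H, W)
    n = imgs.shape[0]
    deltas = 2 * np.pi * np.arange(n) / n
    real_map = np.tensordot(np.cos(deltas), imgs, axes=(0, 0))  # cosine-weighted sum
    imag_map = np.tensordot(np.sin(deltas), imgs, axes=(0, 0))  # sine-weighted sum
    return real_map, imag_map
```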
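Claim 5 weights and combines two loss functions: one comparing the predicted real and imaginary part maps against the phase-shift ground truth, and one built from the modulation distribution of the fringe pattern sample. A minimal PyTorch sketch of one plausible reading is given below; the mean-squared-error form, the use of modulation as a per-pixel weight, and the weights w1 and w2 are assumptions.

```python
import torch

def total_loss(pred_real, pred_imag, gt_real, gt_imag, modulation, w1=1.0, w2=0.1):
    # First loss: predicted real/imaginary parts vs. phase-shift ground truth.
    loss1 = torch.mean((pred_real - gt_real) ** 2) + torch.mean((pred_imag - gt_imag) ** 2)
    # Second loss: the same error re-weighted by the fringe modulation, so that
    # well-modulated (reliable) pixels dominate the optimisation.
    weight = modulation / (modulation.mean() + 1e-8)
    loss2 = torch.mean(weight * ((pred_real - gt_real) ** 2 + (pred_imag - gt_imag) ** 2))
    # Weighted integration into the total loss of the image feature extraction model.
    return w1 * loss1 + w2 * loss2
```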
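Claims 6 and 7 recite a U-Net with four convolutional layers, each configured with a residual dense network block whose convolutional layers each have an expansion ratio; reading the expansion ratio as a dilation rate, a minimal PyTorch sketch of such a block is shown below. The channel counts, growth rate, and dilation values are invented for illustration.

```python
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    """Residual dense block whose inner 3x3 convolutions use increasing dilation rates."""
    def __init__(self, channels=64, growth=32, dilations=(1, 2, 4)):
        super().__init__()
        self.layers = nn.ModuleList()
        in_ch = channels
        for d in dilations:
            self.layers.append(nn.Sequential(
                nn.Conv2d(in_ch, growth, kernel_size=3, padding=d, dilation=d),
                nn.ReLU(inplace=True)))
            in_ch += growth  # dense connectivity: each layer sees all earlier features
        self.fuse = nn.Conv2d(in_ch, channels, kernel_size=1)  # local feature fusion

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return x + self.fuse(torch.cat(feats, dim=1))  # local residual connection
```

In a four-level U-Net, one such block could replace the usual double-convolution stage at each encoder and decoder level.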
CN202210270991.XA 2022-03-18 2022-03-18 Three-dimensional imaging method, device and equipment for dynamic object and storage medium Pending CN114820921A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210270991.XA CN114820921A (en) 2022-03-18 2022-03-18 Three-dimensional imaging method, device and equipment for dynamic object and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210270991.XA CN114820921A (en) 2022-03-18 2022-03-18 Three-dimensional imaging method, device and equipment for dynamic object and storage medium

Publications (1)

Publication Number Publication Date
CN114820921A true CN114820921A (en) 2022-07-29

Family

ID=82531898

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210270991.XA Pending CN114820921A (en) 2022-03-18 2022-03-18 Three-dimensional imaging method, device and equipment for dynamic object and storage medium

Country Status (1)

Country Link
CN (1) CN114820921A (en)

Similar Documents

Publication Publication Date Title
Jeon et al. Accurate depth map estimation from a lenslet light field camera
US10584963B2 (en) System and methods for shape measurement using dual frequency fringe pattern
KR101554241B1 (en) A method for depth map quality enhancement of defective pixel depth data values in a three-dimensional image
US20120176478A1 (en) Forming range maps using periodic illumination patterns
CN110702034A (en) High-light-reflection surface three-dimensional surface shape measuring method, server and system
CN109945802B (en) Structured light three-dimensional measurement method
CN111080662A (en) Lane line extraction method and device and computer equipment
CN107346041B (en) Method and device for determining grating parameters of naked eye 3D display equipment and electronic equipment
JP6598673B2 (en) Data processing apparatus and method
CN108648222B (en) Method and device for improving spatial resolution of structured light depth data
KR102516495B1 (en) Methods and apparatus for improved 3-d data reconstruction from stereo-temporal image sequences
US11512946B2 (en) Method and system for automatic focusing for high-resolution structured light 3D imaging
CN109661815A (en) There are the robust disparity estimations in the case where the significant Strength Changes of camera array
CN115205360A (en) Three-dimensional outer contour online measurement and defect detection method of composite stripe projection steel pipe and application
CN114526692A (en) Structured light three-dimensional measurement method and device based on defocusing unwrapping
CN114627244A (en) Three-dimensional reconstruction method and device, electronic equipment and computer readable medium
CN112802084B (en) Three-dimensional morphology measurement method, system and storage medium based on deep learning
CN116977671A (en) Target tracking method, device, equipment and storage medium based on image space positioning
CN110378948B (en) 3D model reconstruction method and device and electronic equipment
CN109859313B (en) 3D point cloud data acquisition method and device, and 3D data generation method and system
CN114820921A (en) Three-dimensional imaging method, device and equipment for dynamic object and storage medium
CN115908705A (en) Three-dimensional imaging method and device based on special codes
CN115345993A (en) Three-dimensional reconstruction method, device and system
CN110853087B (en) Parallax estimation method, device, storage medium and terminal
CN114897959A (en) Phase unwrapping method based on light field multi-view constraint and related components

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination