CN111145344A - Structured light measuring method for snow carving 3D reconstruction - Google Patents

Structured light measuring method for snow carving 3D reconstruction

Info

Publication number
CN111145344A
CN111145344A (application CN201911405700.8A; granted as CN111145344B)
Authority
CN
China
Prior art keywords
structured light
image
reconstruction
probability
constructing
Prior art date
Legal status: Granted
Application number
CN201911405700.8A
Other languages
Chinese (zh)
Other versions
CN111145344B (en)
Inventor
刘万村
张晓琳
唐文彦
张立国
Current Assignee
Harbin Institute of Technology
Harbin Engineering University
Harbin Vocational and Technical College
Original Assignee
Harbin Institute of Technology
Harbin Engineering University
Harbin Vocational and Technical College
Priority date
Filing date
Publication date
Application filed by Harbin Institute of Technology, Harbin Engineering University and Harbin Vocational and Technical College
Priority to CN201911405700.8A
Publication of CN111145344A
Application granted
Publication of CN111145344B
Legal status: Active

Classifications

    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G01B11/24: Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/2433: Measuring outlines by shadow casting
    • G06T15/06: 3D image rendering; ray-tracing
    • G06T7/11: Image analysis; region-based segmentation
    • G06T7/13: Image analysis; edge detection
    • G06T7/136: Image analysis; segmentation involving thresholding
    • G06T7/521: Depth or shape recovery from laser ranging or from the projection of structured light
    • G06T2207/10016: Image acquisition modality; video, image sequence
    • Y02A90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation


Abstract

A structured light measuring method for snow carving 3D reconstruction relates to the technical field of structured light measurement, and aims to solve the problem that, in the prior art, 3D reconstruction measurement of snow carvings has low accuracy owing to the operation scene, strong environmental noise and the special optical characteristics of the snow surface. The whole processing pipeline of the invention integrates the key problems in the structured light image into a spatio-temporal tracking framework, and the measurement results balance robustness, accuracy and real-time performance.

Description

Structured light measuring method for snow carving 3D reconstruction
Technical Field
The invention relates to the technical field of structured light measurement, in particular to a structured light measurement method for snow carving 3D reconstruction.
Background
Snow carving is a special ice-and-snow sculpture art form in north China. In particular, a "snow carving artwork" created on site by an artist according to the shape and texture of a snow blank, combining the author's current thought and inspiration with the surrounding scenery, has extremely high artistic and economic value. The digital construction of snow carving artworks is an important means of breaking through the barrier of time and space and of propagating and developing ice-and-snow art, and it is an indispensable part of the digital archiving and exhibition of an ice-and-snow museum.
Existing digital archiving of ice and snow sculptures mainly uses cameras to shoot videos and photos. Although such two-dimensional images record much of the original information of a sculpture, from the viewpoint of art archiving and the vivid reproduction of artworks, two-dimensional planar data can hardly meet the requirements of archiving and studying the works in full detail. Recording the three-dimensional information of each sculpture in detail is impossible with a video camera alone. Designers and related institutions need to store snow carving artworks digitally on a computer, so that the original appearance of a sculpture can be reproduced through technologies such as 3D film and 3D printing.
However, a snow carving is directly exposed to the outdoor environment, where environmental noise such as sunlight, shadow and reflection is ubiquitous; the carving is large, so scanning cannot be completed in one pass; and its surface is highly reflective and mostly of a single color. Compared with traditional 3D measurement, the greatest difficulties of snow carving 3D reconstruction are therefore the complex operation scene, strong environmental noise and special surface optical characteristics.
Disclosure of Invention
The purpose of the invention is as follows: aiming at the problem in the prior art of low measurement accuracy caused by the operation scene, strong environmental noise and special surface optical characteristics during snow carving 3D reconstruction measurement, a structured light measurement method for snow carving 3D reconstruction is provided.
The technical scheme adopted by the invention to solve the technical problems is as follows:
a structured light measurement method for snow carving 3D reconstruction comprises the following steps:
step one: firstly, arranging a structured light sensor consisting of a camera and a line laser; then calibrating the structured light sensor with the Zhang Zhengyou calibration method; finally, collecting a video sequence mixed with noise;
step two: distinguishing the environmental noise in the video sequence by a maximum likelihood estimation method;
step three: constructing an RGB weight function to perform a color space transformation and generate a monochromatic value image;
step four: carrying out threshold segmentation on the monochromatic value image;
step five: performing edge detection on the monochromatic value image, quantizing the light bars into point pairs $([t, p_n(t)], [t, p_{n+1}(t)])$, and calculating the position of the luminance center of gravity by the center-of-gravity method as a potential light-bar position state $x_n(t)$, where $n$ denotes the state index at step $t$, $n \in [1, X(t)]$, and $X(t)$ is the total number of state quantities at step $t$; defining the distance between the point pair as the observation $y_n(t) = |p_n(t) - p_{n+1}(t)|$ to obtain the spatial context, where $[t, p_n(t)]$ are the coordinates of an edge pixel at step $t$;
step six: establishing inter-frame association using the spatial constraints of the spatial context; then constructing the parameters: initial probability distribution, state transition probability distribution and observation probability distribution; finally constructing an HMM on the spatial axis, namely the S-HMM;
step seven: maximizing the probability of generating the observation sequence $Y = \{y(1), \ldots, y(N)\}$ from step five, and extracting the light-bar center line through Viterbi decoding;
step eight: establishing a comprehensive global-plus-regional search strategy, and defining the suspected light-bar regions as a series of sub-windows $w_n(\alpha, I, \Delta, s)$, where $\alpha$ is the window size, $I$ denotes the average light-bar luminance within the window, $\Delta$ is the rate of change relative to the previous frame's tracking result, and $s$ is a scale factor;
step nine: defining each window as a state quantity of the current frame, taking the characteristic parameters within the window as observations, and constructing the parameters: initial probability distribution, state transition probability distribution and observation probability distribution; then constructing an HMM on the time axis, namely the T-HMM, and obtaining the optimal light-bar tracking trajectory through Viterbi decoding;
step ten: inputting the video information collected by the structured light sensor into the constructed ST-HMM, solving the global optimal solution of the model, and outputting the result, namely a series of smooth scanning trajectories of light-bar center lines;
step eleven: converting the extracted and tracked light-bar trajectory through the image coordinate system, the sensor coordinate system and the world coordinate system to obtain point cloud data of the snow carving surface and form the 3D shape.
Further, the maximum likelihood estimation method specifically comprises: first, representing the various noises by parameter vectors $\theta_1, \theta_2, \ldots, \theta_c$ and constructing a sample set $D = \{D_1, D_2, \ldots, D_c\}$ from a number of calibrated noise images; then solving for the parameter vector at which the distribution density $p(D \mid \theta)$ reaches its maximum,
$\hat{\theta} = \arg\max_{\theta} p(D \mid \theta),$
which is the estimated noise.
Further, the monochromatic value image is a gray-scale-type image whose RGB weight coefficients $(w_r, w_g, w_b)$ are decided by the maximum likelihood noise classification result; the conversion relation between the original color image and the monochromatic value image is $F_{ij} = w_r R_{ij} + w_g G_{ij} + w_b B_{ij}$,
where $F$ is the monochromatic value image obtained after conversion, and $i$ and $j$ denote the row and column indices of a pixel, respectively.
Further, the weight function is defined using the concept of kurtosis:
$K = \dfrac{\kappa_4}{\kappa_2^2} = \dfrac{\mu_4}{\sigma^4} - 3,$
where $\kappa_4$ is the fourth-order cumulant of the luminance distribution, $\kappa_2^2$ is the square of the second-order cumulant $\kappa_2$, and $\mu_4$ and $\sigma^2$ are the fourth-order central moment of the image luminance and the variance of its probability distribution, respectively.
Further, the weight function is solved with a minimum entropy model, by letting
$\dfrac{\partial K}{\partial w_r} = \dfrac{\partial K}{\partial w_g} = \dfrac{\partial K}{\partial w_b} = 0$
to solve for the maximum of $K$.
Further, the specific steps of constructing the parameters in step six are as follows:
(1) the probability of generating the observation sequence $Y$ is defined as
$P(Y) = \pi(x(1))\, b(y(1)) \prod_{t=2}^{N} a_{x(t-1)x(t)}\, b(y(t));$
(2) the initial probability distribution is defined as an equiprobable distribution, i.e.
$\pi_n(1) = 1/X(1);$
(3) the observation probability equation is
$b(y_n(t)) = \begin{cases} 1, & 0 < y_n(t) \le T_e \\ 0, & \text{otherwise,} \end{cases}$
where $T_e$ is an empirically defined upper width threshold;
(4) the state transition probability is
$a_{ij} = w_1 \exp\!\left(-d_{ij}^2/\sigma_d^2\right) + w_2 \exp\!\left(-k_{ij}^2/\sigma_k^2\right),$
where $d_{ij}$ is the Euclidean distance between states and $k_{ij}$ is the difference of the luminance means between states; $\sigma_d^2$ and $\sigma_k^2$ are the variances of $d_{ij}$ and $k_{ij}$, respectively; $w_1$ and $w_2$ are a pair of weights obtained by training data with maximum likelihood estimation or maximum a posteriori estimation, with $w_1 + w_2 = 1$.
Further, the specific steps of constructing the parameters in step nine are as follows:
(1) the initial probability distribution is defined as an equiprobable distribution, i.e.
$\pi_n(1) = 1/X(1);$
(2) the observation probability equation is
$b(y_n(t)) = \begin{cases} 1, & 0 < y_n(t) \le T_e \\ 0, & \text{otherwise,} \end{cases}$
where $T_e$ is an empirically defined upper width threshold;
(3) the state transition probability is
$a_{ij} = w_1 \exp\!\left(-\delta_\alpha^2/\sigma_\alpha^2\right) + w_2 \exp\!\left(-\delta_\beta^2/\sigma_\beta^2\right) + w_3 \exp\!\left(-\delta_I^2/\sigma_I^2\right) + w_4 \exp\!\left(-\delta_\Delta^2/\sigma_\Delta^2\right),$
where $w_{1,2,3,4}$ are the weights of the window sizes $\alpha$ and $\beta$, the luminance mean $I$ and the rate of change $\Delta$, respectively; $\delta_\alpha^2$, $\delta_\beta^2$, $\delta_I^2$ and $\delta_\Delta^2$ represent the observed differences between adjacent states, and $\sigma_\alpha^2$, $\sigma_\beta^2$, $\sigma_I^2$ and $\sigma_\Delta^2$ are the variances of those observations.
The invention has the beneficial effects that:
1. on the premise of not changing the structure of the structured light sensor, the anti-noise capability of the system is improved by digital image processing, the precision and real-time performance of the measurement results are guaranteed, the sensor can work in complex field environments, the application range of structured light sensors is expanded, and the measurement accuracy is improved;
2. the whole processing pipeline integrates the key problems in the structured light image (noise interference, light-bar deformation and the like) into a spatio-temporal tracking framework, and the measurement results balance robustness, accuracy and real-time performance;
3. the invention measures with a structured light sensor, which is convenient to operate and cheaper than a laser scanner, reducing the measurement cost.
Drawings
FIG. 1 is a schematic diagram of the overall process of the present invention;
FIG. 2a is an image of a snow carving scanned with structured light under sunlight;
FIG. 2b is a color histogram over R space;
FIG. 2c is a color histogram over G space;
FIG. 2d is a color histogram over B-space;
FIG. 3a is an original image acquired by a sensor;
FIG. 3b is a graph of red components collected by the sensor;
FIG. 3c is a grayscale image acquired by the sensor;
FIG. 3d is a monochromatic value image acquired by the sensor;
FIG. 4 is a diagram illustrating the state transition relationship of an S-HMM using any three rows of pixels as an example;
FIG. 5 is a diagram of a T-HMM model and light bar tracking;
fig. 6 is a 3D snow carving reconstruction result diagram.
Detailed Description
The first embodiment: this embodiment is described in detail with reference to fig. 1. The structured light measurement method for snow carving 3D reconstruction according to this embodiment comprises the following steps:
step one: firstly, arranging a structured light sensor consisting of a camera and a line laser; then calibrating the structured light sensor with the Zhang Zhengyou calibration method; finally, collecting a video sequence mixed with noise;
step two: distinguishing the environmental noise in the video sequence by a maximum likelihood estimation method;
step three: constructing an RGB weight function to perform a color space transformation and generate a monochromatic value image;
step four: carrying out threshold segmentation on the monochromatic value image;
step five: performing edge detection on the monochromatic value image, quantizing the light bars into point pairs $([t, p_n(t)], [t, p_{n+1}(t)])$, and calculating the position of the luminance center of gravity by the center-of-gravity method as a potential light-bar position state $x_n(t)$, where $n$ denotes the state index at step $t$, $n \in [1, X(t)]$, and $X(t)$ is the total number of state quantities at step $t$; defining the distance between the point pair as the observation $y_n(t) = |p_n(t) - p_{n+1}(t)|$ to obtain the spatial context, where $[t, p_n(t)]$ are the coordinates of an edge pixel at step $t$;
step six: establishing inter-frame association using the spatial constraints of the spatial context; then constructing the parameters: initial probability distribution, state transition probability distribution and observation probability distribution; finally constructing an HMM on the spatial axis, namely the S-HMM;
step seven: maximizing the probability of generating the observation sequence $Y = \{y(1), \ldots, y(N)\}$ from step five, and extracting the light-bar center line through Viterbi decoding;
step eight: establishing a comprehensive global-plus-regional search strategy, and defining the suspected light-bar regions as a series of sub-windows $w_n(\alpha, I, \Delta, s)$, where $\alpha$ is the window size, $I$ denotes the average light-bar luminance within the window, $\Delta$ is the rate of change relative to the previous frame's tracking result, and $s$ is a scale factor;
step nine: defining each window as a state quantity of the current frame, taking the characteristic parameters within the window as observations, and constructing the parameters: initial probability distribution, state transition probability distribution and observation probability distribution; then constructing an HMM on the time axis, namely the T-HMM, and obtaining the optimal light-bar tracking trajectory through Viterbi decoding;
step ten: inputting the video information collected by the structured light sensor into the constructed ST-HMM, solving the global optimal solution of the model, and outputting the result, namely a series of smooth scanning trajectories of light-bar center lines;
step eleven: converting the extracted and tracked light-bar trajectory through the image coordinate system, the sensor coordinate system and the world coordinate system to obtain point cloud data of the snow carving surface and form the 3D shape.
The measuring steps of the invention are as follows:
the first step is as follows: image acquisition
A fixed-pattern structured light measuring device with clear noise resistance is used; the device consists of a light source (a laser, a projector or the like) plus a camera (monocular or binocular). The sensor system is calibrated with the classical Zhang Zhengyou method, the 3D shape and surface structure information of the measured target is acquired, and a video sequence with noise is generated, as shown in fig. 1.
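To make the calibration step concrete, here is a minimal sketch of the Zhang-style calibration using OpenCV; the checkerboard geometry, square size and image folder are illustrative assumptions, not values given in the patent.
```python
import glob

import cv2
import numpy as np

# Zhang's method: view a planar checkerboard in several poses and solve
# for the camera intrinsics K and the lens distortion coefficients.
BOARD = (9, 6)      # inner corners per row/column (assumed board)
SQUARE = 0.025      # square edge length in metres (assumed)

# 3D board corners in the board's own plane (Z = 0).
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE

obj_pts, img_pts, size = [], [], None
for path in glob.glob("calib/*.png"):        # placeholder image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, size, None, None)
print("reprojection RMS:", rms)
print("intrinsic matrix:\n", K)
```
For a line-laser sensor, the laser plane would additionally be calibrated against the same board poses; the plane equation is reused in the three-dimensional reconstruction step.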
The second step: image preprocessing
1. Noise classification
The light-bar region in the structured light image is the image foreground, while other disturbances affecting light-bar extraction are considered noise. The noise sources of the complex field environment are mainly sunlight, with some shadow, the surface color of the snow sculpture, surface reflection, occasional colored illumination noise, and so on. The various noises are additive on the image, i.e. the noise and the image signal are mixed by superposition, and analysing such multi-source noise by constructing an optical path propagation model is very difficult.
Analysing the noise characteristics: illumination and surface-color noise are global in their spatial distribution, while shadow, surface texture and reflection are local noise. Sunlight and shadow noise typically have similar R, G, B components in their color distribution, whereas colored illumination noise and target surface-color noise typically have higher values in one color dimension. Reflection noise is related to the lighting conditions and the target surface characteristics, and appears as local high brightness in the spatial distribution. As can be seen from fig. 2, in sunlight the light bar is flooded by ambient noise; the luminance distributions and the histograms over the three color spaces are all very close, each histogram being unimodal. Conversely, under colored lighting or interference from the target's surface color, the similarity of the three histograms drops greatly; in the case of surface reflection, the similarity of the luminance spatial distribution is reduced.
Noise classification uses maximum likelihood estimation. The various noises are represented by parameter vectors $\theta_1, \theta_2, \ldots, \theta_c$, and a sample set $D = \{D_1, D_2, \ldots, D_c\}$ is constructed from a number of calibrated noise images. Solving for the value of $\theta$ that maximizes $p(D \mid \theta)$,
$\hat{\theta} = \arg\max_{\theta} p(D \mid \theta),$
yields the optimal noise class estimate, which gives a qualitative description of the ambient noise and guides the decision of the color-value space transformation.
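The patent leaves the form of $p(D \mid \theta)$ unspecified. As a minimal sketch, assume each noise class is a Gaussian over two illustrative features that echo the cues above (R/G/B histogram similarity and a local-highlight ratio); the feature choice and the Gaussian model are assumptions.
```python
import numpy as np


def noise_features(img):
    # Similarity of the R, G, B histograms (high for sunlight/shadow,
    # low for colored light or surface color) and the fraction of
    # near-saturated pixels (high for specular reflection).
    hists = [np.histogram(img[..., c], bins=32, range=(0, 255),
                          density=True)[0] for c in range(3)]
    hist_sim = -sum(np.abs(hists[i] - hists[j]).sum()
                    for i, j in ((0, 1), (0, 2), (1, 2)))
    highlight = float((img.max(axis=2) > 240).mean())
    return np.array([hist_sim, highlight])


def fit_class(samples):
    # Maximum-likelihood Gaussian fit: theta_c = (mean, covariance).
    X = np.stack([noise_features(s) for s in samples])
    return X.mean(axis=0), np.cov(X.T) + 1e-6 * np.eye(X.shape[1])


def classify(img, params):
    # argmax over classes of log p(D | theta_c).
    x = noise_features(img)

    def loglik(mu, cov):
        d = x - mu
        return -0.5 * (d @ np.linalg.solve(cov, d)
                       + np.log(np.linalg.det(cov)))

    return max(params, key=lambda name: loglik(*params[name]))
```
Here `params` would map class names ("sunlight", "colored light", "reflection", ...) to parameters fitted with `fit_class` from the calibrated sample sets $D_1, \ldots, D_c$.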
2. Generating a monochromatic value image
Color histograms and vector spaces are heavily used in image understanding and analysis, with considerable success, but the RGB vector space does not always reveal the information we need easily. Compared with nonlinear transformations, a linear transformation causes no large jumps, has no singular points or discontinuities, and takes less time. The gray-scale map is the most typical monochromatic value image generated by linear transformation; the conversion relation is given by equation (1), and the usual RGB weights (coefficients) of a gray-scale image are $(w_r, w_g, w_b) = (0.3, 0.59, 0.11)$:
$F_{ij} = w_r R_{ij} + w_g G_{ij} + w_b B_{ij} \quad (1)$
where $F$ is the monochromatic value image obtained after the conversion, and $i$ and $j$ denote the row and column indices of a pixel, respectively. The monochromatic value technique adopted by the invention obtains the monochromatic value image from the original color image by solving for the optimal RGB weight function (i.e. the best combination of $(w_r, w_g, w_b)$), rather than simply taking one color component or converting to a gray-scale image. The solved weight function should maximize the signal-to-noise ratio.
With a fixed-pattern structured light source, typified by a laser, the projected energy distribution of the light bar is relatively concentrated and presents a concise structure. We therefore establish a weight function that exploits these two characteristics as much as possible, so that after conversion the light bar in the image is distinguished from the background; the weight function should thus be a function that measures contrast. Pixels inside the light bar have high luminance, and in the luminance distribution the waveform of the light-bar region is steep with high kurtosis, so defining the weight function with the concept of kurtosis is a reasonable choice, as shown in equation (2):
$K = \dfrac{\kappa_4}{\kappa_2^2} = \dfrac{\mu_4}{\sigma^4} - 3 \quad (2)$
The kurtosis $K$ is defined here as the fourth-order cumulant $\kappa_4$ of the luminance distribution divided by the square of the second-order cumulant $\kappa_2$; $\mu_4$ and $\sigma^2$ are the fourth-order central moment of the image luminance and the variance of its probability distribution, respectively. As a special case, the kurtosis of a normal distribution is 0. $K$ characterizes the degree of energy concentration and of structuring of the image: the larger $K$ is, the more ordered the luminance distribution of the converted monochromatic value image, the higher the corresponding degree of structuring and energy concentration, and the larger the gradient of the light-intensity change; the sharper the light-bar region as a whole, the smaller its entropy. When $K$ reaches its maximum, only a few elements of the pixel-value matrix have large values while the others approach zero, so a minimum entropy model can be used to solve for the optimal weight function. To obtain the maximum of $K$, one only needs to let
$\dfrac{\partial K}{\partial w_r} = \dfrac{\partial K}{\partial w_g} = \dfrac{\partial K}{\partial w_b} = 0 \quad (3)$
Obviously, solving equation (3) analytically is tedious; uniformly sampling the weights (the three coefficients cannot all be 0 at the same time) and then searching exhaustively is a simpler method. Since sunlight noise has similar RGB components, taking $(w_r, w_g, w_b) = (1, -1, 0)$ gives the result shown in fig. 3d. Compared with the red-component image and the gray-scale image, the monochromatic value image clearly has a higher signal-to-noise ratio, and after threshold segmentation the light-bar region can be segmented out completely.
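A sketch of the conversion of equation (1) together with the uniform-sampling exhaustive search suggested above in place of solving equation (3); the grid step and weight range are assumptions.
```python
from itertools import product

import numpy as np


def mono_image(img, w):
    # Equation (1): F_ij = wr*R_ij + wg*G_ij + wb*B_ij.
    return (img[..., 0] * w[0] + img[..., 1] * w[1]
            + img[..., 2] * w[2]).astype(np.float64)


def kurtosis(F):
    # Equation (2): K = mu4 / sigma^4 - 3; zero for a normal distribution.
    x = F.ravel() - F.mean()
    var = x.var()
    return (x ** 4).mean() / (var ** 2 + 1e-12) - 3.0


def best_weights(img, step=0.25):
    # Uniform sampling of the weight cube plus exhaustive search;
    # the three coefficients must not all be zero at once.
    grid = np.arange(-1.0, 1.0 + step, step)
    best, best_k = None, -np.inf
    for w in product(grid, repeat=3):
        if all(abs(c) < 1e-9 for c in w):
            continue
        k = kurtosis(mono_image(img, w))
        if k > best_k:
            best, best_k = w, k
    return best, best_k
```
Under sunlight, where the R, G and B components of the noise are similar, such a search tends to land on weights like (1, -1, 0), consistent with the choice above.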
The third step: establishing spatial constraints for light bar extraction
The Markov model is a time-dependent state transition model; concretely, the state of an event x(t) occurring at time t depends directly on the event x(t-1) at time t-1. Each state of the model is determinate, but in many cases the state quantity cannot be obtained directly and can only be judged indirectly through some observations, so the state quantities are regarded as a series of hidden states. The HMM is a statistical model that can indirectly estimate the state quantities from the observations.
As local information, there must be specific contextual associations between the light bar itself and the background of its vicinity. These associations include two basic spatial constraints, continuity and uniqueness: 1. the light bar, as a curve (or a cluster of curves), should be continuous and smooth on the image, and jumps or discontinuities indicate occlusion; 2. the light bar cannot appear at multiple locations in the same frame (parallel or grid light bars are treated as a whole). On this basis, each column (or row) of pixels in the image can be regarded as a time node, and the state transition process of the light bar as an N-step, left-to-right state transition process. First, the light-bar features are quantized on the basis of threshold segmentation; second, the observation probability and transition probability equations of the S-HMM are defined and the model parameters are updated in real time; finally, the optimal distribution path of the light bar is obtained through Viterbi decoding.
1. Feature selection
Edge detection is performed on the monochromatic value image, and adjacent edge pixels in each column (row) are combined pairwise into point pairs, so that the light bar appears as a series of point pairs. The distances between these point pairs are relatively small and their positions relatively concentrated; moreover, the luminance between the positions of a point pair is very similar on the corresponding monochromatic value map. Based on these observable features, the center-of-gravity (centroid) method can be used to compute the luminance center of gravity between each point pair, refining the light bar into a series of skeleton lines. Of course, under noise interference there cannot be just one center-of-gravity point per column (row), so the skeleton lines of the light bar are necessarily accompanied by individual noise points.
All center-of-gravity positions obtained on each column are taken as the state set of potential light-bar positions, with each center-of-gravity point as a state $x_n(t)$, where $n$ denotes the state index at step $t$, $n \in [1, X(t)]$, and $X(t)$ is the total number of state quantities at step $t$. The number of centers of gravity cannot be exactly the same on every column, so the whole state array $x$ is a non-homogeneous matrix. It is assumed that these states can be observed through an observation sequence $Y = \{y(1), \ldots, y(N)\}$. The coordinates of an edge pixel at step $t$ are $[t, p_n(t)]$. The distance between the point pair $([t, p_n(t)], [t, p_{n+1}(t)])$ is taken as the observation $y_n(t)$ of the S-HMM:
$y_n(t) = |p_n(t) - p_{n+1}(t)| \quad (4)$
The luminance mean $m_n(t)$ over all pixels between the point pair $([t, p_n(t)], [t, p_{n+1}(t)])$ will be used to compute the state transition probability.
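A sketch of this per-column quantization: consecutive edge pixels are paired, and for each pair the luminance center of gravity (a candidate state $x_n(t)$), the pair distance $y_n(t)$ and the luminance mean $m_n(t)$ are computed. The edge map (e.g. from a Canny detector) is assumed to be a binary array of the same shape as the monochromatic value image F.
```python
import numpy as np


def column_states(F, edges, t):
    # Rows of edge pixels p_n(t) in column t.
    rows = np.flatnonzero(edges[:, t])
    states, obs, means = [], [], []
    # Pair consecutive edge pixels; each pair is a candidate light bar.
    for p0, p1 in zip(rows[:-1], rows[1:]):
        seg = F[p0:p1 + 1, t].astype(np.float64)
        total = seg.sum()
        if total <= 0:
            continue
        # Center of gravity of luminance between the pair: state x_n(t).
        centroid = float((np.arange(p0, p1 + 1) * seg).sum() / total)
        states.append(centroid)
        obs.append(float(p1 - p0))       # observation y_n(t) = |p_n - p_{n+1}|
        means.append(float(seg.mean()))  # m_n(t), used in equation (8)
    return states, obs, means
```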
2. S-HMM parameter construction and updating
The purpose of building the S-HMM is to select a state sequence $X = \{x(1), \ldots, x(N)\}$ as the optimal path along which the light bar extends; the probability that this path generates the observation sequence $Y$ is defined as
$P(Y) = \pi(x(1))\, b(y(1)) \prod_{t=2}^{N} a_{x(t-1)x(t)}\, b(y(t)) \quad (5)$
So as not to neglect any state at the initial step, the initial probability distribution of the S-HMM is defined as an equiprobable distribution, i.e.
$\pi_n(1) = 1/X(1) \quad (6)$
The observations of the S-HMM have only two cases, light bar and non-light bar, so the observation probability can be designed in the form of a Bernoulli (0-1) distribution. The observation probability equation is
$b(y_n(t)) = \begin{cases} 1, & 0 < y_n(t) \le T_e \\ 0, & \text{otherwise} \end{cases} \quad (7)$
Here $T_e$ is an empirically defined upper width threshold. Under strong light interference the light bar appears very "thin" in the image, which strongly affects threshold segmentation and easily causes a too-"thin" bar to be segmented away. Therefore, to avoid missing any possible state quantity, no lower threshold is set in equation (7), although in an ideal environment adding a lower threshold would somewhat increase the running speed of the system.
In fig. 4, open boxes represent edge points and solid boxes represent center-of-gravity points (the state quantities of the S-HMM). $P_{ij}$ denotes the transition probability from the $i$-th state at step $t-1$ to the $j$-th state at step $t$.
The state transition relationship of the S-HMM is shown in fig. 4; its transition probability is based on the spatial position information between the states of adjacent steps:
$a_{ij} = w_1 \exp\!\left(-d_{ij}^2/\sigma_d^2\right) + w_2 \exp\!\left(-k_{ij}^2/\sigma_k^2\right) \quad (8)$
where $d_{ij}$ is the Euclidean distance between states and $k_{ij}$ is the difference of the luminance means between states; $\sigma_d^2$ and $\sigma_k^2$ are the variances of $d_{ij}$ and $k_{ij}$, respectively; $w_1$ and $w_2$ are a pair of weights balancing the importance of the spatial distance and the luminance difference between states, obtainable by training data with maximum likelihood estimation or maximum a posteriori estimation, with $w_1 + w_2 = 1$.
From equations (5) to (8) it can be seen that the parameters of the S-HMM on each frame image, the initial probability distribution $\pi$, the state transition matrix $A = [a_{ij}]$ and the state observation probability matrix $B = [b_r]$, are adaptively adjusted according to the illumination, color and position information of the light bar and its environment. The stripe center-line extraction problem is solved by maximizing equation (5), and the Viterbi algorithm can be used to solve the decoding problem of this S-HMM.
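A sketch of the decoding over this non-homogeneous lattice. It uses the Bernoulli observation of equation (7), the equiprobable start of equation (6) and the Gaussian-kernel reading of equation (8); the threshold, variances and weights are placeholder values, every column is assumed to contribute at least one candidate state, and the inputs are the per-column lists produced by the feature-selection sketch above.
```python
import numpy as np


def viterbi_centerline(states, obs, means, T_e=12.0,
                       sig_d=25.0, sig_k=400.0, w1=0.6, w2=0.4):
    n_cols = len(states)
    # Equation (7): Bernoulli observation (a tiny epsilon instead of 0
    # keeps the log-probabilities finite).
    b = [np.array([1.0 if 0.0 < y <= T_e else 1e-9 for y in obs[t]])
         for t in range(n_cols)]
    # Equation (6): equiprobable start.
    logp = np.log(b[0]) - np.log(len(states[0]))
    back = []
    for t in range(1, n_cols):
        # Equation (8): kernels over position and luminance differences.
        d2 = np.subtract.outer(states[t - 1], states[t]) ** 2
        k2 = np.subtract.outer(means[t - 1], means[t]) ** 2
        a = w1 * np.exp(-d2 / sig_d) + w2 * np.exp(-k2 / sig_k)
        scores = logp[:, None] + np.log(a + 1e-12)
        back.append(scores.argmax(axis=0))   # best predecessor per state
        logp = scores.max(axis=0) + np.log(b[t])
    # Backtrack the optimal path: one center-line row per column.
    path = [int(logp.argmax())]
    for ptr in reversed(back):
        path.append(int(ptr[path[-1]]))
    path.reverse()
    return [states[t][n] for t, n in enumerate(path)]
```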
The fourth step: establishing time constraints for light bar tracking
Similar to the spatial context correlation, there is also a strong temporal relationship between the light bars and their background in a video frame sequence. This is highly valuable for handling light-bar occlusion caused by the concave-convex snow carving surface, scale changes caused by motion of the structured light sensor, and light-bar extraction errors caused by reflections in individual areas. Traditional target tracking holds that the local context of the current frame helps predict the light-bar position in the next frame: the shape and color of the target do not change much between adjacent frames, the rate of change is relatively stable, and the target position does not change abruptly. The structured light image also has this temporal context, but the snow carving surface itself is highly reflective, which easily causes abrupt changes in the position and shape of the light bar; a tracking method restricted to a search area therefore increases computation speed but degrades the tracking result, while a target search over the whole image space lowers efficiency but guarantees tracking accuracy.
The method therefore integrates regional search and global search, defining the suspected light-bar regions (consisting of the optimal solution and several suboptimal solutions of the S-HMM decoding result) as a series of sub-windows $w_n(\alpha, I, \Delta, s)$, where $\alpha$ is the window size, $I$ denotes the mean light-bar luminance within the window, $\Delta$ is the rate of change relative to the previous frame's tracking result, and $s$ is a scale factor. Each window is defined as a state quantity of the current frame, i.e. a hypothesized light-bar position, and the characteristic parameters within the window are used as observations; a group of HMMs on the time axis, the T-HMM, is thereby constructed, as shown in fig. 5.
On the T-HMM, the initial probability follows the probability distribution of equation (6); the observation probability still adopts a Bernoulli distribution for ease of calculation; and the transition probability is described by equation (9):
$a_{ij} = w_1 \exp\!\left(-\delta_\alpha^2/\sigma_\alpha^2\right) + w_2 \exp\!\left(-\delta_\beta^2/\sigma_\beta^2\right) + w_3 \exp\!\left(-\delta_I^2/\sigma_I^2\right) + w_4 \exp\!\left(-\delta_\Delta^2/\sigma_\Delta^2\right) \quad (9)$
where $w_{1,2,3,4}$ are the weights of the window sizes $\alpha$ and $\beta$, the luminance mean $I$ and the rate of change $\Delta$, respectively; $\delta_\alpha^2$, $\delta_\beta^2$, $\delta_I^2$ and $\delta_\Delta^2$ represent the observed differences between adjacent states, and $\sigma_\alpha^2$, $\sigma_\beta^2$, $\sigma_I^2$ and $\sigma_\Delta^2$ are the variances of those observations. From equation (9) it can be seen that the transition matrix between adjacent frames is also dynamic, adaptively adjusted according to changes in the observations. The ST-HMM is thus constructed, and decoding can still be realized by the Viterbi algorithm.
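Under this reading of equation (9), the transition weight between a window of the previous frame and one of the current frame can be sketched as below; the per-feature weights and variances are placeholders, and the scores would be normalized over the candidate windows of the current frame to form a stochastic transition row.
```python
import numpy as np


def t_hmm_transition(win_prev, win_cur,
                     w=(0.25, 0.25, 0.25, 0.25),
                     sig=(4.0, 4.0, 100.0, 1.0)):
    # Each window is (alpha, beta, I, delta): width, height, mean
    # light-bar luminance and rate of change. One kernel per feature.
    deltas = np.asarray(win_cur, float) - np.asarray(win_prev, float)
    return float(sum(wi * np.exp(-(d * d) / s)
                     for wi, d, s in zip(w, deltas, sig)))
```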
The fifth step: three-dimensional reconstruction
The video information collected by the structured light sensor is input into the constructed ST-HMM, and the output is a series of smooth scanning trajectories of light-bar center lines, which is also the global optimal solution of the model. The combination of temporal and spatial information processed by the ST-HMM is the space-time context, and the model fully respects the spatio-temporal constraints of the objective world. According to the sensor's intrinsic calibration equation, the extracted and tracked light-bar trajectory is converted through the image coordinate system, the sensor coordinate system and the world coordinate system to obtain point cloud data of the snow carving surface and form the 3D shape; the experimental result is shown in fig. 6, completing the measurement process.
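The patent relies on the sensor's intrinsic calibration equation without stating it. A common line-laser model back-projects each center-line pixel through the camera intrinsics and intersects the viewing ray with the calibrated laser plane; the intrinsics, plane parameters and sensor-to-world pose below are illustrative assumptions.
```python
import numpy as np


def pixel_to_world(u, v, K, plane, R=np.eye(3), t=np.zeros(3)):
    # Viewing-ray direction of pixel (u, v) in the camera frame.
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    n, d = np.asarray(plane[0], float), float(plane[1])
    # Intersect the ray s*ray with the laser plane n . X = d.
    s = d / (n @ ray)
    X_cam = s * ray                    # point on the snow surface
    return R @ X_cam + t               # sensor frame -> world frame


# Illustrative use: turn one decoded center line into 3D points.
K = np.array([[1200.0, 0.0, 640.0],
              [0.0, 1200.0, 360.0],
              [0.0, 0.0, 1.0]])
plane = (np.array([0.0, 0.707, 0.707]), 0.5)   # assumed laser plane
centerline = [300.2, 300.9, 301.5]             # row per column (example)
cloud = [pixel_to_world(col, row, K, plane)
         for col, row in enumerate(centerline)]
```
Accumulating such points over all frames of the tracked sequence yields the point cloud of the snow carving surface.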
It should be noted that the detailed description is only for explaining and explaining the technical solution of the present invention, and the scope of protection of the claims is not limited thereby. It is intended that all such modifications and variations be included within the scope of the invention as defined in the following claims and the description.

Claims (7)

1. A structured light measurement method for snow carving 3D reconstruction, characterized by comprising the following steps:
step one: firstly, arranging a structured light sensor consisting of a camera and a line laser; then calibrating the structured light sensor with the Zhang Zhengyou calibration method; finally, collecting a video sequence mixed with noise;
step two: distinguishing the environmental noise in the video sequence by a maximum likelihood estimation method;
step three: constructing an RGB weight function to perform a color space transformation and generate a monochromatic value image;
step four: carrying out threshold segmentation on the monochromatic value image;
step five: performing edge detection on the monochromatic value image, quantizing the light bars into point pairs $([t, p_n(t)], [t, p_{n+1}(t)])$, and calculating the position of the luminance center of gravity by the center-of-gravity method as a potential light-bar position state $x_n(t)$, where $n$ denotes the state index at step $t$, $n \in [1, X(t)]$, and $X(t)$ is the total number of state quantities at step $t$; defining the distance between the point pair as the observation $y_n(t) = |p_n(t) - p_{n+1}(t)|$ to obtain the spatial context, where $[t, p_n(t)]$ are the coordinates of an edge pixel at step $t$;
step six: establishing inter-frame association using the spatial constraints of the spatial context; then constructing the parameters: initial probability distribution, state transition probability distribution and observation probability distribution; finally constructing an HMM on the spatial axis, namely the S-HMM;
step seven: maximizing the probability of generating the observation sequence $Y = \{y(1), \ldots, y(N)\}$ from step five, and extracting the light-bar center line through Viterbi decoding;
step eight: establishing a comprehensive global-plus-regional search strategy, and defining the suspected light-bar regions as a series of sub-windows $w_n(\alpha, I, \Delta, s)$, where $\alpha$ is the window size, $I$ denotes the average light-bar luminance within the window, $\Delta$ is the rate of change relative to the previous frame's tracking result, and $s$ is a scale factor;
step nine: defining each window as a state quantity of the current frame, taking the characteristic parameters within the window as observations, and constructing the parameters: initial probability distribution, state transition probability distribution and observation probability distribution; then constructing an HMM on the time axis, namely the T-HMM, and obtaining the optimal light-bar tracking trajectory through Viterbi decoding;
step ten: inputting the video information collected by the structured light sensor into the constructed ST-HMM, solving the global optimal solution of the model, and outputting the result, namely a series of smooth scanning trajectories of light-bar center lines;
step eleven: converting the extracted and tracked light-bar trajectory through the image coordinate system, the sensor coordinate system and the world coordinate system to obtain point cloud data of the snow carving surface and form the 3D shape.
2. The structured light measurement method for snow carving 3D reconstruction according to claim 1, characterized in that the maximum likelihood estimation method comprises the following specific steps: first, representing the various noises by parameter vectors $\theta_1, \theta_2, \ldots, \theta_c$; then constructing a sample set $D = \{D_1, D_2, \ldots, D_c\}$ from a number of calibrated noise images; and solving for the parameter vector at which the distribution density $p(D \mid \theta)$ reaches its maximum,
$\hat{\theta} = \arg\max_{\theta} p(D \mid \theta),$
which is the estimated noise.
3. The structured light measurement method for snow carving 3D reconstruction according to claim 1, characterized in that the monochromatic value image is a gray-scale-type image whose RGB weight coefficients $(w_r, w_g, w_b)$ are decided by the maximum likelihood noise classification result, and the conversion relation between the original color image and the monochromatic value image is $F_{ij} = w_r R_{ij} + w_g G_{ij} + w_b B_{ij}$,
where $F$ is the monochromatic value image obtained after conversion, and $i$ and $j$ denote the row and column indices of a pixel, respectively.
4. The structured light measurement method for snow carving 3D reconstruction according to claim 3, characterized in that the weight function is defined using the kurtosis concept, with the formula
$K = \dfrac{\kappa_4}{\kappa_2^2} = \dfrac{\mu_4}{\sigma^4} - 3,$
where $\kappa_4$ is the fourth-order cumulant of the luminance distribution, $\kappa_2^2$ is the square of the second-order cumulant $\kappa_2$, and $\mu_4$ and $\sigma^2$ are the fourth-order central moment of the image luminance and the variance of its probability distribution, respectively.
5. The structured light measurement method for snow carving 3D reconstruction according to claim 4, characterized in that the weight function is solved with a minimum entropy model, by letting
$\dfrac{\partial K}{\partial w_r} = \dfrac{\partial K}{\partial w_g} = \dfrac{\partial K}{\partial w_b} = 0$
to solve for the maximum of $K$.
6. The structured light measurement method for snow carving 3D reconstruction according to claim 1, characterized in that the specific steps of constructing the parameters in step six are as follows:
(1) the probability of generating the observation sequence $Y$ is defined as
$P(Y) = \pi(x(1))\, b(y(1)) \prod_{t=2}^{N} a_{x(t-1)x(t)}\, b(y(t));$
(2) the initial probability distribution is defined as an equiprobable distribution, i.e.
$\pi_n(1) = 1/X(1);$
(3) the observation probability equation is
$b(y_n(t)) = \begin{cases} 1, & 0 < y_n(t) \le T_e \\ 0, & \text{otherwise,} \end{cases}$
where $T_e$ is an empirically defined upper width threshold;
(4) the state transition probability is
$a_{ij} = w_1 \exp\!\left(-d_{ij}^2/\sigma_d^2\right) + w_2 \exp\!\left(-k_{ij}^2/\sigma_k^2\right),$
where $d_{ij}$ is the Euclidean distance between states and $k_{ij}$ is the difference of the luminance means between states; $\sigma_d^2$ and $\sigma_k^2$ are the variances of $d_{ij}$ and $k_{ij}$, respectively; $w_1$ and $w_2$ are a pair of weights obtained by training data with maximum likelihood estimation or maximum a posteriori estimation, with $w_1 + w_2 = 1$.
7. The structured light measurement method for snow carving 3D reconstruction according to claim 1, characterized in that the specific steps of constructing the parameters in step nine are as follows:
(1) the initial probability distribution is defined as an equiprobable distribution, i.e.
$\pi_n(1) = 1/X(1);$
(2) the observation probability equation is
$b(y_n(t)) = \begin{cases} 1, & 0 < y_n(t) \le T_e \\ 0, & \text{otherwise,} \end{cases}$
where $T_e$ is an empirically defined upper width threshold;
(3) the state transition probability is
$a_{ij} = w_1 \exp\!\left(-\delta_\alpha^2/\sigma_\alpha^2\right) + w_2 \exp\!\left(-\delta_\beta^2/\sigma_\beta^2\right) + w_3 \exp\!\left(-\delta_I^2/\sigma_I^2\right) + w_4 \exp\!\left(-\delta_\Delta^2/\sigma_\Delta^2\right),$
where $w_{1,2,3,4}$ are the weights of the window sizes $\alpha$ and $\beta$, the luminance mean $I$ and the rate of change $\Delta$, respectively; $\delta_\alpha^2$, $\delta_\beta^2$, $\delta_I^2$ and $\delta_\Delta^2$ represent the observed differences between adjacent states, and $\sigma_\alpha^2$, $\sigma_\beta^2$, $\sigma_I^2$ and $\sigma_\Delta^2$ are the variances of those observations.
CN201911405700.8A 2019-12-30 2019-12-30 Structured light measuring method for snow carving 3D reconstruction Active CN111145344B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911405700.8A CN111145344B (en) 2019-12-30 2019-12-30 Structured light measuring method for snow carving 3D reconstruction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911405700.8A CN111145344B (en) 2019-12-30 2019-12-30 Structured light measuring method for snow carving 3D reconstruction

Publications (2)

Publication Number Publication Date
CN111145344A (en) 2020-05-12
CN111145344B CN111145344B (en) 2023-03-28

Family

ID=70522801

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911405700.8A Active CN111145344B (en) 2019-12-30 2019-12-30 Structured light measuring method for snow carving 3D reconstruction

Country Status (1)

Country Link
CN (1) CN111145344B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2007237336A1 (en) * 2001-05-15 2008-01-10 Psychogenics Inc. Systems and methods for monitoring behaviour informatics
US20040220769A1 (en) * 2003-05-02 2004-11-04 Yong Rui System and process for tracking an object state using a particle filter sensor fusion technique
US20100310157A1 (en) * 2009-06-05 2010-12-09 Samsung Electronics Co., Ltd. Apparatus and method for video sensor-based human activity and facial expression modeling and recognition
CN102110296A (en) * 2011-02-24 2011-06-29 上海大学 Method for tracking moving target in complex scene
JP2013207530A (en) * 2012-03-28 2013-10-07 Sony Corp Information processing device, information processing method and program
US20160305884A1 (en) * 2013-12-23 2016-10-20 Max-Planck-Gesellschaft Zur Foerderung Der Wissenschaften E.V. High-resolution fluorescence microscopy using a structured beam of excitation light
CN104392414A (en) * 2014-11-04 2015-03-04 河海大学 Establishing method of regional CORS coordinate time series noise model
CN104457741A (en) * 2014-12-08 2015-03-25 燕山大学 Human arm movement tracing method based on ant colony algorithm error correction
CN106096615A (en) * 2015-11-25 2016-11-09 北京邮电大学 A kind of salient region of image extracting method based on random walk
CN105842642A (en) * 2016-03-17 2016-08-10 天津大学 Fractional anisotropy microstructure characteristic extraction method based on kurtosis tensor and apparatus thereof
CN107680120A (en) * 2017-09-05 2018-02-09 南京理工大学 Tracking Method of IR Small Target based on rarefaction representation and transfer confined-particle filtering

Non-Patent Citations (11)

* Cited by examiner, † Cited by third party
Title
XIAOLIN ZHOU et al.: "Route matching research based on roadless navigation data improvements on hidden Markov model", CSAE2019 *
张焕龙 et al.: "Research on video tracking methods based on region covariance matrices and 2DPCA learning", Computer Science *
李国友 et al.: "Improvement and implementation of a Kinect-based dynamic gesture recognition algorithm", High Technology Letters *
杨登科: "Discussion of the optimal noise model variation in each component direction of the coordinate time series of IGS reference stations in Europe", GNSS World of China *
林晓 et al.: "Research on 3D object reconstruction algorithms from point clouds based on adaptive weights", Journal of Graphics *
梅天灿, 仲思东, 何对燕: "Structured light stripe detection under variable ambient illumination" *
陆兵 et al.: "Target tracking algorithm based on hidden Markov model and block feature matching", Laser & Optoelectronics Progress *
陈彦军 et al.: "Research on 3D reconstruction of animal viscera based on structured light technology", Intelligent Computer and Applications *
陈晨 et al.: "Research on analysis of coordinate time series of Hong Kong CORS stations", GNSS World of China *
马波 et al.: "Kalman snake tracking based on HMM", Journal of Computer-Aided Design & Computer Graphics *
高洪涛 et al.: "A new kurtosis-based method for resolving overlapped peaks", Chinese Journal of Analytical Chemistry *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114164790A (en) * 2021-12-27 2022-03-11 哈尔滨职业技术学院 Intelligent pavement ice and snow clearing and compacting equipment and using method thereof
CN114164790B (en) * 2021-12-27 2022-05-10 哈尔滨职业技术学院 Intelligent pavement ice and snow clearing and compacting equipment and using method thereof

Also Published As

Publication number Publication date
CN111145344B (en) 2023-03-28


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant