CN114332755B - Power generation incinerator monitoring method based on binocular three-dimensional modeling - Google Patents

Power generation incinerator monitoring method based on binocular three-dimensional modeling

Info

Publication number
CN114332755B
CN114332755B (application CN202111529937.4A)
Authority
CN
China
Prior art keywords
image
binocular
power generation
coordinate system
camera
Prior art date
Legal status
Active
Application number
CN202111529937.4A
Other languages
Chinese (zh)
Other versions
CN114332755A (en)
Inventor
刘涛
戴苗武
陈金浩
Current Assignee
Nanjing Hanyuan Technology Co ltd
Original Assignee
Nanjing Hanyuan Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Nanjing Hanyuan Technology Co., Ltd.
Priority to CN202111529937.4A
Publication of CN114332755A
Application granted
Publication of CN114332755B
Legal status: Active

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a power generation incinerator monitoring method based on binocular three-dimensional modeling, comprising the following steps: collecting images of the combustion state of combustible material in the power generation incinerator and preprocessing the images; recognizing the combustible material in the preprocessed images with a recognition algorithm and reconstructing the volume distribution of the material burning inside the furnace with a three-dimensional reconstruction algorithm; and outputting data on the fill state of the burning material in the furnace according to the reconstructed volume distribution, thereby monitoring the power generation incinerator. The method reflects the actual working conditions inside the furnace and, from a visual perspective, helps staff reasonably schedule when and how the feeder operates, thereby improving production efficiency.

Description

Power generation incinerator monitoring method based on binocular three-dimensional modeling
Technical Field
The invention relates to the technical field of real-time incinerator monitoring, and in particular to a power generation incinerator monitoring method based on binocular three-dimensional modeling.
Background
In the prior art, infrared-based TOF and structured-light sensors cannot reconstruct burning material, because the burning material continuously emits infrared radiation that interferes with the sensor's imaging, whereas a binocular camera works in the visible band and its depth reconstruction quality is stable relative to other sensors. Lidar is costly and unsuited to this scene. A method is therefore needed to monitor the fill capacity of a power generation incinerator in real time and efficiently, and to improve the intelligence of the power generation incinerator.
Disclosure of Invention
This section is intended to outline some aspects of embodiments of the invention and to briefly introduce some preferred embodiments. Some simplifications or omissions may be made in this section, in the abstract and in the title of the application to avoid obscuring their purpose; these simplifications or omissions should not be used to limit the scope of the invention.
The present invention has been made in view of the above-described problems occurring in the prior art.
Therefore, the technical problem solved by the invention is that the prior art cannot reconstruct burning material, or is costly, unsuited to the scene, and inefficient in production.
In order to solve the above technical problem, the invention provides the following technical scheme: collecting images of the combustion state of combustible material in the power generation incinerator and preprocessing the images; recognizing the combustible material in the preprocessed images with a recognition algorithm and reconstructing the volume distribution of the material burning inside the furnace with a three-dimensional reconstruction algorithm; and outputting data on the fill state of the burning material in the furnace according to the reconstructed volume distribution, thereby monitoring the power generation incinerator.
As a preferable scheme of the binocular three-dimensional modeling-based power generation incinerator monitoring method of the invention: the image preprocessing process comprises collecting a preset number of pictures under normal lighting to construct a training image enhancement data set; training with a self-supervised learning method to obtain adaptive light-supplementing and denoising image enhancement results; and realizing automatic brightness and detail enhancement of low-light images with a low-light image enhancement network based on multi-exposure image depth fusion.
As a preferable scheme of the binocular three-dimensional modeling-based power generation incinerator monitoring method of the invention: the formula for generating the multi-exposure images is:

I_i = min(k_i * I_orig, 1), i ∈ (1, n)

wherein I_i is the exposure-enhanced image, I_orig is the original low-light image, k_i is one of n exposure factors selected between the low-light image and the reference image, and the min operation prevents the image from becoming too bright through overexposure;

the fused features are generated as follows:

f_max = max(f_1, f_2, …, f_n)
f_fusion = w * concat(f_max, f_avg)

wherein f_max is the maximum-filtering feature, f_avg is the average-filtering feature, f_fusion is the fused feature, and concat is the merging operation in the neural network;

the loss function of the neural network training adopts the L1-norm loss between the output illumination-enhanced image and the standard reference image:

I_high = Exfusion(I_ex1, I_ex2, …, I_exn)
loss_Exfusion = ||I_high - I_gt||_1

wherein I_high is the illumination-enhanced image, I_ex1 ~ I_exn are the n images with increased exposure values, Exfusion is the whole illumination enhancement network, loss_Exfusion is the loss function of the multi-exposure image fusion network, I_gt is the standard reference image, and the ellipsis denotes the omitted intermediate terms.
As a preferable scheme of the binocular three-dimensional modeling-based power generation incinerator monitoring method of the invention: the method further comprises a binocular camera serving as the collection tool of the combustion-state images of the combustible material in the power generation incinerator; the binocular camera calibration algorithm in the binocular camera is defined as follows: the internal parameters of the binocular camera system are defined as (f_x, f_y, k_1, k_2, k_3, p_1, p_2, u_0, v_0) and the external parameters of the binocular camera system are (R, T); the calibration parameters are written into the camera parameter configuration file, and coordinate conversion is completed with a binocular calibration strategy.
As a preferable scheme of the binocular three-dimensional modeling-based power generation incinerator monitoring method of the invention: the binocular calibration strategy comprises constructing a monocular imaging model, the monocular imaging model comprising a world coordinate system, a camera coordinate system, an image coordinate system and a pixel coordinate system; the conversion of the monocular imaging model from the world coordinate system to the pixel coordinate system is:

Z_c * [u, v, 1]^T = M * [R_3×3 T_3×1] * [X_w, Y_w, Z_w, 1]^T

wherein M is the camera intrinsic (reference) matrix, [R_3×3 T_3×1] is the extrinsic matrix, (X_w, Y_w, Z_w) are world coordinate system coordinates, and (u, v) are pixel coordinate system coordinates.
As a preferable scheme of the binocular three-dimensional modeling-based power generation incinerator monitoring method of the invention: tangential and radial distortion produced by the camera are corrected, the correction formula comprising, with r² = x² + y²:

x' = x(1 + k_1 r² + k_2 r⁴ + k_3 r⁶) + 2 p_1 x y + p_2 (r² + 2x²)
y' = y(1 + k_1 r² + k_2 r⁴ + k_3 r⁶) + p_1 (r² + 2y²) + 2 p_2 x y
as a preferable scheme of the binocular three-dimensional modeling-based power generation incinerator monitoring method, the invention comprises the following steps: according to the coordinate axis mapping relation of the pixel coordinate system and the world coordinate system and defining Z of the calibration chessboard plane in the world coordinate system w =0, simplifying the conversion relationship includes,
the homography matrix to be calibrated is as follows:
the formula for simplifying the coordinate relation is as follows:
as a preferable scheme of the binocular three-dimensional modeling-based power generation incinerator monitoring method, the invention comprises the following steps: the left binocular image and the right binocular image of the binocular camera are subjected to three-dimensional matching algorithm pixel-by-pixel calculation by utilizing a semi-global matching algorithm, wherein the three-dimensional matching algorithm comprises the steps of matching by utilizing a pixel point matching function based on mutual information MI:
wherein q is the pixel point to be matched under different parallax values of p in the polar line direction;
adding a local parallax smoothing term to the matching cost of each pixel point to construct a matching energy function for reducing the matching ambiguity, wherein the energy function is in the form of:
wherein, P1 and P2 are parallax jump penalty coefficients, P1< P2, for constraining pixel parallax local smoothness;
iteratively updating the matching energy function of each pixel point under different parallax values by using a one-dimensional dynamic programming algorithm;
and (5) parallax refining.
As a preferable scheme of the binocular three-dimensional modeling-based power generation incinerator monitoring method of the invention: the disparity refinement comprises calculating, based on a winner-takes-all strategy, the disparity value corresponding to the minimum aggregated cost of each pixel; detecting and removing mismatched points according to the disparity-consistency principle between corresponding matching points in the left and right views; and computing sub-pixel disparity by applying a local bilinear interpolation model to the disparity value d.
As a preferable scheme of the binocular three-dimensional modeling-based power generation incinerator monitoring method of the invention: based on the disparity map d, the camera calibration parameters and the binocular baseline distance B, the depth value of an image pixel is calculated from the triangulation geometry:

Z = Bf/d

and the three-dimensional coordinates (X, Y, Z) in the corresponding camera coordinate system are recovered from the pixel (u, v, 1) with the camera intrinsics:

X = (u - u_0) Z / f_x,  Y = (v - v_0) Z / f_y
the invention has the beneficial effects that: the method reflects the working condition in the actual combustion furnace, and from the visual angle, the method assists the staff to reasonably arrange the movement time and the mode of the feeder to work, thereby improving the production efficiency.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art. Wherein:
FIG. 1 is a schematic flow diagram of a three-dimensional reconstruction algorithm of a method for monitoring a power generation incinerator based on binocular three-dimensional modeling according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the power generation incinerator monitoring method based on binocular three-dimensional modeling according to an embodiment of the present invention;
fig. 3 is a diagram of a low-light image enhancement network structure of multi-exposure image depth fusion of a power generation incinerator monitoring method based on binocular three-dimensional modeling according to an embodiment of the present invention.
Detailed Description
So that the manner in which the above recited objects, features and advantages of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to the embodiments, some of which are illustrated in the appended drawings. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced in other ways other than those described herein, and persons skilled in the art will readily appreciate that the present invention is not limited to the specific embodiments disclosed below.
Further, reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic can be included in at least one implementation of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments.
While the embodiments of the present invention have been illustrated and described in detail in the drawings, the cross-sectional view of the device structure is not to scale in the general sense for ease of illustration, and the drawings are merely exemplary and should not be construed as limiting the scope of the invention. In addition, the three-dimensional dimensions of length, width and depth should be included in actual fabrication.
Also in the description of the present invention, it should be noted that the orientation or positional relationship indicated by the terms "upper, lower, inner and outer", etc. are based on the orientation or positional relationship shown in the drawings, are merely for convenience of describing the present invention and simplifying the description, and do not indicate or imply that the apparatus or elements referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first, second, or third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
The terms "mounted, connected, and coupled" should be construed broadly in this disclosure unless otherwise specifically indicated and defined, such as: can be fixed connection, detachable connection or integral connection; it may also be a mechanical connection, an electrical connection, or a direct connection, or may be indirectly connected through an intermediate medium, or may be a communication between two elements. The specific meaning of the above terms in the present invention will be understood in specific cases by those of ordinary skill in the art.
Example 1
Referring to fig. 1 to 3, for one embodiment of the present invention, there is provided a power generation incinerator monitoring method based on binocular three-dimensional modeling, including:
s1: and collecting an image of the combustion state of the combustible in the power generation incinerator and preprocessing the image.
The collection tool of the combustion-state images of the combustible material in the power generation incinerator is a binocular camera;
the binocular camera calibration algorithm in the binocular camera is defined as follows:
the internal parameters of the binocular camera system are defined as (f_x, f_y, k_1, k_2, k_3, p_1, p_2, u_0, v_0) and the external parameters of the binocular camera system are (R, T);
the calibration parameters are written into the camera parameter configuration file, and coordinate conversion is completed with the binocular calibration strategy.
Wherein, the binocular calibration strategy comprises:
constructing a monocular imaging model, wherein the monocular imaging model comprises a world coordinate system, a camera coordinate system, an image coordinate system and a pixel coordinate system;
the conversion of the monocular imaging model from the world coordinate system to the pixel coordinate system is:

Z_c * [u, v, 1]^T = M * [R_3×3 T_3×1] * [X_w, Y_w, Z_w, 1]^T

wherein M = [[f_x, 0, u_0], [0, f_y, v_0], [0, 0, 1]] is the camera intrinsic (reference) matrix, [R_3×3 T_3×1] is the extrinsic matrix, (X_w, Y_w, Z_w) are world coordinate system coordinates, and (u, v) are pixel coordinate system coordinates.
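A minimal numerical sketch of this world-to-pixel projection is given below; the intrinsic values, rotation and translation are invented placeholders, not values from the patent.

```python
import numpy as np

# Placeholder intrinsics and extrinsics (assumptions, not calibrated values).
fx, fy, u0, v0 = 800.0, 800.0, 640.0, 360.0
M = np.array([[fx, 0.0, u0],
              [0.0, fy, v0],
              [0.0, 0.0, 1.0]])            # intrinsic matrix
R = np.eye(3)                              # rotation, world -> camera
T = np.array([0.0, 0.0, 2000.0])           # translation (e.g. in mm)

def world_to_pixel(p_world):
    """Project a 3-D world point (Xw, Yw, Zw) to pixel coordinates (u, v)."""
    p_cam = R @ p_world + T                # world -> camera coordinates
    uvw = M @ p_cam                        # apply the intrinsic matrix
    return uvw[:2] / uvw[2]                # divide by Zc: homogeneous -> pixel

print(world_to_pixel(np.array([100.0, 50.0, 0.0])))
```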
However, because lens machining and imaging introduce tangential and radial distortion, the coordinates x', y' are corrected by the following formula, with r² = x² + y²:

x' = x(1 + k_1 r² + k_2 r⁴ + k_3 r⁶) + 2 p_1 x y + p_2 (r² + 2x²)
y' = y(1 + k_1 r² + k_2 r⁴ + k_3 r⁶) + p_1 (r² + 2y²) + 2 p_2 x y

According to the mapping between the axes of the pixel coordinate system and the world coordinate system, and defining Z_w = 0 for the calibration chessboard plane in the world coordinate system, the simplified conversion is:

Z_c * [u, v, 1]^T = M * [r_1 r_2 t] * [X_w, Y_w, 1]^T

The homography matrix to be calibrated is:

H = M * [r_1 r_2 t]

The simplified coordinate relation is:

Z_c * [u, v, 1]^T = H * [X_w, Y_w, 1]^T
further, the Zhang Zhengyou camera calibration method is adopted to obtain the internal and external parameters:
(1) And respectively shooting checkerboard calibration pictures attached to a fixed plane at different angles and distances by using a binocular camera.
(2) For each shot checkerboard picture, feature points (corner points) of all checkerboards in the picture are detected.
(3) And (3) knowing the pixel coordinates of the angular points and the space coordinates of the checkerboard, optimizing by using a maximum likelihood estimation method, and calculating to obtain an internal parameter M and an external parameter R of the camera.
(4) And deducing and solving the internal parameters and the external parameters of the camera based on the length invariance of the homography matrix rotation vector and the dot product characteristic of the rotation vector.
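A possible implementation of steps (1) to (4) is sketched below using OpenCV's implementation of Zhang's method; the board size, square size and image paths are assumptions, not values from the patent.

```python
import glob
import cv2
import numpy as np

board_size = (9, 6)            # inner corners per row and column (assumption)
square_size = 25.0             # chessboard square edge in mm (assumption)

objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square_size

obj_points, img_points = [], []
for path in glob.glob("calib/left_*.png"):                          # step (1): captured views
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, board_size)    # step (2): corner detection
    if found:
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
        corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
        obj_points.append(objp)
        img_points.append(corners)

# steps (3)-(4): maximum-likelihood refinement of the intrinsic matrix M, the
# distortion coefficients (k1, k2, p1, p2, k3) and the per-view extrinsics (R, T)
rms, M, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection RMS error:", rms)
```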
Further, a binocular imaging device acquires real-time binocular images of the power generation incinerator, which are transmitted through an intelligent gateway and a router to an edge computing server for use and storage. To improve the accuracy and efficiency of the stereo matching algorithm, a background image-correction algorithm performs image correction on the images to be matched: first, image contrast enhancement and image denoising are performed; next, image distortion removal is applied based on the calibration parameters (f_x, f_y, k_1, k_2, k_3, p_1, p_2, u_0, v_0); finally, binocular stereo rectification is performed with (R, T) and the intrinsic matrix M. The rectified images are used for the fast stereo matching computation and, for convenient tracing of historical data, an NVR is configured to store the video data and the rectified image data locally.
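A sketch of the undistortion and stereo rectification step is given below; the calibration values are placeholders standing in for the real rig's parameters (f_x, f_y, k_1, ..., u_0, v_0) and (R, T).

```python
import cv2
import numpy as np

# Placeholder calibration results; on the real system these come from stereo calibration.
img_size = (1280, 720)
M1 = M2 = np.array([[800., 0., 640.], [0., 800., 360.], [0., 0., 1.]])
d1 = d2 = np.zeros(5)                       # (k1, k2, p1, p2, k3)
R = np.eye(3)                               # rotation between the two cameras
T = np.array([[-120.0], [0.0], [0.0]])      # ~120 mm baseline (assumption)

R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(M1, d1, M2, d2, img_size, R, T)
map1x, map1y = cv2.initUndistortRectifyMap(M1, d1, R1, P1, img_size, cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(M2, d2, R2, P2, img_size, cv2.CV_32FC1)

left_raw = np.zeros((720, 1280, 3), np.uint8)    # stand in for captured frames
right_raw = np.zeros((720, 1280, 3), np.uint8)

# remove lens distortion and row-align the two views so matching becomes a 1-D search
left_rect = cv2.remap(left_raw, map1x, map1y, cv2.INTER_LINEAR)
right_rect = cv2.remap(right_raw, map2x, map2y, cv2.INTER_LINEAR)
```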
Specifically, the image preprocessing process is as follows. On the actual site, under normal combustion conditions in the furnace, the flame region is bright while the image brightness of the other regions is low, with heavy noise and loss of detail. To better recover the imaging quality inside the furnace, the invention uses a convolutional-neural-network-based method for image enhancement: first, a certain number of pictures are collected under normal lighting (uniform artificial illumination) to build a training image enhancement data set; a self-supervised learning method is used for training to obtain adaptive light-supplementing and denoising enhancement results; and a low-light image enhancement network based on multi-exposure image depth fusion realizes automatic brightness and detail enhancement of low-light images. The structure of the low-light image enhancement network based on multi-exposure image depth fusion is shown in FIG. 3.
More specifically, in its brightness enhancement stage the low-light image enhancement network based on multi-exposure image depth fusion multiplies the low-light image by different exposure enhancement factors k to generate images with different exposure values. The multi-exposure images are generated as shown in the following formula:

I_i = min(k_i * I_orig, 1), i ∈ (1, n)

wherein I_i is the exposure-enhanced image, I_orig is the original low-light image, k_i is one of n exposure factors selected between the low-light image and the reference image, and the min operation prevents the image from becoming too bright through overexposure;
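A short sketch of this multi-exposure generation is shown below; the number of exposures and the range of the factors k_i are assumptions.

```python
import numpy as np

def generate_exposures(img_orig, n=5, k_min=1.0, k_max=8.0):
    """Apply I_i = min(k_i * I_orig, 1) for n exposure factors k_i.

    img_orig is the low-light image normalised to [0, 1]; clipping at 1
    prevents overexposed pixels from exceeding pure white.
    """
    ks = np.linspace(k_min, k_max, n)
    return [np.minimum(k * img_orig, 1.0) for k in ks]

low_light = np.random.rand(720, 1280, 3) * 0.15   # stands in for a dark in-furnace frame
exposures = generate_exposures(low_light)
```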
because the parts with higher exposure quality of each exposure image are different, the ideal parts of each exposure image are fused together to obtain the optimal exposure image, and the brightness enhancement based on the fusion of the multiple exposure images can achieve the aim. The multi-exposure image fusion network essentially comprises a coding and decoding network structure (a downsampling coding extracting feature, an upsampling decoding recovering image according to the feature), and the difference is that each layer of convolution (all convolution layers in the coding and decoding structure) feature images of the multi-exposure image are fused by utilizing a fusion module, each exposure image considers the characteristics of other exposure images when extracting the feature and utilizing the feature, so that the most obvious feature of the exposure image is extracted, finally, the most obvious feature of all exposure images is fused to obtain the most ideal illumination enhancement result, in order to enable the fusion module to accurately recover the image details from a dark area and enable the color distribution to be more similar to that of a true contrast image, each fusion module respectively fuses each branch feature by adopting a structure of maximum filtering (finding more image details) and average filtering (minimizing color offset) on the input multi-branch feature, wherein the generation of the fusion feature is shown in the following formula:
f max =max(f 1 ,f 2 ,L f n )
f fusion =w*concat(f max ,f avg )
wherein f max For maximum filtering characteristics, f avg For averaging filter characteristics, f fusion As fusion characteristics, concat is a merging operation in the neural network;
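One way such a fusion module could look in PyTorch is sketched below: the n branch features are reduced by max filtering and average filtering, concatenated, and weighted by a learnable 1x1 convolution standing in for w. The exact layer layout of the patent's network is not specified, so this is only an illustrative assumption.

```python
import torch
import torch.nn as nn

class FusionBlock(nn.Module):
    """Fuse per-exposure branch features: f_fusion = w * concat(f_max, f_avg)."""

    def __init__(self, channels):
        super().__init__()
        # learnable mixing playing the role of the weight w in the formula
        self.mix = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, branch_feats):
        stacked = torch.stack(branch_feats, dim=0)   # (n, B, C, H, W)
        f_max = stacked.max(dim=0).values            # max filtering: keeps the strongest detail
        f_avg = stacked.mean(dim=0)                  # average filtering: limits colour offset
        return self.mix(torch.cat([f_max, f_avg], dim=1))

feats = [torch.randn(1, 32, 64, 64) for _ in range(5)]   # features of 5 exposure branches
fused = FusionBlock(32)(feats)
```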
the characteristics obtained after the fusion of each fusion block and the special type before the fusion are unified as the input of the next layer convolution, and the low illumination enhancement output based on the multi-exposure image fusion is shown in the following formula:
I high =Exfusion(I ex1 ,I ex2 ,L I exn )
the L1 norm loss between the output illumination enhancement graph and the standard graph is adopted by the network training loss function, and the following formula is shown:
loss Exfusion =||I high -I gt || 1
wherein I is high To enhance images for illumination, I ex1 ~I exn For n images with increased exposure values, the extension is the whole illumination enhancement network, loss Exfusion Loss function for multi-exposure image fusion network, I gt For the standard control image, L is the intermediate omitted variable.
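The corresponding training objective is a plain L1 distance, as sketched below with tensors standing in for the network output and the normally lit reference image.

```python
import torch

i_high = torch.rand(1, 3, 256, 256, requires_grad=True)  # stands in for Exfusion(I_ex1..I_exn)
i_gt = torch.rand(1, 3, 256, 256)                         # normally lit reference image
loss = torch.nn.functional.l1_loss(i_high, i_gt)          # ||I_high - I_gt||_1 (mean-reduced)
loss.backward()
```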
S2: Identify the combustible material in the preprocessed images with a recognition algorithm, and reconstruct the volume distribution of the material burning inside the furnace with a three-dimensional reconstruction algorithm.
On the edge computing server, a stereo matching algorithm is run on the left and right binocular images using a semi-global matching algorithm (Semi-Global Matching, SGM), and the algorithm outputs the disparity result d (the coordinate deviation between the matching points of the left and right images).
The pixel-by-pixel stereo matching of the left and right binocular images of the binocular camera with the semi-global matching algorithm comprises the following steps:
matching with a pixel matching cost based on the mutual information MI:

C_MI(p, d) = -MI(I_1p, I_2q)

wherein q is the pixel to be matched at the different disparity values of p along the epipolar direction;
adding a local disparity smoothing term to the matching cost of each pixel to construct a matching energy function that reduces matching ambiguity, the energy function having the form:

E(D) = Σ_p ( C(p, D_p) + Σ_{q∈N_p} P_1 T[|D_p - D_q| = 1] + Σ_{q∈N_p} P_2 T[|D_p - D_q| > 1] )

wherein P_1 and P_2 are disparity-jump penalty coefficients with P_1 < P_2, used to constrain the local smoothness of the pixel disparity, and T[·] is the indicator function;
iteratively updating the matching energy function of each pixel at different disparity values with a one-dimensional dynamic programming algorithm;
and refining the disparity.
Further, the disparity refinement comprises:
calculating, based on the WTA (winner-takes-all) strategy, the disparity value corresponding to the minimum aggregated cost of each pixel;
detecting and removing mismatched points according to the disparity-consistency principle between corresponding matching points in the left and right views;
and computing sub-pixel disparity by applying a local bilinear interpolation model to the disparity value d.
Further, based on the disparity map d, the camera calibration parameter f = (f_x + f_y)/2 and the binocular baseline distance B (T_y in the extrinsic matrix T), the depth value of an image pixel is calculated from the triangulation geometry:

Z = Bf/d

The three-dimensional coordinates (X, Y, Z) in the corresponding camera coordinate system are recovered from the pixel (u, v, 1) with the camera intrinsics:

X = (u - u_0) Z / f_x,  Y = (v - v_0) Z / f_y
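A sketch of this depth recovery step follows; the focal length, principal point and baseline are placeholders rather than calibrated values.

```python
import numpy as np

fx, fy, u0, v0 = 800.0, 800.0, 640.0, 360.0   # placeholder intrinsics
f = (fx + fy) / 2.0
B = 120.0                                      # baseline, e.g. in mm (assumption)

def disparity_to_points(disp):
    """Z = B*f/d per pixel, then back-projection to camera coordinates (X, Y, Z)."""
    H, W = disp.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    valid = disp > 0
    Z = np.where(valid, B * f / np.where(valid, disp, 1.0), np.nan)
    X = (u - u0) * Z / fx
    Y = (v - v0) * Z / fy
    return np.dstack([X, Y, Z])                # (H, W, 3) point cloud in the camera frame

points = disparity_to_points(np.full((720, 1280), 64.0))
```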
S3: Output data on the fill state of the burning material in the furnace according to the reconstructed volume distribution, thereby monitoring the power generation incinerator.
The invention provides a power generation incinerator monitoring method based on binocular three-dimensional modeling. Three-dimensional visualization based on binocular stereo imaging reflects the actual working conditions inside the furnace and thereby improves production efficiency. The capacity recognition system uses the binocular camera to monitor the combustion state of the combustible material in the furnace at fixed intervals; the internal image data are transmitted to a video intelligent analysis micro-server for recognition and analysis; the volume distribution of the material burning inside the furnace is reconstructed with the three-dimensional reconstruction algorithm; and finally the in-furnace three-dimensional point cloud, visible-light color image and depth image are output to reflect the fill state of the burning material, so that, from a visual perspective, staff are assisted in reasonably scheduling when and how the feeder operates.
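Purely as an assumption, the sketch below illustrates how a reconstructed depth map could be turned into a fill-level figure: the empty-furnace reference depth and the per-pixel footprint are invented quantities, not values given in the patent.

```python
import numpy as np

def material_volume(depth_map, empty_depth_map, fx=800.0, fy=800.0):
    """Approximate the material volume as a sum of per-pixel columns between the
    empty-furnace reference surface and the currently measured surface."""
    thickness = np.clip(empty_depth_map - depth_map, 0.0, None)   # material height per pixel
    pixel_area = (depth_map / fx) * (depth_map / fy)              # rough footprint at depth Z
    return np.nansum(thickness * pixel_area)

estimate = material_volume(np.full((720, 1280), 2500.0), np.full((720, 1280), 3000.0))
print("estimated material volume:", estimate)
```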
Example 2
This embodiment is another embodiment of the present invention and differs from the first embodiment in that it provides a verification test of the power generation incinerator monitoring method based on binocular three-dimensional modeling. To verify and explain the technical effects of the method, this embodiment compares a conventional technical scheme with the method of the present invention, and the test results are compared by scientific demonstration to verify the real effect of the method.
In this example, the capacity of the power generation incinerator is measured in real time with both the conventional method and the present method, and the results are compared in the table below.
Table 1: depth ranging method comparison table.
As the comparison of depth ranging methods in Table 1 shows, the actual operating temperature of the furnace is high. Ranging devices based on an active light source, such as structured-light and time-of-flight depth cameras and lidar, cannot work normally under high-temperature conditions, whereas visible light is little affected by high temperature. Binocular stereo vision, as one of the important research branches of machine vision, has formed a relatively complete theoretical system after years of research and development, and its applicability in various scenes and extreme working environments has been widely verified. Therefore, the furnace material capacity recognition system recovers the spatial information of the material in the furnace based on visible-light binocular stereo vision three-dimensional reconstruction, thereby achieving effective capacity monitoring.
It should be noted that the above embodiments are only for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that the technical solution of the present invention may be modified or substituted without departing from the spirit and scope of the technical solution of the present invention, which is intended to be covered in the scope of the claims of the present invention.

Claims (8)

1. A power generation incinerator monitoring method based on binocular three-dimensional modeling, characterized by comprising the following steps:
collecting images of the combustion state of combustible material in the power generation incinerator and preprocessing the images;
the image preprocessing procedure comprises the steps of:
acquiring a preset number of images under normal lighting to construct a training image enhancement data set;
training with a self-supervised learning method to obtain adaptive light-supplementing and denoising image enhancement results;
realizing automatic brightness and detail enhancement of low-light images with a low-light image enhancement network based on multi-exposure image depth fusion;
the formula for generating the multi-exposure images being:

I_i = min(k_i * I_orig, 1), i ∈ (1, n)

wherein I_i is the exposure-enhanced image, I_orig is the original low-light image, k_i is one of n exposure factors selected between the low-light image and the reference image, and the min operation prevents the image from becoming too bright through overexposure;

the fused features being generated as follows:

f_max = max(f_1, f_2, …, f_n)
f_fusion = w * concat(f_max, f_avg)

wherein f_max is the maximum-filtering feature, f_avg is the average-filtering feature, f_fusion is the fused feature, and concat is the merging operation in the neural network;

the loss function of the neural network training adopting the L1-norm loss between the output illumination-enhanced image and the standard reference image:

I_high = Exfusion(I_ex1, I_ex2, …, I_exn)
loss_Exfusion = ||I_high - I_gt||_1

wherein I_high is the illumination-enhanced image, I_ex1 ~ I_exn are the n images with increased exposure values, Exfusion is the whole illumination enhancement network, loss_Exfusion is the loss function of the multi-exposure image fusion network, I_gt is the standard reference image, and the ellipsis denotes the omitted intermediate terms;

recognizing the combustible material in the preprocessed images with a recognition algorithm, and reconstructing the volume distribution of the material burning inside the furnace with a three-dimensional reconstruction algorithm;
and outputting data on the fill state of the burning material in the furnace according to the reconstructed volume distribution, thereby monitoring the power generation incinerator.
2. The power generation incinerator monitoring method based on binocular three-dimensional modeling according to claim 1, characterized by further comprising:
the collection tool of the combustion-state images of the combustible material in the power generation incinerator being a binocular camera;
defining the binocular camera calibration algorithm in the binocular camera as follows:
defining the internal parameters of the binocular camera system as (f_x, f_y, k_1, k_2, k_3, p_1, p_2, u_0, v_0) and the external parameters of the binocular camera system as (R, T);
and writing the calibration parameters into the camera parameter configuration file and completing coordinate conversion with a binocular calibration strategy.
3. The power generation incinerator monitoring method based on binocular three-dimensional modeling according to claim 2, characterized in that the binocular calibration strategy comprises:
constructing a monocular imaging model, wherein the monocular imaging model comprises a world coordinate system, a camera coordinate system, an image coordinate system and a pixel coordinate system;
the conversion of the monocular imaging model from the world coordinate system to the pixel coordinate system being:

Z_c * [u, v, 1]^T = M * [R_3×3 T_3×1] * [X_w, Y_w, Z_w, 1]^T

wherein M is the camera intrinsic (reference) matrix, [R_3×3 T_3×1] is the extrinsic matrix, (X_w, Y_w, Z_w) are world coordinate system coordinates, and (u, v) are pixel coordinate system coordinates.
4. The power generation incinerator monitoring method based on binocular three-dimensional modeling according to any one of claims 1 to 3, characterized in that tangential and radial distortion produced by the camera are corrected, the correction formula comprising, with r² = x² + y²:

x' = x(1 + k_1 r² + k_2 r⁴ + k_3 r⁶) + 2 p_1 x y + p_2 (r² + 2x²)
y' = y(1 + k_1 r² + k_2 r⁴ + k_3 r⁶) + p_1 (r² + 2y²) + 2 p_2 x y
5. The power generation incinerator monitoring method based on binocular three-dimensional modeling according to claim 4, characterized in that, according to the mapping between the axes of the pixel coordinate system and the world coordinate system, and defining Z_w = 0 for the calibration chessboard plane in the world coordinate system, the simplified conversion comprises:

Z_c * [u, v, 1]^T = M * [r_1 r_2 t] * [X_w, Y_w, 1]^T

the homography matrix to be calibrated being:

H = M * [r_1 r_2 t]

and the simplified coordinate relation being:

Z_c * [u, v, 1]^T = H * [X_w, Y_w, 1]^T
6. The power generation incinerator monitoring method based on binocular three-dimensional modeling according to claim 5, characterized in that the pixel-by-pixel stereo matching of the left and right binocular images of the binocular camera with a semi-global matching algorithm comprises:
matching with a pixel matching cost based on the mutual information MI:

C_MI(p, d) = -MI(I_1p, I_2q)

wherein q is the pixel to be matched at the different disparity values of p along the epipolar direction, and d is the disparity map;
adding a local disparity smoothing term to the matching cost of each pixel to construct a matching energy function that reduces matching ambiguity, the energy function having the form:

E(D) = Σ_p ( C(p, D_p) + Σ_{q∈N_p} P_1 T[|D_p - D_q| = 1] + Σ_{q∈N_p} P_2 T[|D_p - D_q| > 1] )

wherein P_1 and P_2 are disparity-jump penalty coefficients, P_1 < P_2, used to constrain the local smoothness of the pixel disparity, and T[·] is the indicator function;
iteratively updating the matching energy function of each pixel at different disparity values with a one-dimensional dynamic programming algorithm;
and refining the disparity.
7. The power generation incinerator monitoring method based on binocular three-dimensional modeling according to claim 6, characterized in that the disparity refinement comprises:
calculating, based on the winner-takes-all strategy, the disparity value d = argmin_d E(p, d) corresponding to the minimum aggregated cost of each pixel;
detecting and removing mismatched points according to the disparity-consistency principle between corresponding matching points in the left and right views;
and computing sub-pixel disparity by applying a local bilinear interpolation model to the disparity value d.
8. The power generation incinerator monitoring method based on binocular three-dimensional modeling according to claim 7, characterized in that, based on the disparity map d, the camera calibration parameters and the binocular baseline distance B, the depth value of an image pixel is calculated from the triangulation geometry:

Z = Bf/d

and the three-dimensional coordinates (X, Y, Z) in the corresponding camera coordinate system are recovered from the pixel (u, v, 1) with the camera intrinsics:

X = (u - u_0) Z / f_x,  Y = (v - v_0) Z / f_y
CN202111529937.4A 2021-12-06 2021-12-06 Power generation incinerator monitoring method based on binocular three-dimensional modeling Active CN114332755B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111529937.4A CN114332755B (en) 2021-12-06 2021-12-06 Power generation incinerator monitoring method based on binocular three-dimensional modeling


Publications (2)

Publication Number Publication Date
CN114332755A CN114332755A (en) 2022-04-12
CN114332755B (en) 2023-07-25

Family

ID=81050202

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111529937.4A Active CN114332755B (en) 2021-12-06 2021-12-06 Power generation incinerator monitoring method based on binocular three-dimensional modeling

Country Status (1)

Country Link
CN (1) CN114332755B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117495174B (en) * 2023-11-03 2024-07-19 睿智合创(北京)科技有限公司 Foreground data monitoring method and system of scoring card model

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112924033A (en) * 2021-01-26 2021-06-08 广州康达环保技术有限公司 Method for monitoring combustion flame state of garbage incinerator
CN113284251A (en) * 2021-06-11 2021-08-20 清华大学深圳国际研究生院 Cascade network three-dimensional reconstruction method and system with self-adaptive view angle

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111798515B (en) * 2020-06-30 2024-01-12 大连亚泰华光电技术有限公司 Stereoscopic vision monitoring method for recognizing incineration condition
CN111967206B (en) * 2020-08-18 2024-05-28 北京首创环境科技有限公司 Method, system and application for constructing three-dimensional temperature field of waste heat boiler
CN113096029A (en) * 2021-03-05 2021-07-09 电子科技大学 High dynamic range image generation method based on multi-branch codec neural network

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112924033A (en) * 2021-01-26 2021-06-08 广州康达环保技术有限公司 Method for monitoring combustion flame state of garbage incinerator
CN113284251A (en) * 2021-06-11 2021-08-20 清华大学深圳国际研究生院 Cascade network three-dimensional reconstruction method and system with self-adaptive view angle

Also Published As

Publication number Publication date
CN114332755A (en) 2022-04-12


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant