CN108182706A - Monitoring method and system for incinerated material - Google Patents
- Publication number
- CN108182706A CN108182706A CN201711298090.7A CN201711298090A CN108182706A CN 108182706 A CN108182706 A CN 108182706A CN 201711298090 A CN201711298090 A CN 201711298090A CN 108182706 A CN108182706 A CN 108182706A
- Authority
- CN
- China
- Prior art keywords
- point
- image
- neural network
- cause
- dimensional coordinate
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/467—Encoded features or binary features, e.g. local binary patterns [LBP]
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
- Fire-Detection Mechanisms (AREA)
Abstract
The present invention provides a monitoring method and system for incinerated material. The method includes: acquiring the image information of each image acquisition point in a monitoring area; performing image processing on the image information to obtain the two-dimensional coordinates of the fire source point in the image; and obtaining the three-dimensional coordinates of the fire source point by spatial localization, according to the two-dimensional coordinates of the fire source point and the spatial coordinates of the acquisition point. The invention enables accurate segmentation and localization of open waste burning in unstructured environments, and can be combined with existing monitoring systems: by transforming between the spatial coordinates of an installed monocular camera and the coordinates of the fire source point in the acquired picture, the three-dimensional spatial coordinates of the target are obtained. The invention is real-time and locates targets accurately; it can capture pictures of violations in real time and precisely locate the spatial position of illegal burning, providing evidence for law enforcement so that the relevant departments can monitor in real time and enforce the law precisely, which benefits the improvement of air quality.
Description
Technical field
The present invention relates to the field of environmental monitoring, and more particularly to a monitoring method and system for incinerated material.
Background technology
With the acceleration of industrialization, environmental problems have become impossible to ignore, and waste disposal in particular. In the past, waste was simply stacked and landfilled, but this approach carries great hidden dangers. At present, incineration has gradually become the main means of waste disposal in China, but incineration inevitably affects the surrounding environment, and incineration without any treatment is especially harmful. In rural areas in particular, straw is discarded at will and burned in the open in a disorderly fashion, an extremely serious phenomenon. Burning produces large amounts of dust, smoke, carbon monoxide, carbon dioxide, sulfur dioxide and other harmful substances, severely worsening local air quality; it has become one of the main culprits of haze weather, and it also induces respiratory, lung and eye diseases, bringing a series of environmental health problems.
Therefore, accelerating the comprehensive utilization of straw and effectively monitoring its random open burning — with real-time identification and localization, transmission of captured evidence of violations from terminals to a platform, and real-time provision of evidence to law-enforcement officials — is of great significance to the improvement of air quality. However, open burning in unstructured environments is difficult to supervise: the area is large and the burning is highly random, making effective manual supervision impractical. A new technical means is therefore needed that can be combined with existing monitoring systems and, through monocular localization, precisely locate targets, capture pictures of violations in real time and pinpoint the spatial position of illegal burning, so that the relevant departments can enforce the law in real time and precisely.
Invention content
In view of the above deficiencies of the prior art, the present invention provides a monitoring method and system for incinerated material to solve the above technical problems.
The monitoring method for incinerated material provided by the invention includes:
acquiring the image information of each image acquisition point in a monitoring area;
performing image processing on the image information to obtain the two-dimensional coordinates of the fire source point in the image;
obtaining the three-dimensional coordinates of the fire source point by spatial localization, according to the two-dimensional coordinates of the fire source point and the spatial coordinates of the acquisition point.
Further, the image processing includes performing region segmentation on the acquired image information by an SVM to obtain the incinerated-material region, and obtaining the region center of the burning-object image by a region centroid algorithm.
Further, the LBP code of each pixel in the image information is extracted to form an LBP map, the LBP map recording the LBP value and color feature of each pixel;
according to the LBP map, the code values and color features of different object pictures are obtained and labeled;
the code values of the different object pictures are trained to obtain the parameters of an SVM classifier;
pixels of unknown code value are input into the trained SVM classifier for identification to obtain a region segmentation result, the region segmentation result including a fire source region and a background region;
the two-dimensional coordinates of the fire source point in the fire source region are obtained by the region centroid algorithm.
Further, the spatial localization includes: establishing a monocular ranging localization model, which maps the two-dimensional coordinates in the image coordinate system where the fire source point lies to three-dimensional coordinates in a road-surface coordinate system, where x, y, z are the three-dimensional coordinates of the mapped target point in the road-surface coordinate system, x0, y0 are the two-dimensional coordinates of the target point in the image coordinate system, γ is the installation angle of the camera, and zq is the road-surface coordinate of the intersection of the camera optical axis with the road-surface plane.
Further, the spatial position of the fire source point is obtained from the three-dimensional coordinates of the fire source point in the road-surface coordinate system and the installed spatial position of the monocular ranging device.
Further, the image processing also includes:
establishing a deep neural network model;
inputting the image information into the deep neural network model to obtain the probability that the image contains waste-burning characteristic information;
completing the identification of waste burning according to the probability.
Further, the model includes a waste-identification deep neural network submodel, a smoke-identification deep neural network submodel and a flame-identification deep neural network submodel; the waste-burning characteristic information includes waste information, smoke information and flame information.
Further, the image information is input into the deep neural network model to obtain the probabilities that the image contains waste, smoke and flame respectively; each probability is compared with a preset threshold, and the identification of waste burning is completed according to the comparison results.
Further, the method also includes training the deep neural network model, the training including:
obtaining the loss values output by the waste-identification, smoke-identification and flame-identification deep neural network submodels respectively, performing joint training on the three loss values, and back-propagating the new joint-training loss value to the waste-identification, smoke-identification and flame-identification deep neural network submodels.
The present invention also provides a monitoring system for incinerated material, including:
an image acquisition unit for acquiring the image information of each image acquisition point in a monitoring area;
an image processing unit for performing image processing on the image information to obtain the two-dimensional coordinates of the fire source point in the image;
a spatial localization unit for obtaining the three-dimensional coordinates of the fire source point by spatial localization, according to the two-dimensional coordinates of the fire source point and the spatial coordinates of the acquisition point.
Beneficial effects of the present invention: the monitoring method and system for incinerated material of the present invention enable accurate segmentation and localization of open waste burning in unstructured environments, and can be combined with existing monitoring systems. By transforming between the spatial coordinates of an installed monocular camera and the coordinates of the fire source point in the acquired picture, the three-dimensional spatial coordinates of the target are obtained. The invention is real-time and locates targets accurately; it can capture pictures of violations in real time, precisely locate the spatial position of illegal burning, and provide evidence for law enforcement, so that the relevant departments can monitor in real time and enforce the law precisely, which benefits the improvement of air quality.
Description of the drawings
Fig. 1 is a flow diagram of the monitoring method for incinerated material in an embodiment of the present invention.
Fig. 2 is a schematic diagram of the principle of monocular ranging in the monitoring method for incinerated material in an embodiment of the present invention.
Specific embodiment
The embodiments of the present invention are illustrated below by specific examples, from which those skilled in the art can easily understand other advantages and effects of the present invention. The present invention can also be implemented or applied through other different specific embodiments, and the details in this specification can be modified or changed from different viewpoints without departing from the spirit of the present invention. It should be noted that, in the absence of conflict, the following embodiments and the features in the embodiments can be combined with each other.
It should be noted that the drawings provided in the following embodiments only schematically illustrate the basic idea of the present invention. The drawings show only the components related to the present invention rather than being drawn according to the actual number, shape and size of the components in implementation; in actual implementation the form, quantity and proportion of each component may change at will, and the component layout may be more complex.
As shown in Fig. 1, the monitoring method for incinerated material in this embodiment includes:
acquiring the image information of each image acquisition point in a monitoring area;
performing image processing on the image information to obtain the two-dimensional coordinates of the fire source point in the image;
obtaining the three-dimensional coordinates of the fire source point by spatial localization, according to the two-dimensional coordinates of the fire source point and the spatial coordinates of the acquisition point.
In this embodiment, image processing technology is used to achieve dynamic terminal video monitoring. The embodiment uses image processing algorithms to intelligently capture illegal burning and push the results in real time; through monocular localization, targets are precisely located, so that pictures of violations are captured in real time and the fire source point — the spatial position of the illegal burning — is precisely located, enabling the relevant departments to enforce the law in real time and precisely.
The image processing of this embodiment includes performing region segmentation on the acquired image information by an SVM to obtain the incinerated-material region, and obtaining the region center of the burning-object image by a region centroid algorithm. Image acquisition can be completed by an existing monitoring system, such as a dome camera, to obtain on-site pictures of the incinerated material. The incinerated-material region and the background region are obtained by labeling; color features and texture features are extracted from these regions and labeled, and finally the obtained classifier parameters are used to segment the image, achieving accurate separation of the fire target from the background. The region center-point algorithm obtains the two-dimensional image coordinates of the fire origin point in the image coordinate system, and these two-dimensional image coordinates are transformed into three-dimensional spatial coordinates by a corresponding-point calibration method, yielding the accurate spatial position of the fire source relative to the camera.
In this embodiment, region segmentation by SVM (support vector machine) to obtain the burning-object region mainly includes the following steps:
Step 1: Extract the LBP (Local Binary Patterns) code of each pixel to form an LBP map, and extract its color features; the map records the LBP value and color feature of each pixel. By hand-drawing ROI regions, the LBP code maps and color features of flame, smoke and background are obtained and labeled; the code values are then trained to obtain the parameters of the hyperplane function of the SVM classifier, after which the three classes can be identified in the next step.
Step 2: Extract the pixels of unknown code value and feed them into the trained classifier for identification, which separates the pixels into the three kinds of regions: flame, smoke and background.
Step 3: The fire source region and background region separated by the classifier give the segmentation result, after which the region centroid is computed to obtain the region center position in the image coordinate system.
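The LBP encoding of step 1 can be sketched as follows. This is a minimal pure-NumPy illustration of the basic 8-neighbour LBP code, not the patent's exact implementation; practical systems would typically use an optimized library routine.

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour LBP code for each interior pixel.

    Each neighbour greater than or equal to the centre pixel sets one
    bit of the 8-bit code, walking the neighbourhood clockwise.
    """
    g = np.asarray(gray, dtype=np.int32)
    h, w = g.shape
    out = np.zeros((h, w), dtype=np.uint8)
    # Clockwise 8-neighbourhood offsets starting at the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = g[y, x]
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                if g[y + dy, x + dx] >= center:
                    code |= 1 << bit
            out[y, x] = code
    return out

# A perfectly flat patch yields the all-ones code 255 at every
# interior pixel (every neighbour equals the centre); borders stay 0.
flat = np.full((4, 4), 7)
codes = lbp_image(flat)
```

In the embodiment, these per-pixel codes (together with color features) would form the feature vectors that the SVM classifier is trained on.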
Preferably, in this embodiment the image is processed with an image-moments algorithm to compute geometric properties of the region image such as the centroid and orientation; the higher orders of M_pq are rotation-invariant and can be used for image matching. The raw moments can be expressed as:
M_pq = Σ(x = a1..a2) Σ(y = b1..b2) x^p · y^q · f(x, y)
where M_pq is the raw moment of the image, x, y are the positions of the region points, f(x, y) is the corresponding gray value, p, q are the corresponding orders, and a1, a2, b1, b2 are the boundaries of the corresponding region.
The low-order moments M00, M01, M10 can be used to compute the centroid, and after centralization M11, M02, M20 can be used to compute the orientation/angle of the region.
First, the contour information contained in the image is found with the findContour() function; all contours are then traversed and the moments (Moment) of each contour are computed, from which the centroid position of the object is obtained:
x̄ = M10 / M00, ȳ = M01 / M00
where M00, M01 and M10 are the low-order moments in the x and y directions obtained from the formula above.
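The centroid computation above can be illustrated directly from the raw-moment definition. This is a small NumPy sketch operating on a gray-value array rather than on contours (the embodiment uses findContour() from OpenCV; the helper names here are illustrative):

```python
import numpy as np

def raw_moment(f, p, q):
    """Raw image moment M_pq = sum over (x, y) of x**p * y**q * f(x, y)."""
    h, w = f.shape
    ys, xs = np.mgrid[0:h, 0:w]          # pixel coordinate grids
    return float(np.sum((xs ** p) * (ys ** q) * f))

def centroid(f):
    """Region centroid (x_bar, y_bar) = (M10/M00, M01/M00)."""
    m00 = raw_moment(f, 0, 0)
    return raw_moment(f, 1, 0) / m00, raw_moment(f, 0, 1) / m00

# A single bright pixel at (x=3, y=2) puts the centroid exactly there.
img = np.zeros((5, 5))
img[2, 3] = 1.0
cx, cy = centroid(img)
```

The same M00, M10, M01 values are what cv2.moments() would report for the region, so the sketch matches the contour-based route described in the text.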
In this embodiment, the three-dimensional coordinates of the fire source point are obtained by a monocular-camera spatial localization method. First a monocular ranging localization model is established, which maps the two-dimensional coordinates in the image coordinate system where the fire source point lies to three-dimensional coordinates in a road-surface coordinate system, where x, y, z are the three-dimensional coordinates of the mapped target point in the road-surface coordinate system, x0, y0 are the two-dimensional coordinates of the target point in the image coordinate system, γ is the installation angle of the camera, and zq is the road-surface coordinate of the intersection of the camera optical axis with the road-surface plane.
To simplify the model, this embodiment first makes the following assumptions: the road surface is an ideal plane; the camera is distortion-free; and the camera angle and position are fixed and jitter-free. As shown in Fig. 2, according to the pinhole imaging principle the monocular camera model can be simplified to a camera perspective-geometry model. In the figure, O' is the camera focal point, the dotted line represents the optical axis, xOy is the image plane, x″Oy″ is the road plane, and Ox'y'z' is an intermediate conversion coordinate system. The image-plane size is W × H; the horizontal field angle is α; the camera mounting height is L; the camera installation angle is γ. Taking the plane through the focal point and parallel to the image plane as the reference plane (coordinate system), the road-surface normal vector is n = (0, sin γ, cos γ), and the point Q where the optical axis meets the road surface has coordinates Q(0, 0, zq), so the road-plane equation can be expressed as:
y·sin γ + (z − zq)·cos γ = 0 (formula 4)
where zq can be derived from the geometric relationships.
A point P(x0, y0, 0) on the image plane maps to P″(x, y, z); the principal point is O'(0, 0, s), where s can likewise be derived from the geometric relationships, giving the direction vector of the line O'P and hence the line equation. Solving the line equation simultaneously with the plane equation (formula 4) gives the line parameter t; substituting t back into the line equation yields the coordinates P″(x″, y″, z″) of P″ in Oxyz.
The coordinate system is then moved to the principal point O', giving a translation matrix T1; next the coordinate system is rotated about the x-axis by angle γ, giving a rotation matrix Rot; finally the coordinate system is translated by L along the negative z-axis, giving a translation matrix T2. P″ in the Ox″y″z″ coordinate system can then be expressed as:
P″* = (P″, 1) · T1 · Rot · T2 (formula 13)
This finally yields the mapping relation (formula 3) from a point (x0, y0, z0) in the image coordinate system to the road-surface coordinate system (x, y, z), where the resulting (x, y, z) is the three-dimensional coordinate of the target point.
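The mapping above can be illustrated with a simplified numeric sketch of the pinhole ground-plane intersection under the same assumptions (ideal flat ground, distortion-free camera, fixed tilt γ, mounting height L). The frame conventions and function name here are illustrative, not the patent's exact formula (3):

```python
import math

def pixel_to_ground(x0, y0, s, gamma, L):
    """Map an image point to the ground plane for a tilted pinhole camera.

    x0, y0 are pixel offsets from the principal point (y0 positive
    downward), s is the focal length in pixels, gamma the downward tilt
    of the optical axis, L the camera height.  Returns (X, Y) on the
    ground: X lateral offset, Y horizontal distance from the camera base.
    """
    # Ray direction through the pixel, in world coordinates
    # (x right, y forward along the ground, z up; camera at (0, 0, L)).
    dx = x0
    dy = s * math.cos(gamma) - y0 * math.sin(gamma)
    dz = -(s * math.sin(gamma) + y0 * math.cos(gamma))
    if dz >= 0:
        raise ValueError("ray does not hit the ground plane")
    t = L / -dz                      # solve L + t*dz = 0 for z = 0
    return dx * t, dy * t

# The principal point (0, 0) looks along the optical axis, so it must
# map to horizontal distance L / tan(gamma) straight ahead.
X, Y = pixel_to_ground(0.0, 0.0, s=800.0, gamma=math.radians(30), L=5.0)
```

The check on the principal point is a useful sanity test for any implementation of the model: the optical axis hits the ground at exactly L / tan γ in front of the camera.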
In this embodiment, the image processing also includes identifying the burning picture, with the following steps:
establishing a deep neural network model;
inputting the image information into the deep neural network model to obtain the probability that the image contains waste-burning characteristic information;
completing the identification of waste burning according to the probability.
In this embodiment, the deep neural network model mainly includes a waste-identification deep neural network submodel, a smoke-identification deep neural network submodel and a flame-identification deep neural network submodel; the waste-burning characteristic information includes waste information, smoke information and flame information. Because the appearance of burning waste, its smoke and its flames varies with the type of waste, collecting sample photos of burning waste is inherently extremely difficult, and covering all of these variations is even harder. This embodiment therefore first converts the waste-burning identification problem into three subproblems, solved by three subnetworks: one deep neural network for waste identification, one for smoke identification and one for flame identification. The problem each network addresses is comparatively focused, and large numbers of picture samples for each case — waste photos, smoke photos and flame photos — are available from public databases and the Internet, so each network can complete its basic training.
In this embodiment, the image information is input into the deep neural network model to obtain the probabilities that the image contains waste, smoke and flame respectively; each probability is compared with a preset threshold, and basic identification of waste burning is completed according to the comparison results. Waste burning generally exhibits all of the features "waste", "smoke" and "flame", so after a photo of burning waste is input into the three networks, each network outputs a probability value, and only when all three probabilities are large is the probability of waste burning large. Preferably, with the three network outputs denoted Pg, Ps and Pf, the presence of waste burning can be judged by any of the following methods:
1) Pg > threshold1 and Ps + Pf > threshold2
2) Pg * (Ps + Pf) > threshold3
3) Pg * Ps * Pf > threshold4
Waste-burning identification is performed by the above three methods, which can be used alone or in combination; of course, other methods can also be used to process the data and obtain the final waste-burning identification probability.
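The three decision rules above can be sketched as a small helper. The threshold values here are illustrative placeholders, not values taken from the patent:

```python
def burning_detected(pg, ps, pf, method=1,
                     th1=0.8, th2=0.6, th3=0.5, th4=0.3):
    """Combine the waste (pg), smoke (ps) and flame (pf) probabilities.

    Implements the three example decision rules from the embodiment:
    thresholded sum, product-with-sum, and triple product.
    """
    if method == 1:
        return pg > th1 and (ps + pf) > th2
    if method == 2:
        return pg * (ps + pf) > th3
    if method == 3:
        return pg * ps * pf > th4
    raise ValueError("unknown method")

# High waste probability with visible smoke and flame fires the rule;
# a low waste probability suppresses it even with smoke and flame.
hit = burning_detected(0.9, 0.8, 0.7, method=1)
miss = burning_detected(0.2, 0.8, 0.7, method=1)
```

Requiring all three cues, as these rules do, reflects the text's point that the joint probability of waste burning is large only when every subnetwork is confident.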
This embodiment also includes training the deep neural network model. The training includes: obtaining the loss values output by the waste-identification, smoke-identification and flame-identification deep neural network submodels respectively, performing joint training on the three loss values, and back-propagating the new joint loss value to the three submodels. Through joint training, the limited photos of burning waste are fully exploited and the whole network is trained effectively. Although the method described above can basically identify waste burning, better results can be obtained by making full use of the characteristics of waste burning. A deep neural network is trained by back-propagation: during training, the last layer of each network has a loss layer that outputs a loss value for the current sample, and this value is propagated back to the preceding layers to update the network parameters, training the whole network. To enable the waste-identification, smoke-identification and flame-identification submodels to better recognize pictures of burning waste, this embodiment trains the three networks jointly: on the basis of the three individually trained networks, for a sample picture of burning waste, if the loss values output by the networks are Loss_g, Loss_s and Loss_f respectively, the three loss values are fused into a comprehensive Loss_total, which is then propagated back to the three networks to adjust their weight parameters. The finally trained networks then identify according to the basic-identification method above. Preferably, the Loss fusion methods in this embodiment include:
1) Loss_total=Loss_g+Loss_s+Loss_f
Or
2) Loss_total=Loss_g* (Loss_s+Loss_f)
Of course, those skilled in the art will appreciate that the above methods are preferred fusion methods, and the fusion is not limited to the above methods.
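The two fusion rules can be sketched as follows. Plain Python floats stand in here for framework loss tensors; in a real training loop the fused value would be the quantity on which back-propagation is run:

```python
def fuse_losses(loss_g, loss_s, loss_f, method=1):
    """Fuse the three submodel losses into one joint loss.

    method 1: Loss_total = Loss_g + Loss_s + Loss_f
    method 2: Loss_total = Loss_g * (Loss_s + Loss_f)
    The joint value is then back-propagated to all three subnetworks.
    """
    if method == 1:
        return loss_g + loss_s + loss_f
    if method == 2:
        return loss_g * (loss_s + loss_f)
    raise ValueError("unknown method")

total_sum = fuse_losses(0.5, 0.2, 0.3, method=1)   # additive fusion
total_mul = fuse_losses(0.5, 0.2, 0.3, method=2)   # multiplicative fusion
```

Method 2 has the property noted in the text for the decision rules: a small waste loss (a confident waste network) scales down the contribution of the smoke and flame terms.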
Correspondingly, this embodiment also provides a monitoring system for incinerated material, including:
an image acquisition unit for acquiring the image information of each image acquisition point in a monitoring area;
an image processing unit for performing image processing on the image information to obtain the two-dimensional coordinates of the fire source point in the image;
a spatial localization unit for obtaining the three-dimensional coordinates of the fire source point by spatial localization, according to the two-dimensional coordinates of the fire source point and the spatial coordinates of the acquisition point.
The image processing unit in this embodiment includes a processor, a memory, a transceiver and a communication interface. The memory and the communication interface are connected to the processor and the transceiver and complete mutual communication; the memory stores a computer program, the communication interface is used for communication, and the processor and the transceiver run the computer program so that the terminal performs the steps of the above deep-neural-network-based waste-burning recognition method.
In this embodiment, the memory may include random access memory (RAM) and may also include non-volatile memory, for example at least one disk memory.
The above processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc.; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic device, or discrete hardware component.
The above embodiments merely illustrate the principles and effects of the present invention and are not intended to limit it. Anyone familiar with this technology can modify or change the above embodiments without departing from the spirit and scope of the present invention. Accordingly, all equivalent modifications or changes completed by those of ordinary skill in the art without departing from the spirit and technical thought disclosed by the present invention shall be covered by the claims of the present invention.
Claims (10)
1. A monitoring method for incinerated material, characterized by including:
acquiring the image information of each image acquisition point in a monitoring area;
performing image processing on the image information to obtain the two-dimensional coordinates of the fire source point in the image;
obtaining the three-dimensional coordinates of the fire source point by spatial localization, according to the two-dimensional coordinates of the fire source point and the spatial coordinates of the acquisition point.
2. The monitoring method for incinerated material according to claim 1, characterized in that: the image processing includes performing region segmentation on the acquired image information by an SVM to obtain the burning-object region, and obtaining the region center of the burning-object image by a region centroid algorithm.
3. The method for monitoring incinerated matter according to claim 2, characterized by:
extracting the LBP code of each pixel in the image information to form an LBP map, the LBP map recording the LBP value and the color feature of each pixel;
obtaining and labeling the code values and color features of different object images according to the LBP map;
training on the code values of the different object images to obtain the parameters of an SVM classifier;
inputting pixels of unknown code value into the trained SVM classifier for identification to obtain a region segmentation result, the region segmentation result comprising a fire source region and a background region;
obtaining the two-dimensional coordinates of the fire source point in the fire source region by the region centroid algorithm.
4. The method for monitoring incinerated matter according to claim 3, characterized in that the spatial localization comprises: establishing a monocular ranging localization model, the model mapping the two-dimensional coordinates of the fire source point in the image coordinate system to three-dimensional coordinates in a road-surface coordinate system, the monocular ranging localization model being represented by the following formula:
[formula rendered as an image in the original publication and not reproduced here]
where x, y, z are the three-dimensional coordinate values of the target point's mapped point in the road-surface coordinate system, x0 and y0 are the two-dimensional coordinate values of the target point in the image coordinate system, γ is the installation angle of the camera, and zq is the coordinate value, in the road-surface coordinate system, of the intersection of the camera's optical axis with the road-surface plane.
5. The method for monitoring incinerated matter according to claim 4, characterized in that the spatial position of the fire source point is obtained from the three-dimensional coordinates of the fire source point in the road-surface coordinate system and the spatial position at which the monocular ranging device is installed.
6. The method for monitoring incinerated matter according to claim 2, characterized in that the image processing further comprises:
establishing a deep neural network model;
inputting the image information into the deep neural network model to obtain the probability that the image contains waste-incineration feature information;
completing the identification of garbage incineration according to the probability.
7. The method for monitoring incinerated matter according to claim 2, characterized in that the model comprises a garbage identification deep neural network submodel, a smoke identification deep neural network submodel, and a flame identification deep neural network submodel; the waste-incineration feature information comprises garbage information, smoke information, and flame information.
8. The method for monitoring incinerated matter according to claim 2, characterized in that the image information is input into the deep neural network model to obtain, respectively, the probabilities that the image contains garbage, smoke, and flame; each probability is compared with a preset threshold, and the identification of garbage incineration is completed according to the comparison results.
9. The method for monitoring incinerated matter according to claim 8, characterized by further comprising training the deep neural network model, the training comprising:
obtaining, respectively, the loss values output by the garbage identification deep neural network submodel, the smoke identification deep neural network submodel, and the flame identification deep neural network submodel; jointly training on the three loss values; and back-propagating the new loss value from the joint training to the garbage identification deep neural network submodel, the smoke identification deep neural network submodel, and the flame identification deep neural network submodel.
10. A system for monitoring incinerated matter, characterized by comprising:
an image acquisition unit for acquiring the image information of each image acquisition point in a monitored area;
an image processing unit for performing image processing on the image information to obtain the two-dimensional coordinates of the fire source point in the image;
a spatial localization unit for obtaining the three-dimensional coordinates of the fire source point by spatial localization according to the two-dimensional coordinates of the fire source point and the spatial coordinates of the acquisition point.
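Claims 2 and 3 describe a concrete pixel-level pipeline: a per-pixel LBP code map, an SVM trained on labelled code values, and a region-centroid step that yields the fire source point's two-dimensional image coordinates. The sketch below illustrates that pipeline in Python; the hand-rolled 8-neighbour LBP, the intensity feature, and the use of scikit-learn's `SVC` are illustrative assumptions, not the patent's implementation.

```python
import numpy as np
from sklearn.svm import SVC

def lbp_map(gray):
    """8-neighbour LBP code for every pixel (the claim's 'LBP map')."""
    g = np.pad(gray.astype(float), 1, mode="edge")
    c = g[1:-1, 1:-1]
    code = np.zeros(c.shape, dtype=int)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (nb >= c).astype(int) << bit
    return code

def train_pixel_classifier(gray, labels):
    """Fit an SVM on per-pixel (LBP code, intensity) features.
    labels: 1 = fire source region, 0 = background."""
    feats = np.stack([lbp_map(gray).ravel(), gray.ravel()], axis=1)
    clf = SVC(kernel="linear")
    clf.fit(feats, labels.ravel())
    return clf

def fire_source_point(gray, clf):
    """Segment the image with the trained SVM, then return the centroid
    of the fire source region as (x, y) image coordinates."""
    feats = np.stack([lbp_map(gray).ravel(), gray.ravel()], axis=1)
    mask = clf.predict(feats).reshape(gray.shape).astype(bool)
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return float(xs.mean()), float(ys.mean())
```

In a real deployment the features would include the color characteristics the claim mentions rather than a single grayscale intensity.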
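Claim 4's monocular ranging model appears only as a formula image in the original publication, so the exact expression cannot be reproduced here. What follows is a standard pinhole ground-plane back-projection consistent with the quantities the claim names (camera installation angle γ and the intersection of the optical axis with the road plane); the focal length `f`, principal point `(cx, cy)`, and camera height `h` are assumed parameters, not the patent's.

```python
import numpy as np

def pixel_to_road(x0, y0, f, cx, cy, gamma, h):
    """Map image point (x0, y0) to road-plane coordinates.
    f: focal length in pixels; (cx, cy): principal point;
    gamma: downward installation angle of the camera (radians);
    h: camera height above the road (metres).
    Returns (lateral, forward) metres on the road plane, or None if the
    pixel's viewing ray never meets the road."""
    # Viewing ray in camera coordinates (x right, y down, z forward).
    ray = np.array([x0 - cx, y0 - cy, f], dtype=float)
    # Rotate by the installation pitch gamma into road-aligned coordinates.
    cg, sg = np.cos(gamma), np.sin(gamma)
    rot = np.array([[1.0, 0.0, 0.0],
                    [0.0,  cg,  sg],
                    [0.0, -sg,  cg]])
    d = rot @ ray
    if d[1] <= 0:  # ray parallel to or above the road surface
        return None
    t = h / d[1]   # scale the ray so it descends exactly h metres
    return float(d[0] * t), float(d[2] * t)
```

For the principal-point pixel this reduces to a forward distance of h/tan γ, i.e. the intersection of the optical axis with the road plane that the claim denotes zq.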
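Claim 9's joint training — three submodels each produce a loss, the losses are combined, and the combined result is propagated back to all three — can be sketched with a toy NumPy example. Three logistic-regression "heads" over shared features stand in for the patent's garbage, smoke, and flame deep networks; the shapes, synthetic labels, and learning rate are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8))             # shared features of one image batch

# One binary label set per submodel: garbage, smoke, flame (synthetic here).
labels = [(X[:, k] > 0).astype(float) for k in range(3)]
W = rng.normal(scale=0.1, size=(3, 8))   # one weight vector per head

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce(p, y):
    """Binary cross-entropy, the per-head loss."""
    p = np.clip(p, 1e-9, 1.0 - 1e-9)
    return -np.mean(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

def joint_loss(W):
    """The combined loss of the three jointly trained heads."""
    return sum(bce(sigmoid(X @ W[k]), labels[k]) for k in range(3))

losses = [joint_loss(W)]
for _ in range(200):
    # Back-propagate the joint loss into every head: because the joint
    # loss is a sum, each head receives the gradient of its own term.
    for k in range(3):
        p = sigmoid(X @ W[k])
        W[k] -= 0.5 * (X.T @ (p - labels[k])) / len(X)
    losses.append(joint_loss(W))
```

In a deep-learning framework the same effect is obtained by summing the three submodel losses into a single scalar and calling backward on it once, which propagates gradients to all three submodels.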
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711298090.7A CN108182706B (en) | 2017-12-08 | 2017-12-08 | Method and system for monitoring incinerated substances |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711298090.7A CN108182706B (en) | 2017-12-08 | 2017-12-08 | Method and system for monitoring incinerated substances |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108182706A true CN108182706A (en) | 2018-06-19 |
CN108182706B CN108182706B (en) | 2021-09-28 |
Family
ID=62545755
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711298090.7A Active CN108182706B (en) | 2017-12-08 | 2017-12-08 | Method and system for monitoring incinerated substances |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108182706B (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109636851A (en) * | 2018-11-13 | 2019-04-16 | Institute of Computing Technology, Chinese Academy of Sciences | Binocular-vision-based targeted delivery and localization method for hazardous chemical accident treatment agents |
CN109887025A (en) * | 2019-01-31 | 2019-06-14 | Shenyang Ligong University | Monocular self-adjusting three-dimensional fire point positioning method and device |
CN110148173A (en) * | 2019-05-21 | 2019-08-20 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Target positioning method and apparatus, electronic device, and computer-readable medium |
CN110276405A (en) * | 2019-06-26 | 2019-09-24 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Method and apparatus for outputting information |
CN111781113A (en) * | 2020-07-08 | 2020-10-16 | Hunan Jiujiu Intelligent Environmental Protection Co., Ltd. | Dust grid positioning method and dust grid monitoring method |
CN113041578A (en) * | 2021-02-24 | 2021-06-29 | Nanjing Normal University | Robot automatic ball picking method based on morphological features and monocular measurement |
CN113159009A (en) * | 2021-06-25 | 2021-07-23 | East China Jiaotong University | Intelligent monitoring and identification method and system for preventing ticket evasion at stations |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103971479A (en) * | 2013-01-29 | 2014-08-06 | Beijing Forestry University | Forest fire positioning method based on camera calibration technology |
CN104408469A (en) * | 2014-11-28 | 2015-03-11 | Wuhan University | Smoke and fire identification method and system based on deep learning of images |
CN105261029A (en) * | 2015-11-20 | 2016-01-20 | China Academy of Safety Science and Technology | Method and robot for fire source localization and extinguishing based on binocular vision |
CN105447500A (en) * | 2015-10-30 | 2016-03-30 | Zhang Gong | Method and apparatus for automatically identifying straw burning fire points |
CN105678237A (en) * | 2015-12-31 | 2016-06-15 | Zhang Gong | Fire point determination method and system |
CN105791767A (en) * | 2016-03-11 | 2016-07-20 | Liuzhou Haoshun Technology Co., Ltd. | Straw burning monitoring system having self-regulating function |
CN105869340A (en) * | 2016-03-28 | 2016-08-17 | North Minzu University | Abnormal fire monitoring system and method based on unmanned aerial vehicle |
CN106289531A (en) * | 2016-07-29 | 2017-01-04 | State Grid Corporation of China | Mountain fire localization method for high-voltage power transmission corridors based on pan-tilt attitude angles |
CN106339657A (en) * | 2015-07-09 | 2017-01-18 | 张�杰 | Straw incineration monitoring method and device based on surveillance video |
CN206301209U (en) * | 2016-02-29 | 2017-07-04 | North Minzu University | Crop straw burning monitoring device based on unmanned aerial vehicle |
CN107316012A (en) * | 2017-06-14 | 2017-11-03 | South China University of Technology | Fire detection and tracking for a small unmanned helicopter |
2017
- 2017-12-08 CN CN201711298090.7A patent/CN108182706B/en active Active
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103971479A (en) * | 2013-01-29 | 2014-08-06 | Beijing Forestry University | Forest fire positioning method based on camera calibration technology |
CN104408469A (en) * | 2014-11-28 | 2015-03-11 | Wuhan University | Smoke and fire identification method and system based on deep learning of images |
CN106339657A (en) * | 2015-07-09 | 2017-01-18 | 张�杰 | Straw incineration monitoring method and device based on surveillance video |
CN105447500A (en) * | 2015-10-30 | 2016-03-30 | Zhang Gong | Method and apparatus for automatically identifying straw burning fire points |
CN105261029A (en) * | 2015-11-20 | 2016-01-20 | China Academy of Safety Science and Technology | Method and robot for fire source localization and extinguishing based on binocular vision |
CN105678237A (en) * | 2015-12-31 | 2016-06-15 | Zhang Gong | Fire point determination method and system |
CN206301209U (en) * | 2016-02-29 | 2017-07-04 | North Minzu University | Crop straw burning monitoring device based on unmanned aerial vehicle |
CN105791767A (en) * | 2016-03-11 | 2016-07-20 | Liuzhou Haoshun Technology Co., Ltd. | Straw burning monitoring system having self-regulating function |
CN105869340A (en) * | 2016-03-28 | 2016-08-17 | North Minzu University | Abnormal fire monitoring system and method based on unmanned aerial vehicle |
CN106289531A (en) * | 2016-07-29 | 2017-01-04 | State Grid Corporation of China | Mountain fire localization method for high-voltage power transmission corridors based on pan-tilt attitude angles |
CN107316012A (en) * | 2017-06-14 | 2017-11-03 | South China University of Technology | Fire detection and tracking for a small unmanned helicopter |
Non-Patent Citations (1)
Title |
---|
JIA Yanming et al., "Research on image recognition algorithms for straw-burning air-pollution regions in Henan," Bulletin of Science and Technology *
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109636851A (en) * | 2018-11-13 | 2019-04-16 | Institute of Computing Technology, Chinese Academy of Sciences | Binocular-vision-based targeted delivery and localization method for hazardous chemical accident treatment agents |
CN109887025A (en) * | 2019-01-31 | 2019-06-14 | Shenyang Ligong University | Monocular self-adjusting three-dimensional fire point positioning method and device |
CN110148173A (en) * | 2019-05-21 | 2019-08-20 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Target positioning method and apparatus, electronic device, and computer-readable medium |
CN110276405A (en) * | 2019-06-26 | 2019-09-24 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Method and apparatus for outputting information |
CN111781113A (en) * | 2020-07-08 | 2020-10-16 | Hunan Jiujiu Intelligent Environmental Protection Co., Ltd. | Dust grid positioning method and dust grid monitoring method |
CN113041578A (en) * | 2021-02-24 | 2021-06-29 | Nanjing Normal University | Robot automatic ball picking method based on morphological features and monocular measurement |
CN113041578B (en) * | 2021-02-24 | 2022-02-11 | Nanjing Normal University | Robot automatic ball picking method based on morphological features and monocular measurement |
CN113159009A (en) * | 2021-06-25 | 2021-07-23 | East China Jiaotong University | Intelligent monitoring and identification method and system for preventing ticket evasion at stations |
Also Published As
Publication number | Publication date |
---|---|
CN108182706B (en) | 2021-09-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108182706A (en) | Method and system for monitoring incinerated matter | |
CN111723654B (en) | Method and device for detecting objects thrown from height, based on background modeling, YOLOv3 and self-optimization | |
CN103116746B (en) | Video flame detection method based on multi-feature fusion | |
CN106897698B (en) | Classroom people number detection method and system based on machine vision and binocular collaborative technology | |
CN110378931A (en) | Pedestrian target motion trajectory acquisition method and system based on multiple cameras | |
CN106845408A (en) | Street refuse recognition method in complex environments | |
CN110135282B (en) | Method for detecting examinees turning around to cheat, based on a deep convolutional neural network model | |
CN105046710A (en) | Depth image partitioning and agent geometry based virtual and real collision interaction method and apparatus | |
CN107408211A (en) | Method for object re-identification | |
CN104751146B (en) | Indoor human body detection method based on 3D point cloud images | |
CN104915642B (en) | Method and device for measuring distance to a preceding vehicle | |
CN110060284A (en) | Binocular vision environment detection system and method based on tactile perception | |
CN106846375A (en) | Flame detection method for an autonomous firefighting robot | |
CN105022999A (en) | Man code company real-time acquisition system | |
WO2022062238A1 (en) | Football detection method and apparatus, and computer-readable storage medium and robot | |
CN113408584A (en) | RGB-D multi-modal feature fusion 3D target detection method | |
CN108090485A (en) | Image foreground extraction method based on multi-view fusion | |
CN102289822A (en) | Method for tracking moving target collaboratively by multiple cameras | |
CN114972646B (en) | Method and system for extracting and modifying independent ground objects of live-action three-dimensional model | |
CN116052222A (en) | Cattle face recognition method for naturally collecting cattle face image | |
CN109472228A (en) | Yawn detection method based on deep learning | |
CN105631825B (en) | Image defogging method based on rolling guidance | |
Posner et al. | Describing composite urban workspaces | |
CN107220972A (en) | Poultry egg quality discrimination method based on infrared images | |
TWI628624B (en) | Improved thermal image feature extraction method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||