CN110414320B - Method and system for safety production supervision - Google Patents

Method and system for safety production supervision

Info

Publication number
CN110414320B
CN110414320B (application CN201910511350.7A)
Authority
CN
China
Prior art keywords
image
label
recognition model
monitored object
feature extraction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910511350.7A
Other languages
Chinese (zh)
Other versions
CN110414320A (en)
Inventor
周斯加
罗智颖
关超华
陈志军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Laser and Optoelectronics Intelligent Manufacturing of Wenzhou University
Original Assignee
Institute of Laser and Optoelectronics Intelligent Manufacturing of Wenzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Laser and Optoelectronics Intelligent Manufacturing of Wenzhou University filed Critical Institute of Laser and Optoelectronics Intelligent Manufacturing of Wenzhou University
Priority to CN201910511350.7A
Publication of CN110414320A
Application granted
Publication of CN110414320B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/10 Services
    • G06Q 50/26 Government or public services
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 25/00 Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems
    • G08B 25/01 Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems characterised by the transmission medium
    • G08B 25/08 Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems characterised by the transmission medium using communication transmission lines
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 7/00 Signalling systems according to more than one of groups G08B3/00 - G08B6/00; Personal calling systems according to more than one of groups G08B3/00 - G08B6/00
    • G08B 7/06 Signalling systems according to more than one of groups G08B3/00 - G08B6/00; Personal calling systems according to more than one of groups G08B3/00 - G08B6/00 using electric transmission, e.g. involving audible and visible signalling through the use of sound and light sources

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Tourism & Hospitality (AREA)
  • Emergency Management (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Development Economics (AREA)
  • Educational Administration (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method for supervising safety production. The method comprises: acquiring a historical video of a monitored object and extracting a plurality of images from the acquired video at a preset time interval; marking the extracted images with corresponding labels, and performing image processing and feature extraction on each labeled image; constructing an image recognition model based on convolutional-neural-network regression from the processed, labeled images, and training it with the error back-propagation algorithm until convergence to obtain a trained image recognition model; and acquiring the current image to be detected of the monitored object and feeding it into the trained image recognition model to determine whether the monitored object presents a potential safety hazard. Because the recognition model is built on a convolutional neural network, implementing the method improves recognition reliability while saving time and labor.

Description

Method and system for safety production supervision
Technical Field
The invention relates to the technical field of safety production detection, and in particular to a method and system for safety production supervision.
Background
At present, government safety supervision departments need to carry out unified, intelligent supervision of all dangerous chemical processes, key supervised dangerous chemicals and major hazard sources within their area of responsibility, give real-time early warnings of potential safety hazards, and analyze and make decisions about the various factors of safety production on the basis of big data. The key monitored objects of a government safety supervision department mainly comprise the oil-unloading areas of gas stations, the dangerous processes of chemical enterprises, hazardous chemical warehouses, and other dangerous work sites with potential safety hazards. The safety supervision department requires the real-time high-definition video of each supervised site to be imported into the brain platform of a smart city, where professional safety production supervisors monitor the various work activities in real time so as to discover potential safety hazards in advance and ensure that all supervised operators and work processes meet the prescribed operating standards and procedures.
However, existing safety production supervision methods can identify potential safety hazards only by manually searching and comparing images of each monitored object, which is not only time-consuming and labor-intensive but also prone to comparison errors.
Disclosure of Invention
The technical problem to be solved by the embodiment of the invention is to provide a method and a system for safety production supervision, wherein an identification model is constructed based on a convolutional neural network, so that the identification reliability is improved, and the time and labor are saved.
In order to solve the above technical problem, an embodiment of the present invention provides a method for monitoring and managing safety production, where the method includes the following steps:
acquiring a historical video of a monitored object, and extracting a plurality of images from the acquired historical video according to a preset time interval;
marking the extracted images with corresponding labels, and performing image processing and feature extraction on each labeled image; wherein the label takes the value 1 or 0: an image labeled 1 indicates a potential safety hazard, and an image labeled 0 indicates normal;
constructing an image recognition model based on convolutional neural network regression according to the image of each existing label after image processing and feature extraction, and training the image recognition model based on convolutional neural network regression by adopting an error back propagation algorithm until convergence to obtain a trained image recognition model;
and acquiring the current image to be detected of the monitored object, importing the acquired current image to be detected of the monitored object into the obtained trained image recognition model for recognition, and determining whether the monitored object has potential safety hazards.
The monitored objects comprise the tank truck, oil discharge pipe, electrostatic discharge instrument and fire-fighting devices at a gas station; the vehicles, externally stacked goods, smoke and ignition points, and fire-fighting devices at chemical plant enterprises; and the standardized outer packaging and fire-fighting devices of the various hazardous chemicals in hazardous chemical warehouses.
The specific steps of marking the extracted multiple images with corresponding labels and performing image processing and feature extraction on the image with each label include:
if the matching degree between an image and a preset potential-safety-hazard image is greater than a preset threshold, its label is marked as 1; otherwise, its label is marked as 0;
after all the image labels are marked, performing true color enhancement processing on each labeled image, wherein the true color enhancement keeps the image color unchanged and enhances the image brightness;
based on a preset color image segmentation method, performing image segmentation on each original image subjected to true color enhancement processing, to segment out each labeled target area image;
and extracting the approximate entropy and the sample entropy of each segmented target area image to be used as main features, and further extracting the color histogram, the color moment, the energy, the contrast, the texture entropy, the texture correlation and the image local binarization features of each segmented target area image to be used as supplementary features by using a color statistical feature extraction method, a gray level co-occurrence matrix method and a local binarization method.
The specific steps of performing true color enhancement processing on the image of each existing label include:
r, G, B components in the image of each existing label are correspondingly converted into H, I, S components to be represented;
enhancing the converted I component in each image with the label by utilizing a gray scale linear transformation method;
and after the I component in each image subjected to gray scale linear transformation is enhanced, converting the H, I, S component of each image subjected to gray scale linear transformation into R, G, B components for representation, and obtaining each image subjected to true color enhancement processing.
The specific steps of constructing the image recognition model based on the convolutional neural network regression are as follows:
implementing the convolutional neural network with the deep learning framework TensorFlow and establishing the image recognition model: the processed, feature-extracted image of each existing label is taken as the model features, 1 and 0 are taken as the model labels, both are read into the image recognition model, and a batch function module, a data reading module, a convolutional neural network (CNN) structure module, a model evaluation index module, and a training and testing module are defined; the convolutional neural network has two alternating convolution-pooling layers and two fully connected layers.
The embodiment of the invention also provides a system for supervising safety production, which comprises:
the image acquisition unit is used for acquiring a historical video of a monitored object and extracting a plurality of images from the acquired historical video according to a preset time interval;
the image feature extraction unit is used for marking the extracted images with corresponding labels and performing image processing and feature extraction on each labeled image; wherein the label takes the value 1 or 0: an image labeled 1 indicates a potential safety hazard, and an image labeled 0 indicates normal;
the image recognition model construction unit is used for constructing an image recognition model based on convolutional neural network regression according to the image of each existing label after image processing and feature extraction, and training the image recognition model based on convolutional neural network regression by adopting an error back propagation algorithm until convergence to obtain a trained image recognition model;
and the image recognition unit is used for acquiring the current image to be detected of the monitored object, importing the acquired current image to be detected of the monitored object into the obtained trained image recognition model for recognition, and determining whether the monitored object has potential safety hazards.
The monitored objects comprise the tank truck, oil discharge pipe, electrostatic discharge instrument and fire-fighting devices at a gas station; the vehicles, externally stacked goods, smoke and ignition points, and fire-fighting devices at chemical plant enterprises; and the standardized outer packaging and fire-fighting devices of the various hazardous chemicals in hazardous chemical warehouses.
Wherein the image feature extraction unit includes:
the comparison module is used for marking an image's label as 1 if its matching degree with a preset potential-safety-hazard image is greater than a preset threshold, and as 0 otherwise;
the processing module is used for performing true color enhancement processing on each labeled image after all the image labels are marked, wherein the true color enhancement keeps the image color unchanged and enhances the image brightness;
the segmentation module is used for performing image segmentation, based on a preset color image segmentation method, on each original image subjected to true color enhancement processing, to segment out each labeled target area image;
and the extraction module is used for extracting the approximate entropy and the sample entropy of each segmented target area image and taking the approximate entropy and the sample entropy as main features, and further extracting the color histogram, the color moment, the energy, the contrast, the texture entropy, the texture correlation and the image local binarization features of each segmented target area image and taking the extracted features as supplementary features by using a color statistical feature extraction method, a gray level co-occurrence matrix method and a local binarization method.
The embodiment of the invention has the following beneficial effects:
according to the method, after the historical image of the monitored object is collected and subjected to label marking, image processing and target area segmentation, the characteristic data of the image of the target area is extracted, and the construction of an image recognition model is completed by using a convolutional neural network, so that the image with potential safety hazard is recognized, the method is rapid and convenient, high in accuracy, and capable of improving the reliability of recognition, and is time-saving and labor-saving.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is within the scope of the present invention for those skilled in the art to obtain other drawings based on the drawings without inventive exercise.
FIG. 1 is a flow chart of a method of safety production supervision provided by an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a system for safety production supervision according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings.
As shown in fig. 1, a method for safety production supervision provided in an embodiment of the present invention includes the following steps:
step S1, obtaining a history video of the monitored object, and extracting a plurality of images from the obtained history video according to a preset time interval;
the specific process comprises the steps of collecting historical video data of a place where a monitoring object is located in a large quantity, extracting a plurality of images from the historical video according to a preset time interval, and using the images as materials for machine learning of personnel characteristics, wearing characteristics, vehicle characteristics, fire-fighting equipment characteristics, dangerous goods characteristics, auxiliary equipment characteristics and the like. The video data or image data must include various scenes such as different viewing angles, distances, different weather (cloudy and rainy days, snow days), day and night, and the more the material, the better.
The monitored objects comprise the tank truck, oil discharge pipe, electrostatic discharge instrument and fire-fighting devices at a gas station; the vehicles, externally stacked goods, smoke and ignition points, and fire-fighting devices at chemical plant enterprises; the standardized outer packaging and fire-fighting devices of the various hazardous chemicals in hazardous chemical warehouses; and the like.
It should be noted that, since many irregular operations may cause a great safety accident, the monitored object may be further refined as follows:
(1) oil discharge operation of gas station
This comprises identification of tank-truck entry, tank-truck license-plate number, signboards, tank-truck parking position, worker clothing, fire-fighting equipment, electrostatic-discharge-instrument connection state, oil-pipe quantity, tank-truck departure, and the like;
(2) chemical enterprises
Vehicle identification at a designated position, smoke identification at the designated position, article stacking identification at an external designated position, fire-fighting equipment identification at the designated position, inspection personnel (clothing) identification at the designated position and the like;
(3) hazardous chemical warehouse
Fire equipment identification, worker apparel identification, signboard identification, and the like.
Step S2, marking the extracted images with corresponding labels, and performing image processing and feature extraction on each labeled image; wherein the label takes the value 1 or 0: an image labeled 1 indicates a potential safety hazard, and an image labeled 0 indicates normal;
the specific process is that each image can be labeled through manual work or automatic comparison of an image database, and the labeled image is used as a training category of a subsequent mechanical method to identify the attribution of the image to be detected, namely whether the image is an image with potential safety hazard. It should be noted that the manual label marking is determined by the experience of the expert and recorded and stored by the computer, and the automatic comparison of the image database is automatically determined by the image similarity and recorded and stored by the computer.
Therefore, if the matching degree between an image and a preset potential-safety-hazard image is greater than a preset threshold (such as 90%), its label is marked as 1; otherwise, its label is marked as 0;
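As a one-line sketch of this labeling rule (the function name is illustrative; the 90% default echoes the example threshold above):

```python
def mark_label(match_degree, threshold=0.9):
    """Label an extracted frame: 1 (potential safety hazard) when its matching
    degree against a preset hazard image exceeds the preset threshold
    (e.g. 90%), otherwise 0 (normal)."""
    return 1 if match_degree > threshold else 0
```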
after all image labels are marked, performing true color enhancement processing on the image of each existing label; wherein, the true color enhancement processing is that the image color is kept unchanged, and the image brightness is enhanced;
based on a preset color image segmentation method, performing image segmentation on each original image subjected to true color enhancement processing, to segment out each labeled target area image;
and extracting the approximate entropy and the sample entropy of each segmented target area image to be used as main features, and further extracting the color histogram, the color moment, the energy, the contrast, the texture entropy, the texture correlation and the image local binarization features of each segmented target area image to be used as supplementary features by using a color statistical feature extraction method, a gray level co-occurrence matrix method and a local binarization method.
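As an illustrative sketch of the main-feature step, approximate entropy can be computed as below; the window length m and tolerance r are hypothetical defaults, not values given in this description:

```python
import numpy as np

def approximate_entropy(x, m=2, r=0.2):
    """Approximate entropy ApEn(m, r) of a 1-D signal (e.g. a flattened
    target-area image region), using the Chebyshev distance between
    overlapping windows of length m."""
    x = np.asarray(x, dtype=float)
    n = len(x)

    def phi(m):
        # all overlapping windows of length m
        w = np.array([x[i:i + m] for i in range(n - m + 1)])
        # pairwise Chebyshev distances between windows
        d = np.max(np.abs(w[:, None, :] - w[None, :, :]), axis=2)
        # fraction of windows within tolerance r (self-match included,
        # so the argument of log is always positive)
        c = np.mean(d <= r, axis=1)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)
```

A perfectly regular signal yields an approximate entropy near zero, while an irregular one yields a larger value, which is what makes it usable as a discriminative feature here.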
It should be noted that the specific steps of the true color enhancement processing are: converting the R, G, B components of each labeled image into the corresponding H, I, S representation; enhancing the I component of each converted labeled image by means of a gray-scale linear transformation; and then converting the H, I, S components of each transformed image back into R, G, B components, obtaining each image after true color enhancement processing.
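The three conversion steps above can be sketched as follows. The sketch relies on the fact that scaling R, G and B by a common per-pixel factor changes only the HSI intensity I while leaving H and S untouched, so an explicit HSI round-trip is not needed; the coefficients a and b of the linear transform are hypothetical:

```python
import numpy as np

def enhance_intensity(rgb, a=1.3, b=10.0):
    """True color enhancement: apply the gray-scale linear transform
    I' = a*I + b to the HSI intensity channel I = (R+G+B)/3 only.
    Hue and saturation are preserved by rescaling each pixel's R, G, B
    by the same factor I'/I."""
    rgb = np.asarray(rgb, dtype=float)
    i = rgb.mean(axis=-1, keepdims=True)            # I = (R+G+B)/3
    i_new = np.clip(a * i + b, 0.0, 255.0)          # linear brightness boost
    scale = np.where(i > 0, i_new / np.maximum(i, 1e-9), 0.0)
    return np.clip(rgb * scale, 0.0, 255.0)
```

For a pixel (30, 60, 90) the intensity 60 becomes 1.3*60 + 10 = 88, while the color ratios between channels stay fixed.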
It should be noted that the preset color image segmentation method adopts an image segmentation algorithm based on a global threshold. For an image whose gray values lie in the range [gmin, gmax], the thresholding steps are as follows:
determine a gray threshold T for the target region, with gmin < T < gmax; after converting the original image to gray scale, classify the pixels in the image by comparing each pixel's gray value with the threshold, and retain the pixel points whose gray value reaches the threshold.
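A minimal sketch of this global-threshold step; the "greater than or equal to" retention rule is an assumption about the intended comparison:

```python
import numpy as np

def threshold_segment(gray, t):
    """Global-threshold segmentation: keep the pixels whose gray value
    reaches the threshold T (with gmin < T < gmax) and zero out the rest.
    Returns the segmented image and the boolean target-region mask."""
    gray = np.asarray(gray)
    mask = gray >= t
    return np.where(mask, gray, 0), mask
```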
Step S3, constructing an image recognition model based on convolutional neural network regression according to the image of each existing label after image processing and feature extraction, and training the image recognition model based on convolutional neural network regression by adopting an error back propagation algorithm until convergence to obtain a trained image recognition model;
the method comprises the specific processes that an image after image processing and feature extraction is used as input, the corresponding existing label is used as output, a deep learning framework TensorFlow is adopted to realize a convolutional neural network, an image recognition model is established, the image of each existing label after image processing and feature extraction is used as a model feature, 1 and 0 are used as model labels, the model labels are respectively read into the image recognition model, and a batch function module, a data reading module, a convolutional neural network CNN structure module, a model evaluation index module and a training and testing module are defined; the convolutional neural network has two convolutional pooling alternating layers and two fully connected layers.
In one example, the image recognition model uses convolution kernels of size 3×3 and pooling windows of size 2×2, with the nonlinear activation function ReLU applied after each convolution operation; from input to output, the channel count of the CNN model grows from 6 to 32, from 32 to 64, and from 64 to 1024, with a single channel finally output. The feature matrix output by pooling layer 2 is flattened into a one-dimensional vector and fed into fully-connected layer 1, and the Dropout method is applied at the output of fully-connected layer 1, whereby part of the neurons in the network model are randomly discarded with a given probability during the training stage.
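The Dropout behavior described above can be sketched as inverted dropout; the rate and seed values are illustrative, not taken from this description:

```python
import numpy as np

def dropout(x, rate, training, seed=0):
    """Inverted dropout, as applied at the output of fully-connected
    layer 1: during training each neuron is discarded (zeroed) with
    probability `rate` and the survivors are rescaled by 1/(1-rate) so
    the expected activation is unchanged; at test time it is the identity."""
    if not training:
        return x
    rng = np.random.default_rng(seed)
    keep = rng.random(x.shape) >= rate
    return np.where(keep, x / (1.0 - rate), 0.0)
```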
The specific structure of the convolutional neural network is as follows (each layer takes the output of the layer listed before it as its input):
1. Convolutional layer 1_1 (3x3x64)
2. Nonlinear response ReLU layer
3. Convolutional layer 1_2 (3x3x64)
4. Nonlinear response ReLU layer
5. Pooling layer (2x2/2)
6. Convolutional layer 2_1 (3x3x128)
7. Nonlinear response ReLU layer
8. Convolutional layer 2_2 (3x3x128)
9. Nonlinear response ReLU layer
10. Pooling layer (2x2/2)
11. Convolutional layer 3_1 (3x3x256)
12. Nonlinear response ReLU layer
13. Convolutional layer 3_2 (3x3x256)
14. Global average pooling layer
15. Fully connected layer (256x100)
16. Nonlinear response ReLU layer
17. Fully connected layer (100x2)
18. Deconvolution layer D1 (4x4x256)
19. Convolutional layer D1_1 (3x3x256)
20. Convolutional layer D1_2 (3x3x256)
21. Deconvolution layer D2 (4x4x128)
22. Convolutional layer D2_1 (3x3x128)
23. Convolutional layer D2_2 (3x3x128)
24. Convolutional layer D2_3 (3x3x2)
In the brackets after each convolutional and deconvolution layer, the product of the first two multipliers is the convolution kernel size and the last multiplier is the number of channels; in the brackets after each pooling layer, the product of the two multipliers is the pooling kernel size and the number after the slash is the stride; in the brackets after each fully connected layer, the last number is the number of output categories. Each nonlinear response layer consists of the nonlinear activation function ReLU.
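As a rough consistency check of the encoder half of this layer table (through the fully connected layers), the following sketch traces the feature-map shapes; it assumes 'same'-padded 3x3 convolutions (spatial size preserved) and an RGB input, neither of which is stated explicitly here:

```python
def trace_encoder_shapes(h, w):
    """Trace feature-map shapes through the encoder layers of the table,
    assuming 'same'-padded 3x3 convolutions and 2x2 stride-2 pooling
    (spatial size halved at each pooling layer)."""
    shapes = {"input": (h, w, 3)}          # RGB input assumed
    for block, ch in ((1, 64), (2, 128)):  # conv blocks 1 and 2
        shapes[f"conv{block}"] = (h, w, ch)
        h, w = h // 2, w // 2              # 2x2/2 pooling layer
        shapes[f"pool{block}"] = (h, w, ch)
    shapes["conv3"] = (h, w, 256)          # conv block 3 (3x3x256)
    shapes["gap"] = (256,)                 # global average pooling
    shapes["fc1"] = (100,)                 # fully connected (256x100)
    shapes["fc2"] = (2,)                   # fully connected (100x2) -> 2 classes
    return shapes
```

For a 64x64 input, for example, pooling layer 2 outputs a 16x16x128 feature map and the final fully connected layer outputs the two label categories.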
And S4, acquiring the current image to be detected of the monitored object, importing the acquired current image to be detected of the monitored object into the acquired trained image recognition model for recognition, and determining whether the monitored object has potential safety hazards.
The specific process is that a current image to be detected of a monitored object is obtained and is led into a trained image recognition model for recognition, and if the output category is 1, the potential safety hazard of the monitored object is determined; otherwise, if the output category is 0, determining that the monitored object has no potential safety hazard. It should be noted that the output category attribute is determined by the setting of the tag in step S2.
It can be understood that, once a potential safety hazard of the monitored object is determined, supervisory personnel can be prompted by issuing alarm information. The alarm information comprises audible-visual alarm information and/or text alarm information: the audible-visual alarm gives an on-site prompt directly through indicator lamps, buzzers and the like, while the text alarm can be issued to the relevant personnel by mail, SMS, WeChat, QQ and other channels.
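A hypothetical sketch of this alarm fan-out; the function and the channel names are illustrative, not part of the described system:

```python
def dispatch_alarm(category, channels=("indicator_lamp", "buzzer", "sms")):
    """Map the recognition model's output category to alarm actions:
    category 1 (potential safety hazard) fans the alert out to every
    configured audible-visual and text channel; category 0 does nothing."""
    if category != 1:
        return []
    return [f"alarm sent via {c}" for c in channels]
```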
As shown in fig. 2, in an embodiment of the present invention, a system for monitoring and managing safety production is provided, including:
an image acquisition unit 10, configured to acquire a history video of a monitored object, and extract a plurality of images from the acquired history video at preset time intervals;
an image feature extraction unit 20, configured to mark the extracted images with corresponding labels and perform image processing and feature extraction on each labeled image; wherein the label takes the value 1 or 0: an image labeled 1 indicates a potential safety hazard, and an image labeled 0 indicates normal;
the image recognition model construction unit 30 is configured to construct an image recognition model based on convolutional neural network regression according to the image of each existing label after image processing and feature extraction, and train the image recognition model based on convolutional neural network regression by using an error back propagation algorithm until convergence to obtain a trained image recognition model;
and the image recognition unit 40 is used for acquiring the current image to be detected of the monitored object, importing the acquired current image to be detected of the monitored object into the obtained trained image recognition model for recognition, and determining whether the monitored object has potential safety hazards.
The monitored objects comprise a tank wagon, an oil discharge pipe, an electrostatic discharge instrument and fire-fighting apparatus in a gas station, as well as vehicles, externally stacked goods, smoke and ignition points, fire-fighting apparatus, and the standardized outer packaging of various hazardous chemicals in hazardous chemical warehouses of chemical plant enterprises.
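The image acquisition unit's sampling of frames "at a preset time interval" reduces to choosing frame indices from the video's frame rate. The sketch below is an illustrative assumption (the patent does not specify the mechanism); a real implementation would seek to these indices with a video decoder such as OpenCV.

```python
# Illustrative sketch: compute which frames to extract from a historical
# video so that one image is taken every `interval_s` seconds.
def sample_frame_indices(total_frames: int, fps: float, interval_s: float):
    """Return frame indices spaced `interval_s` seconds apart."""
    step = max(1, round(fps * interval_s))  # never sample more often than every frame
    return list(range(0, total_frames, step))

# e.g. a 10 s clip at 25 fps sampled every 2 s yields frames 0, 50, 100, 150, 200
```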
Wherein the image feature extraction unit 20 includes:
the comparison module 201 is configured to mark the label of an image as 1 if the matching degree between the image and a preset image with a potential safety hazard is greater than a preset threshold, and to mark the label as 0 otherwise;
the processing module 202 is configured to perform true color enhancement processing on each labeled image after all image labels are marked, wherein the true color enhancement processing keeps the image color unchanged while enhancing the image brightness;
the segmentation module 203 is configured to perform image segmentation, based on a preset color image segmentation method, on each original image subjected to true color enhancement processing, to segment out each labeled target area image;
the extracting module 204 is configured to extract the approximate entropy and the sample entropy of each segmented target area image as main features, and to further extract, by using a color statistical feature extraction method, a gray level co-occurrence matrix method and a local binarization method, the color histogram, color moments, energy, contrast, texture entropy, texture correlation and local binarization features of each segmented target area image as supplementary features.
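The true color enhancement performed by the processing module can be sketched as below. This is a sketch under stated assumptions, not the patent's implementation: it uses the standard-library HLS color space, with the L (lightness) channel standing in for the patent's I (intensity) component, and applies a gray-scale linear transform (gain and offset) to that channel only, so hue and saturation are preserved while brightness increases.

```python
# Sketch of true-color enhancement: leave the RGB image's colour unchanged
# while linearly boosting its brightness. HLS lightness is used here as a
# stand-in for the H, I, S intensity component described in the claims.
import colorsys

def enhance_brightness(pixels, gain=1.2, offset=0.0):
    """pixels: list of (r, g, b) floats in [0, 1]; returns enhanced pixels."""
    out = []
    for r, g, b in pixels:
        h, l, s = colorsys.rgb_to_hls(r, g, b)
        # gray-scale linear transform applied to the intensity channel only
        l = min(1.0, max(0.0, gain * l + offset))
        out.append(colorsys.hls_to_rgb(h, l, s))
    return out
```

Because only the lightness channel is transformed, converting an enhanced pixel back to HLS recovers the original hue and saturation, which is the "image color is kept unchanged" property the claims describe.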
The embodiment of the invention has the following beneficial effects:
according to the method, a historical image of the monitored object is collected and subjected to label marking, image processing and target area segmentation; the feature data of each target area image are then extracted, and a convolutional neural network is used to construct the image recognition model, so that images with potential safety hazards are recognized automatically. The method is fast, convenient and highly accurate, improves the reliability of recognition, and saves time and labor.
It should be noted that, in the above system embodiment, the included units are divided according to functional logic only; the invention is not limited to this division as long as the corresponding functions can be implemented. In addition, the specific names of the functional units are merely for convenience of distinguishing them from each other, and are not intended to limit the protection scope of the present invention.
It will be understood by those skilled in the art that all or part of the steps in the method for implementing the above embodiments may be implemented by relevant hardware instructed by a program, and the program may be stored in a computer-readable storage medium, such as ROM/RAM, magnetic disk, optical disk, etc.
The above disclosure describes only preferred embodiments of the present invention and is not intended to limit the scope of the invention, which is defined by the appended claims.

Claims (2)

1. A method of safety production supervision, characterized in that the method comprises the steps of:
acquiring a historical video of a monitored object, and extracting a plurality of images from the acquired historical video according to a preset time interval;
marking the extracted images with corresponding labels, and performing image processing and feature extraction on each labeled image; wherein the labels are 1 and 0: an image labeled 1 indicates a potential safety hazard, and an image labeled 0 indicates normal;
constructing an image recognition model based on convolutional neural network regression according to the image of each existing label after image processing and feature extraction, and training the image recognition model based on convolutional neural network regression by adopting an error back propagation algorithm until convergence to obtain a trained image recognition model;
acquiring a current image to be detected of the monitored object, importing the acquired current image to be detected of the monitored object into the obtained trained image recognition model for recognition, and determining whether the monitored object has potential safety hazards or not;
the monitored objects comprise a tank wagon, an oil discharge pipe, an electrostatic discharge instrument and fire-fighting apparatus in a gas station, as well as vehicles, externally stacked goods, smoke and ignition points, fire-fighting apparatus, and the standardized outer packaging of various hazardous chemicals in hazardous chemical warehouses of chemical plant enterprises;
the specific steps of marking the extracted multiple images with corresponding labels and carrying out image processing and feature extraction on the image with each label comprise:
if the matching degree between an image and a preset image with a potential safety hazard is greater than a preset threshold, marking the label of the image as 1; otherwise, marking the label as 0;
after all image labels are marked, performing true color enhancement processing on the image of each existing label, wherein the true color enhancement processing keeps the image color unchanged while enhancing the image brightness;
performing, based on a preset color image segmentation method, image segmentation on each original image subjected to true color enhancement processing, to segment out each labeled target area image;
extracting the approximate entropy and the sample entropy of each segmented target area image as main features, and further extracting, by using a color statistical feature extraction method, a gray level co-occurrence matrix method and a local binarization method, the color histogram, color moments, energy, contrast, texture entropy, texture correlation and local binarization features of each segmented target area image as supplementary features;
the specific steps of performing true color enhancement processing on the image of each existing label comprise:
converting the R, G, B components of the image of each existing label into the corresponding H, I, S components;
enhancing the I component of each converted labeled image by using a gray scale linear transformation method;
after the I component of each image is enhanced by the gray scale linear transformation, inversely converting the H, I, S components of each image back into R, G, B components, thereby obtaining each image subjected to true color enhancement processing;
the specific steps of constructing the image recognition model based on convolutional neural network regression are as follows:
implementing the convolutional neural network with the deep learning framework TensorFlow and establishing the image recognition model, wherein each labeled image after image processing and feature extraction is used as a model feature and 1 and 0 are used as model labels, both of which are read into the image recognition model; and defining a batch function module, a data reading module, a convolutional neural network CNN structure module, a model evaluation index module, and a training and testing module; the convolutional neural network comprises two alternating convolution-pooling layers and two fully connected layers;
the image recognition model adopts convolution kernels of size 3x3 and pooling windows of size 2x2, and the nonlinear activation function ReLU is applied after each convolution operation; from input to output, the number of channels of the CNN model changes from 6 to 32, from 32 to 64, and from 64 to 1024, with a single channel at the final output; the feature matrix output by pooling layer 2 is elongated into a one-dimensional matrix and then input into fully connected layer 1; a Dropout method is used at the output of fully connected layer 1, randomly discarding part of the neurons in the network model with a given probability during the training stage.
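The network topology recited above can be sanity-checked with a short shape trace. The 64x64 spatial input size and 'same' convolution padding are assumptions for illustration (the claim states the 3x3 kernels, 2x2 pooling, the 6-to-32-to-64 channel progression and the 1024-unit fully connected layer, but not the input resolution or padding).

```python
# Shape-trace sketch of the claimed CNN: two conv+pool stages, flatten,
# FC 1024 (with Dropout), FC 1. Input size 64x64x6 and 'same' padding
# are assumptions; 3x3 'same' convolutions keep h and w unchanged.
def trace_shapes(h=64, w=64, c_in=6):
    shapes = [("input", (h, w, c_in))]
    for name, c_out in (("conv_pool_1", 32), ("conv_pool_2", 64)):
        h, w = h // 2, w // 2                 # 2x2 max pooling halves each dimension
        shapes.append((name, (h, w, c_out)))
    shapes.append(("flatten", (h * w * 64,)))  # pooling-layer-2 output elongated to 1-D
    shapes.append(("fc1_dropout", (1024,)))
    shapes.append(("fc2", (1,)))               # single output channel: the 1/0 category
    return shapes
```

With the assumed 64x64 input, the trace gives 32x32x32 after the first stage, 16x16x64 after the second, a 16384-element flattened vector, then 1024 and finally 1 output.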
2. A system for safety production supervision, comprising:
the image acquisition unit is used for acquiring a historical video of a monitored object and extracting a plurality of images from the acquired historical video according to a preset time interval;
the image feature extraction unit is used for marking the extracted images with corresponding labels and performing image processing and feature extraction on each labeled image; wherein the labels are 1 and 0: an image labeled 1 indicates a potential safety hazard, and an image labeled 0 indicates normal;
the image recognition model construction unit is used for constructing an image recognition model based on convolutional neural network regression according to the image of each existing label after image processing and feature extraction, and training the image recognition model based on convolutional neural network regression by adopting an error back propagation algorithm until convergence to obtain a trained image recognition model;
the image identification unit is used for acquiring a current image to be detected of the monitored object, guiding the acquired current image to be detected of the monitored object into the obtained trained image identification model for identification, and determining whether the monitored object has potential safety hazards or not;
the monitored objects comprise a tank wagon, an oil discharge pipe, an electrostatic discharge instrument and fire-fighting apparatus in a gas station, as well as vehicles, externally stacked goods, smoke and ignition points, fire-fighting apparatus, and the standardized outer packaging of various hazardous chemicals in hazardous chemical warehouses of chemical plant enterprises;
the image feature extraction unit includes:
the comparison module is used for marking the label of an image as 1 if the matching degree between the image and a preset image with a potential safety hazard is greater than a preset threshold, and for marking the label as 0 otherwise;
the processing module is used for performing true color enhancement processing on the image of each existing label after all image labels are marked, wherein the true color enhancement processing keeps the image color unchanged while enhancing the image brightness;
the segmentation module is used for performing image segmentation processing, based on a preset color image segmentation method, on each original image subjected to true color enhancement processing, to segment out each labeled target area image;
and the extraction module is used for extracting the approximate entropy and the sample entropy of each segmented target area image as main features, and for further extracting, by using a color statistical feature extraction method, a gray level co-occurrence matrix method and a local binarization method, the color histogram, color moments, energy, contrast, texture entropy, texture correlation and local binarization features of each segmented target area image as supplementary features.
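The gray level co-occurrence matrix (GLCM) features named in the claims can be sketched as below for a single offset (one pixel to the right). This is a minimal NumPy illustration of the standard GLCM definitions of energy, contrast, texture entropy and texture correlation, not the patent's implementation; a production pipeline would typically average several offsets and directions.

```python
# Sketch of GLCM texture features (energy, contrast, entropy, correlation)
# for the horizontal one-pixel offset, using NumPy only.
import numpy as np

def glcm_features(img, levels):
    """img: 2-D integer array with gray values in [0, levels)."""
    glcm = np.zeros((levels, levels), dtype=float)
    # count co-occurrences of each pixel with its right-hand neighbour
    for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        glcm[a, b] += 1
    p = glcm / glcm.sum()                       # normalise to probabilities
    i, j = np.indices(p.shape)
    energy = (p ** 2).sum()                     # angular second moment
    contrast = ((i - j) ** 2 * p).sum()
    nz = p[p > 0]
    entropy = -(nz * np.log2(nz)).sum()
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * p).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * p).sum())
    cov = ((i - mu_i) * (j - mu_j) * p).sum()
    correlation = cov / (sd_i * sd_j) if sd_i > 0 and sd_j > 0 else 1.0
    return {"energy": energy, "contrast": contrast,
            "entropy": entropy, "correlation": correlation}
```

As a quick check on the definitions: a uniform image has energy 1, contrast 0 and entropy 0, while a two-level checkerboard has maximal contrast and correlation -1 for this offset.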
CN201910511350.7A 2019-06-13 2019-06-13 Method and system for safety production supervision Active CN110414320B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910511350.7A CN110414320B (en) 2019-06-13 2019-06-13 Method and system for safety production supervision


Publications (2)

Publication Number Publication Date
CN110414320A CN110414320A (en) 2019-11-05
CN110414320B true CN110414320B (en) 2021-10-22

Family

ID=68359044

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910511350.7A Active CN110414320B (en) 2019-06-13 2019-06-13 Method and system for safety production supervision

Country Status (1)

Country Link
CN (1) CN110414320B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111091535A (en) * 2019-11-22 2020-05-01 三一重工股份有限公司 Factory management method and system based on deep learning image semantic segmentation
CN110910586B (en) * 2019-11-28 2022-03-04 中国银行股份有限公司 Anti-theft card swiping method and system
CN111274962A (en) * 2020-01-20 2020-06-12 广州燃气集团有限公司 Method and system for processing gas potential safety hazard data and storage medium
CN111401131A (en) * 2020-02-13 2020-07-10 深圳供电局有限公司 Image processing method and device for tunnel pipe gallery, computer equipment and storage medium
CN112183397A (en) * 2020-09-30 2021-01-05 四川弘和通讯有限公司 Method for identifying sitting protective fence behavior based on cavity convolutional neural network
CN112396017B (en) * 2020-11-27 2023-04-07 上海建科工程咨询有限公司 Engineering potential safety hazard identification method and system based on image identification
CN112801466B (en) * 2021-01-08 2024-03-29 温州大学激光与光电智能制造研究院 Method and system for early warning illegal operation of oil discharge operation of gas station
CN113537099B (en) * 2021-07-21 2022-11-29 招商局重庆交通科研设计院有限公司 Dynamic detection method for fire smoke in highway tunnel
CN114611400B (en) * 2022-03-18 2023-08-29 河北金锁安防工程股份有限公司 Early warning information screening method and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106250845A (en) * 2016-07-28 2016-12-21 北京智芯原动科技有限公司 Flame detecting method based on convolutional neural networks and device
CN106529605A (en) * 2016-11-28 2017-03-22 东华大学 Image identification method of convolutional neural network model based on immunity theory
US10140544B1 (en) * 2018-04-02 2018-11-27 12 Sigma Technologies Enhanced convolutional neural network for image segmentation
CN109858487A (en) * 2018-10-29 2019-06-07 温州大学 Weakly supervised semantic segmentation method based on watershed algorithm and image category label




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant