CN116311541A - Intelligent inspection method and system for identifying illegal behaviors of workers - Google Patents

Intelligent inspection method and system for identifying illegal behaviors of workers

Info

Publication number
CN116311541A
CN116311541A (application publication) · CN116311541B (granted publication) · Application CN202310573607.8A
Authority
CN
China
Prior art keywords
image
level
partition
model
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310573607.8A
Other languages
Chinese (zh)
Other versions
CN116311541B (en)
Inventor
侯立东
白劲松
侴华强
高福刚
邢恩奎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Titan Mianyang Energy Technology Co ltd
Original Assignee
Titan Tianjin Energy Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Titan Tianjin Energy Technology Co ltd filed Critical Titan Tianjin Energy Technology Co ltd
Priority to CN202310573607.8A priority Critical patent/CN116311541B/en
Publication of CN116311541A publication Critical patent/CN116311541A/en
Application granted granted Critical
Publication of CN116311541B publication Critical patent/CN116311541B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
                • G06N 3/00 Computing arrangements based on biological models
                    • G06N 3/02 Neural networks
                        • G06N 3/08 Learning methods
            • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
                • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
                    • G06V 40/20 Movements or behaviour, e.g. gesture recognition
                • G06V 10/00 Arrangements for image or video recognition or understanding
                    • G06V 10/20 Image preprocessing
                        • G06V 10/26 Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
                            • G06V 10/267 Segmentation by performing operations on regions, e.g. growing, shrinking or watersheds
                        • G06V 10/28 Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
                    • G06V 10/70 Recognition or understanding using pattern recognition or machine learning
                        • G06V 10/74 Image or video pattern matching; proximity measures in feature spaces
                        • G06V 10/762 Using clustering, e.g. of similar faces in social networks
                            • G06V 10/7625 Hierarchical techniques, i.e. dividing or merging patterns to obtain a tree-like representation; dendrograms
                        • G06V 10/764 Using classification, e.g. of video objects
                        • G06V 10/82 Using neural networks
                • G06V 20/00 Scenes; scene-specific elements
                    • G06V 20/50 Context or environment of the image
                        • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
                    • G06V 20/70 Labelling scene content, e.g. deriving syntactic or semantic representations
        • G07 CHECKING-DEVICES
            • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
                • G07C 1/00 Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people
                    • G07C 1/20 Checking timed patrols, e.g. of watchman
        • G08 SIGNALLING
            • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
                • G08B 31/00 Predictive alarm systems characterised by extrapolation or other computation using updated historic data
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
        • Y04 INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
            • Y04S SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
                • Y04S 10/00 Systems supporting electrical power generation, transmission or distribution
                    • Y04S 10/50 Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of data processing and provides an intelligent inspection method and system for identifying illegal behaviors of workers. The method comprises the following steps: partitioning the working area and constructing a violation level-image feature library according to violation level classification; building a semantic segmentation sub-model for distinguishing work objects from people and an image extraction sub-model for identifying human actions; determining by position comparison which partition the robot is in, acquiring images according to the violation level classification of that partition, synchronizing them to an image recognition module, and matching the recognition result against the feature library to obtain a violation identification result. This solves the technical problem that the actual behaviors of workers are difficult to identify, which limits the accuracy of violation judgment.

Description

Intelligent inspection method and system for identifying illegal behaviors of workers
Technical Field
The invention relates to the technical field of data processing, in particular to an intelligent inspection method and system for identifying illegal behaviors of workers.
Background
Illegal behaviors of coal mine workers can cause serious safety accidents and consequences. Examples include failing to operate according to the specified time, place, procedure and process, or changing the working mode of equipment without permission; releasing safety protection devices without authorization, connecting wires haphazardly, leaving the working post, not wearing safety protection equipment, or privately running cables, all in violation of safe operation rules; and tampering with or using equipment or tools without authorization, or violating the workflow or schedule. Such unauthorized operations often bring unpredictable harm and loss, greatly affect the production and economic development of coal mine enterprises, and seriously threaten the life and property safety of coal mine workers.
At present, mutual supervision is generally adopted to avoid illegal behaviors of coal mine workers: workers who go down the mine together act as both reporters and supervision objects, checking one another to jointly ensure their life and property safety. Meanwhile, violation judgment models are used to intelligently judge whether a violation has occurred; most such models are artificial intelligence models based on big data, which can to a certain extent simulate the experience and reasoning processes of human experts.
In summary, the prior art has the technical problems that the actual behaviors of workers are difficult to identify and the accuracy of violation judgment is limited.
Disclosure of Invention
The present application provides an intelligent inspection method and system for identifying illegal behaviors of workers, which solves the technical problem in the prior art that the actual behaviors of workers are difficult to identify, limiting the accuracy of violation judgment.
In view of the above problems, the embodiments of the present application provide an intelligent inspection method and system for identifying illegal behaviors of workers.
In a first aspect of the disclosure, an intelligent inspection method for identifying violation behaviors of workers is provided, where the method is applied to an inspection robot having an image acquisition device and a positioning device, and the method includes: obtaining a working target area, partitioning the working target area according to working content, equipment information and risk degree, and establishing target partitions; performing violation level classification based on the working content, equipment information and risk degree in the target partitions in combination with violation image samples, and constructing a violation level-image feature library; generating multi-level feature segmentation samples and multi-level feature identification samples according to the violation level-image feature library; constructing a semantic segmentation sub-model based on semantic segmentation logic and training it with the multi-level feature segmentation samples; setting a watershed threshold, constructing an image extraction sub-model based on the watershed algorithm, and training it with the multi-level feature identification samples; connecting the semantic segmentation sub-model and the image extraction sub-model through a connection layer to construct an image recognition model, and embedding the image recognition model into an image recognition module; and performing position recognition on the inspection robot based on the positioning device, comparing the position with the target partitions to determine which target partition the inspection robot is in, acquiring images with the image acquisition device according to the violation level classification corresponding to that target partition, synchronizing the acquired images to the image recognition module for image behavior recognition, and matching the image behavior recognition result against the violation level-image feature library to obtain a violation identification result.
In another aspect of the present disclosure, an intelligent inspection system for identifying violation behaviors of workers is provided, wherein the system comprises: a target partition establishing module for obtaining a working target area, partitioning it according to working content, equipment information and risk degree, and establishing target partitions; a violation classification module for performing violation level classification based on the working content, equipment information and risk degree in the target partitions in combination with violation image samples, and constructing a violation level-image feature library; a sample generation module for generating multi-level feature segmentation samples and multi-level feature identification samples according to the violation level-image feature library; a first model training module for constructing a semantic segmentation sub-model based on semantic segmentation logic and training it with the multi-level feature segmentation samples; a second model training module for setting a watershed threshold, constructing an image extraction sub-model based on the watershed algorithm, and training it with the multi-level feature identification samples; an image recognition model construction module for connecting the semantic segmentation sub-model and the image extraction sub-model through a connection layer to construct an image recognition model, and embedding the image recognition model into an image recognition module; and a violation identification result obtaining module for performing position recognition on the inspection robot based on the positioning device, comparing the position with the target partitions to determine which target partition the inspection robot is in, acquiring images with the image acquisition device according to the violation level classification corresponding to that partition, synchronizing the acquired images to the image recognition module for image behavior recognition, and matching the image behavior recognition result against the violation level-image feature library to obtain a violation identification result.
One or more technical solutions provided in the present application have at least the following technical effects or advantages:
Because the working target area is obtained and partitioned into target partitions; violation level classification is performed and a violation level-image feature library is constructed; multi-level feature segmentation samples and multi-level feature identification samples are generated; a semantic segmentation sub-model and an image extraction sub-model are constructed; and the position of the inspection robot is recognized and compared with the target partitions to determine its target partition, images are acquired according to the violation level classification of that partition, the acquired images are synchronized to the image recognition module for image behavior recognition, and the recognition result is matched against the violation level-image feature library to obtain the violation identification result. Semantic segmentation is used to separate persons from work objects, watershed contour segmentation is used to determine actions, and recognition is performed partition by partition for different working areas, with the dangerous actions of each area divided into different levels. This achieves the technical effect of accurately recognizing the real actions of workers and improving the accuracy of violation judgment.
The foregoing is only an overview of the technical solutions of the present application. In order to make the technical means of the present application more clearly understood and implementable according to the content of the specification, and to make the above and other objects, features and advantages of the present application more obvious and comprehensible, a detailed description of the present application is given below.
Drawings
Fig. 1 is a schematic flow chart of a possible intelligent inspection method for identifying illegal behaviors of workers according to an embodiment of the present application;
fig. 2 is a schematic diagram of a possible flow for constructing a semantic segmentation sub-model in an intelligent inspection method for identifying a violation behavior of a worker according to an embodiment of the present application;
fig. 3 is a schematic flow chart of a possible output image behavior recognition result in an intelligent inspection method for recognizing a worker's illegal behaviors according to an embodiment of the present application;
fig. 4 is a schematic diagram of a possible structure of an intelligent inspection system for identifying illegal behaviors of workers according to an embodiment of the present application.
Reference numerals illustrate: the system comprises a target partition establishing module 100, an offence grading module 200, a sample generating module 300, a first model training module 400, a second model training module 500, an image recognition model constructing module 600 and an offence recognition result obtaining module 700.
Detailed Description
The embodiments of the present application provide an intelligent inspection method and system for identifying illegal behaviors of workers, which solve the technical problems that the actual behaviors of workers are difficult to identify and the accuracy of violation judgment is limited. Semantic segmentation is used to separate persons from work objects, watershed contour segmentation is used to determine actions, and recognition is performed partition by partition for different working areas, with the dangerous actions of each area divided into different levels, so that the real actions of workers are accurately recognized and the accuracy of violation judgment is improved.
Having described the basic principles of the present application, various non-limiting embodiments of the present application will now be described in detail with reference to the accompanying drawings.
Example 1
As shown in fig. 1, an embodiment of the present application provides an intelligent inspection method for identifying a violation behavior of a worker, where the method is applied to an inspection robot, the inspection robot has an image acquisition device and a positioning device, and the method includes:
s10: a working target area is obtained, the working target area is partitioned according to working content, equipment information and risk degree, and a target partition is established;
step S10 includes the steps of:
s11: the method comprises the steps of obtaining a working target area, wherein the working target area comprises target area working content, target area operation equipment, equipment operation risk information and accident cases;
s12: according to the working content of the target area, carrying out content time sequence repeatability analysis, setting a repeatability threshold value, and carrying out partition to obtain a first partition;
s13: based on the first partition, performing operation equipment relevance analysis according to the target area operation equipment, setting a relevance threshold value, and re-partitioning the first partition to obtain a second partition;
S14: based on the second partition, carrying out risk coefficient numerical quantification according to a set rule according to the equipment operation risk information and the accident case to obtain a second partition risk coefficient;
s15: and partitioning the second partition according to the difference value according to the risk coefficient of the second partition, and re-partitioning the second partition according to the preset difference value requirement and the partition range requirement to finish the target partition.
Specifically, the inspection robot is configured with an image acquisition device and a positioning device, both of which are in communication connection with the inspection robot. Communication connection here simply means interaction through signal transmission: a communication network is formed between the image acquisition device and the inspection robot, and between the positioning device and the inspection robot, providing hardware support for subsequent analysis;
obtaining the working target area, partitioning it according to working content, equipment information and risk degree, and establishing target partitions comprises the following: the working target area includes target-area working content, target-area operation equipment, equipment operation risk information and accident cases. The target-area working content generally corresponds to the target-area operation equipment, which cooperates to complete the working content; the equipment operation risk information is the notice information in the equipment specifications; and the accident cases are experience data stored in a data storage unit of the intelligent inspection system for identifying illegal behaviors of workers;
In the working target area, the target-area working content is taken as the first partition constraint. If the working content executed by two pieces of target-area operation equipment is completely consistent, the content timing repeatability is set to 100% (equipment A can replace equipment B); if the working content is completely inconsistent, the content timing repeatability is set to 0% (equipment A cannot replace equipment B). Content timing repeatability analysis is performed in turn on all target-area operation equipment in the working target area to obtain their content timing repeatability values, a repeatability threshold is set (generally 80%), and the first partitions are obtained by comparing the repeatability of all equipment against this threshold, with at least two first partitions in each working target area;
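The first-partition step above can be sketched in code: equipment whose pairwise content timing repeatability meets the threshold is grouped into the same partition. This is an illustrative assumption about how the comparison is applied; the device names, the score table, and the union-find grouping are not specified by the patent.

```python
# Hypothetical sketch of step S12: grouping target-area equipment into first
# partitions by content timing repeatability. The device names, the pairwise
# repeatability scores and the 80% threshold are illustrative assumptions.

def build_first_partitions(devices, repeatability, threshold=0.80):
    """Union devices whose pairwise content timing repeatability meets the
    threshold; each resulting group is one first partition."""
    parent = {d: d for d in devices}

    def find(d):
        while parent[d] != d:
            parent[d] = parent[parent[d]]  # path compression
            d = parent[d]
        return d

    for (a, b), score in repeatability.items():
        if score >= threshold:          # equipment A can replace equipment B
            parent[find(a)] = find(b)

    groups = {}
    for d in devices:
        groups.setdefault(find(d), []).append(d)
    return sorted(sorted(g) for g in groups.values())

devices = ["conveyor", "cutter", "hoist", "pump"]
repeatability = {("conveyor", "cutter"): 0.95,   # interchangeable work content
                 ("hoist", "pump"): 0.10,        # unrelated work content
                 ("conveyor", "hoist"): 0.05}
print(build_first_partitions(devices, repeatability))
# [['conveyor', 'cutter'], ['hoist'], ['pump']]
```

Note that at least two partitions result whenever some pair of devices falls below the threshold, consistent with the requirement of at least two first partitions per working target area.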
In the first partitions, operation equipment relevance is taken as the second partition constraint. By analogy with the partitioning steps of the first partitions, relevance analysis is performed on the target-area operation equipment (relevance analysis itself is prior art and is not expanded here), a relevance threshold is set (generally 90%), and the first partitions are re-partitioned to obtain the second partitions, with at least two second partitions in each first partition;
In the second partitions, the experience data corresponding to the equipment operation risk information and the accident cases are taken as a knowledge base, and a risk coefficient quantification expert system is generated according to set rules. The equipment operation risk information and accident cases of the working target area are input into this expert system for risk coefficient quantification, and the risk coefficient of each second partition is output;
and in the second partitions, partitioning is performed according to the plurality of second-partition risk coefficients. The preset gap value requirement is [0, 100], that is, the risk coefficients lie in [0, 100], and the target partitions are completed according to the partition range requirements [0, 10], (10, 20], (20, 30], (30, 40], (40, 50], (50, 60], (60, 70], (70, 80], (80, 90], (90, 100]: the second partitions are re-partitioned by these risk coefficient ranges, with at least two target partitions in each second partition. This yields a scientific and reasonable partition and provides a basis for further refinement.
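The range-based re-partitioning can be sketched as a simple binning function: each risk coefficient in [0, 100] falls into exactly one of the ten ranges listed above. The function name and the sample coefficients are illustrative assumptions; only the ranges themselves come from the description.

```python
import math

# Hypothetical sketch of step S15: mapping a second-partition risk coefficient
# to its target-partition range [0, 10], (10, 20], ..., (90, 100].

def target_partition(risk):
    """Return the (lower, upper] range label for a risk coefficient."""
    if not 0 <= risk <= 100:
        raise ValueError("risk coefficient must lie in [0, 100]")
    upper = max(10, math.ceil(risk / 10) * 10)   # 0 still falls into [0, 10]
    return (upper - 10, upper)

print(target_partition(7))     # (0, 10)
print(target_partition(10))    # (0, 10)   boundary belongs to the lower range
print(target_partition(10.5))  # (10, 20)
print(target_partition(95))    # (90, 100)
```

Using half-open ranges (lower, upper] matches the interval notation in the text, so every coefficient is assigned to exactly one target partition.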
S20: based on the working content, equipment information and the risk degree in the target partition, carrying out illegal action grading by combining an illegal image sample, and constructing an illegal grade-image feature library;
Step S20 includes the steps of:
s21: carrying out accident case extraction based on the working content, the equipment information and the risk degree;
s22: carrying out accident level clustering on the accident cases to obtain accident level classification results, wherein the accident level classification results are different level classification results aiming at the same accident;
s23: and extracting corresponding image samples based on the accident level classification result, extracting accident characteristics of the image samples, and carrying out characteristic labeling on the extracted characteristics based on preset illegal behaviors, wherein the characteristic labeling comprises operation behavior characteristic labeling and operation object characteristic labeling.
Specifically, violation level classification is performed based on the working content, equipment information and risk degree in the target partition, in combination with violation image samples. The working content, equipment information and risk degree can serve as the layering features of hierarchical clustering, that is, they are simply selected as central reference points. Accident cases are extracted according to the working content, equipment information and risk degree, and bottom-up agglomerative hierarchical clustering analysis is performed on them; the accident level clustering continues until the assignment of the corresponding image samples to clusters no longer changes, giving the accident level classification result, where the accident level classification result contains different level classification results for the same accident;
corresponding image samples are extracted based on the accident level classification result, accident feature extraction is performed on the image samples against the accident level classification result, and the extracted features are labeled based on the preset violation behaviors, the feature labels including operation behavior feature labels and operation object feature labels. After labeling is finished, the association mapping between the feature labels, the accident level classification results and the accident cases is determined, and the violation level-image feature library is generated from multiple groups of such association mappings, providing support for automatic classification of accident levels.
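The bottom-up clustering of accident cases described above can be sketched with a minimal single-linkage agglomerative procedure. The feature vectors, the Euclidean distance measure, and the stopping condition (a fixed number of accident levels) are illustrative assumptions; the patent only specifies bottom-up agglomerative clustering on quantified accident-case features.

```python
# Hypothetical sketch of step S22: bottom-up (agglomerative) clustering of
# accident cases into accident levels. Each case is a quantified feature
# vector, e.g. (work-content score, risk-degree score); values are invented.

def agglomerate(cases, n_levels):
    """Single-linkage agglomerative clustering down to n_levels clusters."""
    clusters = [[c] for c in cases]

    def dist(a, b):  # minimum pairwise Euclidean distance between clusters
        return min(sum((x - y) ** 2 for x, y in zip(p, q)) ** 0.5
                   for p in a for q in b)

    while len(clusters) > n_levels:
        # merge the closest pair of clusters, then repeat
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: dist(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] += clusters.pop(j)
    return clusters

cases = [(1.0, 1.1), (1.2, 0.9), (5.0, 5.2), (5.1, 4.8), (9.0, 9.1)]
print(agglomerate(cases, 3))   # three clusters: low-, mid- and high-risk cases
```

Stopping when cluster membership stabilizes, as the text describes, corresponds to running the merge loop until no merge changes the assignment of samples; fixing the cluster count is just the simplest concrete stand-in for that condition.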
S30: generating a multi-level feature segmentation sample and a multi-level feature identification sample according to the violation level-image feature library;
s40: based on semantic segmentation logic, constructing a semantic segmentation sub-model, and training the semantic segmentation sub-model through the multi-level feature segmentation sample;
as shown in fig. 2, step S40 includes the steps of:
s41: constructing an encoder and a decoder based on a full convolution neural network structure to obtain the semantic segmentation sub-model;
S42: dividing the multi-stage feature division sample to determine a multi-stage sample image division result;
s43: and training the encoder and the decoder by adopting the multi-stage characteristic segmentation samples and the multi-stage sample image segmentation results until the model converges to the requirement.
Specifically, multiple groups of association mappings between feature labels and historical accident cases in the violation level-image feature library are used as the multi-level feature segmentation samples, and the feature labels and historical accident level classification results in the library are used as the multi-level feature identification samples, so that the two sample sets are generated respectively;
a semantic segmentation sub-model is constructed based on semantic segmentation logic and trained with the multi-level feature segmentation samples. Using the multi-level feature segmentation samples and the multi-level sample image segmentation results as construction data, a fully convolutional neural network (a CNN, Convolutional Neural Network, in which the final layers are convolutional) structure for semantic segmentation is built. It comprises a down-sampling process and an up-sampling process executed the same number of times, with a final pixel classification output layer mapping each pixel to a specific class, forming an encoder-decoder architecture. Based on the multi-level feature segmentation samples, a plurality of encoders and decoders corresponding to those samples are constructed, and the multi-level sample image segmentation results, which correspond to the plurality of encoders and decoders, are determined;
Taking the multi-level feature segmentation samples and the multi-level sample image segmentation results as construction data, the construction data are divided into training data, verification data and test data in a 9:0.5:0.5 ratio (i.e., 90%/5%/5%). The encoder and decoder are trained in a supervised manner on the training data; after training, they are verified and tested with the verification data and the test data, and once the accuracy meets the preset condition, the trained encoder and decoder serve as the image segmentation unit for the corresponding level in the multi-level sample image segmentation results;
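For illustration only (the function name and shuffling seed are assumptions, not part of the claimed embodiment), the 9:0.5:0.5 split described above corresponds to a 90%/5%/5% partition of the construction data:

```python
import random

def split_construction_data(samples, ratios=(9, 0.5, 0.5), seed=42):
    """Split samples into training/verification/test sets by ratio
    (9:0.5:0.5 = 90% / 5% / 5%, as in the description above)."""
    total = sum(ratios)
    shuffled = samples[:]  # copy so the caller's list is untouched
    random.Random(seed).shuffle(shuffled)
    n = len(shuffled)
    n_train = round(n * ratios[0] / total)
    n_val = round(n * ratios[1] / total)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

train, val, test = split_construction_data(list(range(1000)))
print(len(train), len(val), len(test))  # 900 50 50
```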
Preferably, the fully convolutional network recovers the resolution of the output feature map through an upsampling operation. Compared with traditional interpolation, learned upsampling (e.g., transposed convolution) can capture more complex feature representations, forms an end-to-end encoder-decoder architecture, and improves segmentation precision. In the embodiment of the application, the fully convolutional network is used to extract contours and, according to the contours, distinguish objects such as equipment and workers;
Training, verification and test data for semantic segmentation are set, and supervised training, verification and testing are performed with the plurality of encoders and decoders in the multi-level sample image segmentation results until the accuracy obtained in testing meets the model convergence requirement (for example, an accuracy of no less than 95%). The encoders and decoders are thereby constructed, and the semantic segmentation sub-model is generated from the constructed encoder-decoder architectures. Persons and work objects are segmented by the semantic segmentation technique, providing model support for distinguishing and identifying work objects and people.
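A minimal sketch of the symmetric sampling structure described above, using 2x2 max pooling for the "encoder" steps and nearest-neighbour upsampling for the "decoder" steps (pure-Python stand-ins for the learned convolutional layers; the real model would use trained convolutions and transposed convolutions):

```python
def downsample2x(img):
    """2x2 max pooling: one 'encoder' down-sampling step."""
    h, w = len(img), len(img[0])
    return [[max(img[2*i][2*j], img[2*i][2*j+1],
                 img[2*i+1][2*j], img[2*i+1][2*j+1])
             for j in range(w // 2)] for i in range(h // 2)]

def upsample2x(img):
    """Nearest-neighbour upsampling: one 'decoder' up-sampling step."""
    out = []
    for row in img:
        wide = [v for v in row for _ in range(2)]
        out.append(wide)
        out.append(list(wide))
    return out

# Equal numbers of down- and up-sampling steps restore the input resolution,
# matching the encoder-decoder architecture described above.
img = [[(i * 8 + j) % 7 for j in range(8)] for i in range(8)]
x = downsample2x(downsample2x(img))   # encoder: 8x8 -> 2x2
y = upsample2x(upsample2x(x))         # decoder: 2x2 -> 8x8
print(len(y), len(y[0]))  # 8 8
```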
S50: setting a watershed threshold, constructing an image extraction sub-model based on a watershed algorithm, and training the image extraction sub-model through the multi-level characteristic identification sample;
as shown in fig. 3, step S50 includes the steps of:
s51: constructing a watershed treatment layer based on a watershed algorithm and a watershed threshold value, and carrying out segmentation recognition on the multi-level characteristic recognition sample;
s52: setting aggregation constraint parameters, wherein the aggregation constraint parameters comprise color aggregation parameters and distance aggregation parameters;
s53: constructing a feature polymerization layer based on the color polymerization parameters and the distance polymerization parameters;
s54: inputting the image segmentation recognition result of the watershed processing layer into a feature aggregation layer, aggregating the image segmentation recognition result according to the color aggregation parameter and the distance aggregation parameter, and outputting an image behavior recognition result.
Specifically, a watershed threshold is set and an image extraction sub-model is constructed based on the watershed algorithm. A watershed processing layer is built from the watershed algorithm and the watershed threshold and performs segmentation recognition on the multi-level feature recognition samples: each image in the multi-level feature recognition samples is regarded as a topographic surface to be flooded, the positions of the peaks are found by computing the gradient at each point of the image, and the image is divided into a plurality of regions starting from these peaks, so that the gray levels of pixels within each region are similar while the gray-level difference between adjacent regions is large. The gray segmentation threshold corresponding to this division of the image into regions is taken as the watershed threshold, and the layer that divides the image in the multi-level feature recognition samples into the plurality of regions serves as the watershed processing layer;
The aggregation constraint parameters comprise a color aggregation parameter and a distance aggregation parameter. Color aggregation parameter: aggregated pixels and non-aggregated pixels are generally judged by the color difference between adjacent pixels; a color histogram is used to extract the color features of the aggregated pixels, the bins of each color histogram are then grouped according to the color aggregation parameter, and the values within each group are averaged to obtain an aggregated color vector. Distance aggregation parameter: in the distance aggregation process, pixels that are close to each other are aggregated into one region, which reduces the number of segmented regions and yields a more accurate segmentation result;
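The bin-grouping step described above can be sketched as follows for a gray-level histogram (the function name, bin count and group size are illustrative assumptions; the patent does not fix them):

```python
def aggregated_color_vector(pixels, n_bins=16, group_size=4):
    """Build a gray-level histogram, group its bins according to the color
    aggregation parameter, and average within each group to obtain a
    compact aggregated color vector."""
    hist = [0] * n_bins
    for v in pixels:  # v is a gray value in 0..255
        hist[min(v * n_bins // 256, n_bins - 1)] += 1
    # group consecutive bins and take the mean within each group
    return [sum(hist[i:i + group_size]) / group_size
            for i in range(0, n_bins, group_size)]

vec = aggregated_color_vector([0, 10, 200, 250, 255])
print(len(vec))  # 4 grouped-bin averages
```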
A feature aggregation layer is set up with the color aggregation parameter as the first layering constraint and the distance aggregation parameter as the second layering constraint. The output end of the watershed processing layer is connected to the input end of the feature aggregation layer; the image segmentation recognition result of the watershed processing layer is input into the feature aggregation layer, aggregated according to the color aggregation parameter and the distance aggregation parameter, and the image behavior recognition result is output. The watershed segmentation algorithm segments the image based on its gradient; because an image contains many regions, spurious local minima inside flat regions of the gradient image can cause over-segmentation. For this reason, layered constraints are applied according to the color aggregation parameter and the distance aggregation parameter, improving the rationality of the image segmentation;
Preferably, the watershed segmentation algorithm is highly sensitive to small gray-level changes, so it can accurately locate object edges in the image, and the segmented regions are closed and connected. In this application, the watershed is used to segment contours in order to determine a worker's actions and capture the worker's behavior as it actually occurs;
The image extraction sub-model is trained with the multi-level feature recognition samples as training data, with model fitting performed by reference to the training process of the semantic segmentation sub-model, thereby generating the image extraction sub-model. Using the watershed to segment contours and determine actions provides a model foundation for recognizing workers' actions.
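A minimal sketch of the feature aggregation layer's two layered constraints, merging over-segmented watershed regions whose mean gray values and centroids are both close (thresholds, region representation and the greedy merge order are assumptions for illustration):

```python
def merge_regions(regions, color_thresh=10, dist_thresh=5.0):
    """Greedily merge regions that satisfy BOTH layered constraints:
    mean gray difference < color_thresh (color aggregation parameter)
    and centroid distance < dist_thresh (distance aggregation parameter).
    Each region is a list of (x, y, gray) pixels."""
    def mean_gray(r):
        return sum(p[2] for p in r) / len(r)

    def centroid(r):
        return (sum(p[0] for p in r) / len(r), sum(p[1] for p in r) / len(r))

    regions = [list(r) for r in regions]
    merged = True
    while merged:
        merged = False
        for i in range(len(regions)):
            for j in range(i + 1, len(regions)):
                ci, cj = centroid(regions[i]), centroid(regions[j])
                dist = ((ci[0] - cj[0]) ** 2 + (ci[1] - cj[1]) ** 2) ** 0.5
                if (abs(mean_gray(regions[i]) - mean_gray(regions[j])) < color_thresh
                        and dist < dist_thresh):
                    regions[i].extend(regions.pop(j))  # merge j into i
                    merged = True
                    break
            if merged:
                break
    return regions

# Two similar, nearby fragments merge; the distant bright one stays separate.
out = merge_regions([[(0, 0, 100)], [(1, 0, 104)], [(50, 50, 200)]])
print(len(out))  # 2
```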
Step S51 includes the steps of:
s511: performing image binarization processing on the multi-level characteristic identification sample, and converting the multi-level characteristic identification sample into a gray level image;
s512: determining a minimum value point based on the gray value in the gray image, wherein the minimum value point is a pixel point with the minimum gray value;
s513: setting a watershed threshold according to gray value distribution in the gray image;
S514: taking each minimum point as a starting point, simulating water injection so that the water level rises; when the water bodies corresponding to any two starting points meet, a dividing line is generated, and it is judged whether the gray value of the dividing line is larger than the watershed threshold;
S515: when the gray value of the dividing line is larger than the watershed threshold, the dividing line is retained; when it is smaller than the watershed threshold, the dividing line is submerged. Flooding continues until the maximum gray value is reached, completing the image recognition division.
Specifically, the segmentation recognition of the multi-level feature recognition samples comprises: performing grayscale (image binarization) preprocessing on the images in the multi-level feature recognition samples, converting color images into gray images; determining the minimum points according to the gray values in the gray image, where a minimum point is a pixel with the locally smallest gray value (the gray value represents height); and setting the watershed threshold according to the gray segmentation threshold that divides the image in the multi-level feature recognition samples into the plurality of regions, together with the gray value distribution in the gray image;
In the flooding manner, each minimum point is taken as a starting point and the water level is raised; the gray value of each newly reached pixel is compared with the watershed threshold, a dividing line is generated when the water bodies corresponding to any two starting points meet, and it is judged whether the gray value of the dividing line is larger than the watershed threshold: when it is larger than the watershed threshold, the dividing line is retained; when it is smaller than the watershed threshold, the pixels are submerged, and the dividing line is placed at the submerged pixels, until the maximum gray value is reached. Image recognition division is thereby completed, providing a reference for fine division of the images in the multi-level feature recognition samples.
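A one-dimensional toy sketch of the flooding procedure above, purely for illustration (a production system would use an image-library watershed such as OpenCV's `cv2.watershed`): basins grow from local minima as the water level rises; where two basins meet, the divide is kept if its gray value exceeds the watershed threshold and otherwise submerged (the basins merge).

```python
def watershed_1d(profile, thresh):
    """Flood a 1-D integer gray profile from its local minima. Pixels where
    two basins meet become dividing lines (-1) if their gray value exceeds
    thresh; otherwise the divide is submerged and the basins merge."""
    n = len(profile)
    label = [0] * n  # 0 = unlabeled, -1 = dividing line, >0 = basin id
    next_id = 1
    for i in range(n):  # seed each local minimum with its own basin id
        left = profile[i - 1] if i > 0 else float("inf")
        right = profile[i + 1] if i < n - 1 else float("inf")
        if profile[i] <= left and profile[i] <= right:
            label[i] = next_id
            next_id += 1
    for level in range(min(profile), max(profile) + 1):  # rising water level
        for i in range(n):
            if label[i] != 0 or profile[i] > level:
                continue
            neigh = {label[j] for j in (i - 1, i + 1)
                     if 0 <= j < n and label[j] > 0}
            if len(neigh) == 1:
                label[i] = neigh.pop()      # grow the single adjacent basin
            elif len(neigh) > 1:            # two basins meet here
                if profile[i] > thresh:
                    label[i] = -1           # keep the dividing line
                else:
                    keep = min(neigh)       # submerge: merge the basins
                    label[i] = keep
                    label[:] = [keep if v in neigh else v for v in label]
    return label

# Deep valley | high ridge | deep valley: the ridge (gray 9 > thresh 5) survives.
print(watershed_1d([1, 3, 9, 3, 1], thresh=5))  # [1, 1, -1, 2, 2]
```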
S60: connecting the semantic segmentation sub-model and the image extraction sub-model through a connecting layer to construct an image recognition model, and embedding the image recognition model into an image recognition module;
s70: and carrying out position recognition on the inspection robot based on the positioning equipment, comparing the inspection robot with the target subarea to determine the information of the target subarea where the inspection robot is located, carrying out image acquisition by utilizing the image acquisition equipment based on the illegal activity grade division corresponding to the target subarea, synchronizing the acquired images to the image recognition module to carry out image activity recognition, and carrying out matching with the illegal activity grade-image feature library according to the image activity recognition result to obtain the illegal activity recognition result.
After obtaining the violation identification result, the embodiment of the application further includes the steps of:
s71: setting a multi-stage early warning signal rule based on the target partition and the violation level-image feature library;
s72: based on the violation identification result and the multi-stage early warning signal rule, carrying out early warning category and level identification, and determining an early warning signal rule;
s73: and generating an early warning signal according to the early warning signal rule, and sending the early warning signal through the inspection robot.
Specifically, the semantic segmentation sub-model and the image extraction sub-model are combined through the connection layer as serially connected processing nodes: the output end of the semantic segmentation sub-model is connected to the input end of the image extraction sub-model, generating the image recognition model. The image recognition model is then embedded in the image recognition module, which is used for recognizing workers' actions as they actually occur;
The position of the inspection robot is recognized based on the positioning equipment, and the images currently acquired by the image acquisition equipment in communication connection with the inspection robot are compared with the target partitions to determine the target partition where the inspection robot is located. The violation level division corresponding to the target partition expresses the level of the same violation in different partitions; for example, in low-risk area A the violation level for not wearing safety protection equipment is level 3, while in high-risk area B it is level 7. Based on the violation level division corresponding to the target partition, violations are graded with the violation level-image feature library; images are acquired with the image acquisition equipment and synchronized to the image recognition module for image behavior recognition, and the image behavior recognition result is matched against the violation level-image feature library to obtain the violation recognition result, which is used to issue graded violation warnings according to the violation level;
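The partition-dependent grading in the example above can be expressed as a simple lookup. The area names and behavior key are hypothetical identifiers; only the level values (3 and 7) come from the patent's example:

```python
# violation level depends on both the partition's risk and the behavior
VIOLATION_LEVELS = {
    ("area_A_low_risk", "no_safety_equipment"): 3,
    ("area_B_high_risk", "no_safety_equipment"): 7,
}

def violation_level(partition, behavior):
    """Return the violation level for a behavior in a given partition,
    or 0 when the behavior is not a registered violation there."""
    return VIOLATION_LEVELS.get((partition, behavior), 0)

print(violation_level("area_B_high_risk", "no_safety_equipment"))  # 7
```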
After the violation recognition result is obtained, the method further comprises: setting multi-level early warning signal rules from the perspective of warning against violations, based on the target partitions and the violation level-image feature library, where the multi-level early warning signal rules comprise red-level 7, orange-level 6, yellow-level 5, green-level 4, cyan-level 3, blue-level 2 and purple-level 1; performing early warning category and level recognition within the multi-level early warning signal rules according to the violation recognition result, and determining the early warning signal rule (level 7, 6, 5, 4, 3, 2 or 1), which serves as the content of the early warning signal; and setting the early warning signal and sending it synchronously through the inspection robot to the intelligent inspection system for identifying workers' violations. After a violation is determined, graded early warning is performed and the early warning signal is sent to the intelligent inspection system, providing support for reminding workers and correcting violations in time.
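The seven-level rule above can be sketched as a simple mapping (the colors and levels are the patent's; the dictionary encoding and function are illustrative assumptions):

```python
# multi-level early warning signal rules: level <-> color, per the description
EARLY_WARNING_RULES = {
    7: "red", 6: "orange", 5: "yellow", 4: "green",
    3: "cyan", 2: "blue", 1: "purple",
}

def early_warning_signal(violation_level):
    """Map a recognized violation level to its early-warning signal,
    or None when no registered violation level applies."""
    color = EARLY_WARNING_RULES.get(violation_level)
    if color is None:
        return None  # no violation / unregistered level: no signal
    return {"level": violation_level, "color": color}

print(early_warning_signal(7))  # {'level': 7, 'color': 'red'}
```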
In summary, the intelligent inspection method and system for identifying the illegal behaviors of workers provided by the embodiment of the application have the following technical effects:
1. Because the working target area is obtained and partitioned to establish the target partitions; violations are graded to construct the violation level-image feature library; multi-level feature segmentation samples and multi-level feature recognition samples are generated; the semantic segmentation sub-model and the image extraction sub-model are constructed; the robot's position is recognized and compared with the target partitions to determine the target partition where the inspection robot is located; images are acquired based on the violation level division corresponding to that partition and synchronized to the image recognition module for image behavior recognition; and the image behavior recognition result is matched against the violation level-image feature library, the violation recognition result is obtained.
2. Because the encoder and the decoder are adopted to obtain the semantic segmentation sub-model; the multi-level feature segmentation samples are segmented to determine the multi-level sample image segmentation results; and the encoder and decoder are trained with the multi-level feature segmentation samples and the multi-level sample image segmentation results until convergence, persons and work objects are segmented by the semantic segmentation technique, providing model support for distinguishing and identifying work objects and people.
Example two
Based on the same inventive concept as the intelligent inspection method for identifying the illegal behaviors of the workers in the foregoing embodiments, as shown in fig. 4, an embodiment of the present application provides an intelligent inspection system for identifying the illegal behaviors of the workers, where the system includes:
the target partition establishing module 100 is configured to obtain a working target area, partition the working target area according to working content, equipment information and risk level, and establish a target partition;
the offence grading module 200 is configured to perform offence grading in combination with an offence image sample based on the working content, the equipment information and the risk degree in the target partition, and construct an offence grade-image feature library;
the sample generation module 300 is configured to generate a multi-level feature segmentation sample and a multi-level feature identification sample according to the violation level-image feature library;
A first model training module 400, configured to construct a semantic segmentation sub-model based on semantic segmentation logic, and train the semantic segmentation sub-model through the multi-level feature segmentation sample;
the second model training module 500 is configured to set a watershed threshold, construct an image extraction sub-model based on a watershed algorithm, and train the image extraction sub-model through the multi-level feature identification sample;
the image recognition model construction module 600 is configured to connect the semantic segmentation sub-model and the image extraction sub-model through a connection layer, construct an image recognition model, and embed the image recognition model into the image recognition module;
the violation identification result obtaining module 700 is configured to identify a position of the inspection robot based on the positioning device, compare the position of the inspection robot with the target partition to determine information of the target partition where the inspection robot is located, perform image acquisition based on the violation classification corresponding to the target partition by using the image acquisition device, synchronize the acquired image to the image identification module to perform image behavior identification, and match the image behavior identification result with a violation classification-image feature library to obtain a violation identification result.
Further, the system includes:
the multi-stage early warning signal rule setting module is used for setting multi-stage early warning signal rules based on the target partition and the violation level-image feature library;
the early warning signal rule determining module is used for carrying out early warning category and level recognition based on the violation recognition result and the multi-level early warning signal rule, and determining an early warning signal rule;
and the early warning signal generation module is used for generating an early warning signal according to the early warning signal rule and sending the early warning signal through the inspection robot.
Further, the system includes:
the working target area obtaining module is used for obtaining a working target area, wherein the working target area comprises target area working content, target area operation equipment, equipment operation risk information and accident cases;
the first partition obtaining module is used for carrying out content time sequence repeatability analysis according to the working content of the target area, setting a repeatability threshold value and obtaining a first partition by partitioning;
the relevance threshold setting module is used for carrying out relevance analysis of the operation equipment according to the target area operation equipment based on the first partition, setting a relevance threshold and carrying out re-partition on the first partition to obtain a second partition;
The second partition risk coefficient obtaining module is used for carrying out risk coefficient numerical quantification according to a set rule and according to the equipment operation risk information and the accident case based on the second partition to obtain a second partition risk coefficient;
and the re-partitioning module is used for partitioning the second partition according to the second partition risk coefficient and re-partitioning the second partition according to the preset gap value requirement and the partition range requirement to finish the target partition.
Further, the system includes:
the accident case extraction module is used for extracting the accident case based on the working content, the equipment information and the risk degree;
the accident level classification result obtaining module is used for carrying out accident level clustering on accident cases to obtain an accident level classification result, wherein the accident level classification result is different level classification results aiming at the same accident;
the accident feature extraction module is used for extracting corresponding image samples based on the accident level classification result, extracting accident features of the image samples, and carrying out feature labeling on the extracted features based on preset illegal behaviors, wherein the feature labeling comprises operation behavior feature labeling and operation object feature labeling.
Further, the system includes:
the semantic segmentation sub-model obtaining module is used for constructing an encoder and a decoder based on the full convolution neural network structure to obtain the semantic segmentation sub-model;
the multi-stage sample image segmentation result determining module is used for segmenting the multi-stage feature segmentation samples and determining multi-stage sample image segmentation results;
and the encoder and decoder training module is used for training the encoder and the decoder by adopting the multi-stage characteristic segmentation samples and the multi-stage sample image segmentation results until the model converges to the requirement.
Further, the system includes:
the segmentation recognition module is used for constructing a watershed treatment layer based on a watershed algorithm and a watershed threshold value and carrying out segmentation recognition on the multi-level characteristic recognition sample;
the aggregation constraint parameter setting module is used for setting aggregation constraint parameters, wherein the aggregation constraint parameters comprise color aggregation parameters and distance aggregation parameters;
the characteristic polymerization layer construction module is used for constructing a characteristic polymerization layer based on the color polymerization parameters and the distance polymerization parameters;
and the image behavior recognition result output module is used for inputting the image segmentation recognition result of the watershed processing layer into the characteristic aggregation layer, aggregating the image segmentation recognition result according to the color aggregation parameter and the distance aggregation parameter, and outputting the image behavior recognition result.
Further, the system includes:
the gray image conversion module is used for carrying out image binarization processing on the multi-level characteristic identification sample and converting the multi-level characteristic identification sample into a gray image;
the minimum value point determining module is used for determining a minimum value point based on the gray value in the gray image, wherein the minimum value point is the pixel point with the minimum gray value;
the watershed threshold setting module is used for setting a watershed threshold according to the gray value distribution in the gray image;
the gray value judging module is used for taking the minimum points as starting points, simulating water injection so that the water level rises, generating a dividing line when the water bodies corresponding to any two starting points meet, and judging whether the gray value of the dividing line is larger than the watershed threshold;
and the image recognition segmentation module is used for retaining the dividing line when its gray value is larger than the watershed threshold, and submerging the dividing line when its gray value is smaller than the watershed threshold, until the maximum gray value is reached, completing image recognition segmentation.
Any of the steps of the methods described above may be stored as computer instructions or programs in a computer-readable memory and invoked by a computer processor to implement any of the methods of the embodiments of the present application, without unnecessary limitation.
Further, the terms "first" and "second" may represent not only a sequential relationship but also a particular concept, and/or may refer to one element individually or to a plurality of elements as a whole. It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from its scope. Thus, if such modifications and variations fall within the scope of the present application and its equivalents, the present application is intended to cover them.

Claims (8)

1. An intelligent inspection method for identifying illegal behaviors of workers is characterized by being applied to an inspection robot, wherein the inspection robot is provided with image acquisition equipment and positioning equipment, and the method comprises the following steps:
a working target area is obtained, the working target area is partitioned according to working content, equipment information and risk degree, and a target partition is established;
based on the working content, equipment information and the risk degree in the target partition, carrying out illegal action grading by combining an illegal image sample, and constructing an illegal grade-image feature library;
generating a multi-level feature segmentation sample and a multi-level feature identification sample according to the violation level-image feature library;
Based on semantic segmentation logic, constructing a semantic segmentation sub-model, and training the semantic segmentation sub-model through the multi-level feature segmentation sample;
setting a watershed threshold, constructing an image extraction sub-model based on a watershed algorithm, and training the image extraction sub-model through the multi-level characteristic identification sample;
connecting the semantic segmentation sub-model and the image extraction sub-model through a connecting layer to construct an image recognition model, and embedding the image recognition model into an image recognition module;
and recognizing the position of the inspection robot based on the positioning equipment and comparing it with the target partitions to determine the target partition where the inspection robot is located; based on the violation level division corresponding to that target partition, acquiring images with the image acquisition equipment, synchronizing the acquired images to the image recognition module for image behavior recognition, and matching the image behavior recognition result against the violation level-image feature library to obtain the violation recognition result.
2. The method of claim 1, wherein after the violation recognition result is obtained, the method further comprises:
setting a multi-stage early warning signal rule based on the target partition and the violation level-image feature library;
Based on the violation identification result and the multi-stage early warning signal rule, carrying out early warning category and level identification, and determining an early warning signal rule;
and generating an early warning signal according to the early warning signal rule, and sending the early warning signal through the inspection robot.
3. The method of claim 1, wherein the obtaining the work target area, partitioning the work target area according to the work content, the equipment information, and the risk level, and establishing the target partition, includes:
the method comprises the steps of obtaining a working target area, wherein the working target area comprises target area working content, target area operation equipment, equipment operation risk information and accident cases;
according to the working content of the target area, carrying out content time sequence repeatability analysis, setting a repeatability threshold value, and carrying out partition to obtain a first partition;
based on the first partition, performing operation equipment relevance analysis according to the target area operation equipment, setting a relevance threshold value, and re-partitioning the first partition to obtain a second partition;
based on the second partition, carrying out risk coefficient numerical quantification according to a set rule according to the equipment operation risk information and the accident case to obtain a second partition risk coefficient;
and dividing the second partition by difference value according to the second partition risk coefficient, and re-partitioning the second partition according to a preset difference value requirement and a partition range requirement, to complete the target partition.
4. The method of claim 1, wherein ranking violations in combination with the offending image samples based on work content, device information, and risk level in the target partition comprises:
carrying out accident case extraction based on the working content, the equipment information and the risk degree;
carrying out accident level clustering on the accident cases to obtain accident level classification results, wherein the accident level classification results are different level classification results aiming at the same accident;
and extracting corresponding image samples based on the accident level classification result, extracting accident characteristics of the image samples, and carrying out characteristic labeling on the extracted characteristics based on preset illegal behaviors, wherein the characteristic labeling comprises operation behavior characteristic labeling and operation object characteristic labeling.
5. The method of claim 1, wherein constructing a semantic segmentation sub-model based on semantic segmentation logic and training the semantic segmentation sub-model through the multi-level feature segmentation samples comprises:
Constructing an encoder and a decoder based on a full convolution neural network structure to obtain the semantic segmentation sub-model;
dividing the multi-stage feature division sample to determine a multi-stage sample image division result;
and training the encoder and the decoder by adopting the multi-stage characteristic segmentation samples and the multi-stage sample image segmentation results until the model converges to the requirement.
6. The method of claim 1, wherein setting a watershed threshold and constructing an image extraction sub-model based on a watershed algorithm comprises:
constructing a watershed processing layer based on the watershed algorithm and the watershed threshold, and performing segmentation recognition on the multi-level feature recognition samples;
setting aggregation constraint parameters, wherein the aggregation constraint parameters comprise a color aggregation parameter and a distance aggregation parameter;
constructing a feature aggregation layer based on the color aggregation parameter and the distance aggregation parameter;
and inputting the image segmentation recognition result of the watershed processing layer into the feature aggregation layer, aggregating the image segmentation recognition result according to the color aggregation parameter and the distance aggregation parameter, and outputting an image behavior recognition result.
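For illustration only (not part of the claims): the feature aggregation layer's constraint logic can be sketched as a greedy merge, where two segments are combined only when both their mean-color difference and their centroid distance fall within the aggregation constraint parameters. The `aggregate` helper and the example segments are assumptions made for this sketch.

```python
import numpy as np

def aggregate(segments, color_tol, dist_tol):
    """Greedy pairwise aggregation: merge two segment groups when both the
    mean-color difference (<= color_tol per channel) and the centroid
    distance (<= dist_tol) satisfy the aggregation constraints."""
    groups = [[i] for i in range(len(segments))]
    merged = True
    while merged:
        merged = False
        for a in range(len(groups)):
            for b in range(a + 1, len(groups)):
                ca = np.mean([segments[i]["color"] for i in groups[a]], axis=0)
                cb = np.mean([segments[i]["color"] for i in groups[b]], axis=0)
                pa = np.mean([segments[i]["pos"] for i in groups[a]], axis=0)
                pb = np.mean([segments[i]["pos"] for i in groups[b]], axis=0)
                if (np.abs(ca - cb).max() <= color_tol
                        and np.linalg.norm(pa - pb) <= dist_tol):
                    groups[a] += groups.pop(b)   # merge b into a
                    merged = True
                    break
            if merged:
                break
    return groups

# hypothetical watershed output: two similar nearby segments, one outlier
segs = [
    {"color": np.array([10.0, 10.0, 10.0]), "pos": np.array([0.0, 0.0])},
    {"color": np.array([12.0, 11.0, 10.0]), "pos": np.array([1.0, 1.0])},
    {"color": np.array([200.0, 5.0, 5.0]), "pos": np.array([1.0, 0.0])},
]
groups = aggregate(segs, color_tol=5.0, dist_tol=3.0)
```

The first two segments satisfy both constraints and collapse into one region; the outlier fails the color constraint and remains separate.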
7. The method of claim 6, wherein performing segmentation recognition on the multi-level feature recognition sample comprises:
performing image binarization processing on the multi-level feature recognition sample and converting it into a gray-level image;
determining minimum value points based on the gray values in the gray-level image, wherein a minimum value point is a pixel point with a locally minimum gray value;
setting the watershed threshold according to the gray-value distribution in the gray-level image;
taking the minimum value points as starting points, simulating a rising liquid level by water injection, generating a dividing line wherever the liquid levels from any two starting points meet, and judging whether the gray value of the dividing line is greater than the watershed threshold;
and retaining the dividing line when its gray value is greater than the watershed threshold, submerging the dividing line when its gray value is smaller than the watershed threshold, and continuing until the maximum gray value is reached, completing the image recognition segmentation.
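For illustration only (not part of the claims): the flooding steps above can be sketched on a 1-D gray-level profile. The `watershed_1d` helper, its union-find merging and the example profile are assumptions made for this sketch; a production system would flood a full 2-D image.

```python
import numpy as np

def watershed_1d(gray, ridge_thresh):
    """Toy 1-D watershed flood following the recipe of claim 7: open a basin
    at each gray-value minimum, raise the water level pixel by pixel, and
    when two basins meet, keep the meeting point as a dividing line only if
    its gray value exceeds ridge_thresh; otherwise submerge it (merge)."""
    gray = np.asarray(gray)
    n = len(gray)
    labels = np.zeros(n, dtype=int)      # 0 = not yet flooded
    parent = {}                          # union-find over basin ids

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    next_label, ridges = 1, []
    for idx in np.argsort(gray, kind="stable"):   # rising water level
        neigh = {find(labels[j]) for j in (idx - 1, idx + 1)
                 if 0 <= j < n and labels[j] != 0}
        if not neigh:                    # a local minimum: open a new basin
            parent[next_label] = next_label
            labels[idx] = next_label
            next_label += 1
        elif len(neigh) == 1:            # water spreads within one basin
            labels[idx] = neigh.pop()
        else:                            # basins meet at this pixel
            if gray[idx] > ridge_thresh:
                ridges.append(int(idx))  # keep the dividing line
                labels[idx] = neigh.pop()
            else:                        # submerge: merge the basins
                roots = list(neigh)
                for other in roots[1:]:
                    parent[find(other)] = find(roots[0])
                labels[idx] = roots[0]
    return [find(l) for l in labels], ridges

# minima at indices 1 and 5, separated by a high ridge at index 3
labels, ridges = watershed_1d([5, 1, 3, 8, 2, 0, 4], ridge_thresh=6)
```

With the threshold at 6 the ridge (gray value 8) survives and two basins remain; raising the threshold above 8 would submerge the dividing line and merge the basins into one region.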
8. An intelligent inspection system for identifying illegal behaviors of workers, characterized by implementing the intelligent inspection method for identifying illegal behaviors of workers as claimed in any one of claims 1-7, and comprising:
the target partition establishing module, which is used for obtaining a work target area, partitioning the work target area according to work content, equipment information and risk degree, and establishing target partitions;
the violation grading module, which is used for grading violations according to the work content, the equipment information and the risk degree in the target partition, in combination with the violation image samples, to construct a violation level-image feature library;
the sample generation module, which is used for generating multi-level feature segmentation samples and multi-level feature recognition samples according to the violation level-image feature library;
the first model training module, which is used for constructing a semantic segmentation sub-model based on semantic segmentation logic and training the semantic segmentation sub-model with the multi-level feature segmentation samples;
the second model training module, which is used for setting a watershed threshold, constructing an image extraction sub-model based on a watershed algorithm, and training the image extraction sub-model with the multi-level feature recognition samples;
the image recognition model construction module, which is used for connecting the semantic segmentation sub-model and the image extraction sub-model through a connection layer to construct an image recognition model, and embedding the image recognition model into the image recognition module;
and the violation recognition result obtaining module, which is used for locating the inspection robot based on the positioning equipment, comparing the location with the target partitions to determine the target partition in which the inspection robot is located, using the image acquisition equipment to acquire images according to the violation levels corresponding to that target partition, synchronizing the acquired images to the image recognition module for image behavior recognition, and matching the image behavior recognition result against the violation level-image feature library to obtain the violation recognition result.
CN202310573607.8A 2023-05-22 2023-05-22 Intelligent inspection method and system for identifying illegal behaviors of workers Active CN116311541B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310573607.8A CN116311541B (en) 2023-05-22 2023-05-22 Intelligent inspection method and system for identifying illegal behaviors of workers


Publications (2)

Publication Number Publication Date
CN116311541A (en) 2023-06-23
CN116311541B (en) 2023-08-04

Family

ID=86818918

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310573607.8A Active CN116311541B (en) 2023-05-22 2023-05-22 Intelligent inspection method and system for identifying illegal behaviors of workers

Country Status (1)

Country Link
CN (1) CN116311541B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110472616A (en) * 2019-08-22 2019-11-19 腾讯科技(深圳)有限公司 Image-recognizing method, device, computer equipment and storage medium
CN112183265A (en) * 2020-09-17 2021-01-05 国家电网有限公司 Electric power construction video monitoring and alarming method and system based on image recognition
CN113642631A (en) * 2021-08-10 2021-11-12 沭阳协润电子有限公司 Dangerous area electronic fence generation method and system based on artificial intelligence
WO2022142827A1 (en) * 2020-12-30 2022-07-07 华为技术有限公司 Road occupancy information determination method and apparatus
CN114972203A (en) * 2022-04-29 2022-08-30 南通市立新机械制造有限公司 Mechanical part rolling abnormity detection method based on watershed segmentation


Non-Patent Citations (1)

Title
ZHAO, Yiru et al.: "Sidewalk illegal parking detection based on object detection and semantic segmentation", 《图形图像》 (Graphics & Image), pages 82-88 *

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN116596411A (en) * 2023-07-18 2023-08-15 广州健新科技有限责任公司 Production safety evaluation method and system combining two-ticket detection
CN116596411B (en) * 2023-07-18 2023-12-22 广州健新科技有限责任公司 Production safety evaluation method and system combining two-ticket detection

Also Published As

Publication number Publication date
CN116311541B (en) 2023-08-04

Similar Documents

Publication Publication Date Title
CN110135351B (en) Built-up area boundary identification method and equipment based on urban building space data
CN108509954A Real-time multi-license-plate dynamic recognition method for traffic scenes
CN112347916B (en) Video image analysis-based power field operation safety monitoring method and device
CN109858367B (en) Visual automatic detection method and system for worker through supporting unsafe behaviors
CN111311918B (en) Traffic management method and device based on visual analysis
CN112102226A (en) Data processing method, pattern detection method and wafer defect pattern detection method
Yusof et al. Crack detection and classification in asphalt pavement images using deep convolution neural network
CN110533950A Method, device, electronic device and storage medium for detecting parking space usage status
CN116311541B (en) Intelligent inspection method and system for identifying illegal behaviors of workers
CN113642474A (en) Hazardous area personnel monitoring method based on YOLOV5
CN110288823B (en) Traffic violation misjudgment identification method based on naive Bayesian network
CN113065578A (en) Image visual semantic segmentation method based on double-path region attention coding and decoding
CN116359218B (en) Industrial aggregation area atmospheric pollution mobile monitoring system
CN111127465A (en) Automatic generation method and system for bridge detection report
CN114581764B (en) Underground structure crack disease discriminating method based on deep learning algorithm
CN115995056A (en) Automatic bridge disease identification method based on deep learning
Uslu et al. Image-based 3D reconstruction and recognition for enhanced highway condition assessment
CN116168356A (en) Vehicle damage judging method based on computer vision
CN116975990B (en) Management method and system for three-dimensional model of oil-gas chemical engineering wharf
CN113158954A (en) Automatic traffic off-site zebra crossing area detection method based on AI technology
CN114066288B (en) Intelligent data center-based emergency detection method and system for operation road
CN111539363A (en) Highway rockfall identification and analysis method
CN111415326A (en) Method and system for detecting abnormal state of railway contact net bolt
CN113361968B (en) Power grid infrastructure worker safety risk assessment method based on artificial intelligence and big data
CN112861701B (en) Illegal parking identification method, device, electronic equipment and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: Room 702, 7th Floor, Building 10, Innovation Base, Science and Technology City New Area, Mianyang City, Sichuan Province, 621000

Patentee after: Titan (Mianyang) Energy Technology Co.,Ltd.

Country or region after: China

Address before: Building 1, No. 26 Bohai 33rd Road, Lingang Economic Zone, Binhai New Area, Tianjin, 300452

Patentee before: Titan (Tianjin) Energy Technology Co.,Ltd.

Country or region before: China
