CN113838094B - Safety early warning method based on intelligent video identification - Google Patents


Info

Publication number
CN113838094B
CN113838094B (application CN202111140674.8A)
Authority
CN
China
Prior art keywords
moving object
image
safety helmet
images
recognition
Prior art date
Legal status
Active
Application number
CN202111140674.8A
Other languages
Chinese (zh)
Other versions
CN113838094A (en)
Inventor
杜泽新
左天才
贺亚山
宋尔进
曾体健
崔珂伟
张孙蓉
张玉吉
李林
Current Assignee
Guizhou Wujiang Hydropower Development Co Ltd
Original Assignee
Guizhou Wujiang Hydropower Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Guizhou Wujiang Hydropower Development Co Ltd filed Critical Guizhou Wujiang Hydropower Development Co Ltd
Priority to CN202111140674.8A
Publication of CN113838094A
Application granted
Publication of CN113838094B
Legal status: Active

Classifications

    • G06T 7/246 — Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06N 3/045 — Neural networks; Combinations of networks
    • G06N 3/08 — Neural networks; Learning methods
    • G06T 7/13 — Image analysis; Edge detection
    • G06T 7/136 — Segmentation; Edge detection involving thresholding
    • G06T 7/73 — Determining position or orientation of objects or cameras using feature-based methods
    • G06T 2207/10016 — Image acquisition modality; Video; Image sequence
    • G06T 2207/20081 — Special algorithmic details; Training; Learning
    • G06T 2207/20084 — Special algorithmic details; Artificial neural networks [ANN]

Abstract

The invention discloses a safety early warning method based on intelligent video recognition. The method comprises: training a safety helmet recognition model based on a convolutional neural network; collecting video images through a camera; identifying moving objects in the video images by a two-frame difference method; and performing safety helmet wearing recognition on the moving objects in combination with the safety helmet recognition model. Specifically, the trained model is applied to a real-time video stream: moving objects in the video images are identified and their movement is analyzed, the images are recognized and compared in combination with the safety helmet recognition model, a rectangular marking frame is established at the position of the safety helmet, and safety helmet wearing is judged from the recognition comparison result, the comparison of the contour highest point and contour center point of the moving object, and the comparison of the angle formed by the diagonals of the rectangular marking frame.

Description

Safety early warning method based on intelligent video identification
Technical Field
The invention relates to the technical field of intelligent video identification, in particular to a safety early warning method based on intelligent video identification.
Background
Safety in production is receiving more and more attention, and enterprises take various measures to ensure the safety of their staff. Nevertheless, on-duty workers still sometimes perform dangerous operations without wearing safety helmets, and safety accidents caused by violations of safety regulations continue to occur at construction and operation sites. Managing safety helmet wearing has therefore become a major difficulty, and safety problems caused by not wearing helmets are common. Manual supervision suffers from high cost and low efficiency. Existing research locates the face based on color features and facial features and then identifies the safety helmet according to color features. However, locating the human body usually requires traversing the whole image with complex algorithms, which is time-consuming, labor-intensive and poor in real-time performance; moreover, construction site environments are complex, so recognition based on color features alone is inaccurate.
Disclosure of Invention
This section is intended to outline some aspects of embodiments of the invention and to briefly introduce some preferred embodiments. Some simplifications or omissions may be made in this section, in the abstract and in the title of the application to avoid obscuring their purpose; such simplifications or omissions should not be used to limit the scope of the invention.
The present invention has been made in view of the above-described problems occurring in the prior art.
Therefore, the technical problem solved by the invention is as follows: existing safety helmet wearing recognition locates the face based on facial features, which requires a complex algorithm to traverse the whole image, is time-consuming and labor-intensive, and has poor real-time performance; in addition, the construction site environment is complex, so color feature recognition is inaccurate.
In order to solve the above technical problems, the invention provides the following technical scheme: a safety early warning method based on intelligent video recognition, comprising the steps of,
based on a convolutional neural network, acquiring a safety helmet recognition model through training of a large sample;
collecting video images in a certain area through a camera;
identifying a moving object in the acquired video image by using a two-frame difference method;
and carrying out helmet wearing recognition on the recognized moving object by combining the helmet recognition model.
As a preferred scheme of the safety early warning method based on intelligent video recognition of the invention: obtaining the safety helmet recognition model through large-sample training based on a convolutional neural network comprises: collecting safety helmet data as sample data, mirroring the positive samples, generating and labeling a positive/negative sample index catalogue, building a convolutional neural network, training the convolutional neural network with the sample data, and saving the trained convolutional neural network as the safety helmet recognition model.
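As an illustration of this training step only, the following is a minimal sketch assuming a small PyTorch binary classifier over helmet / no-helmet image crops; the network layout, input size, directory layout and hyperparameters are assumptions, not details taken from the patent.

```python
# Minimal sketch (assumption): a small CNN helmet / no-helmet classifier in PyTorch.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Mirror (horizontal flip) augmentation of the samples, approximating the patent's
# mirror symmetry applied to positive samples.
train_tf = transforms.Compose([
    transforms.Resize((64, 64)),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ToTensor(),
])

# Assumed directory layout: data/train/helmet/*.jpg and data/train/no_helmet/*.jpg
train_set = datasets.ImageFolder("data/train", transform=train_tf)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

class HelmetNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
        )
        self.classifier = nn.Linear(64 * 8 * 8, 2)  # two classes: helmet / no helmet

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

model = HelmetNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for epoch in range(10):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# The trained network is saved as the "safety helmet recognition model".
torch.save(model.state_dict(), "helmet_model.pt")
```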
As a preferred scheme of the safety early warning method based on intelligent video recognition of the invention: identifying the moving object in the collected video image by the two-frame difference method comprises: calling a camera to acquire a real-time video stream, recording the n-th frame and (n-1)-th frame images as f_n and f_{n-1}, and recording the gray values of corresponding pixels of the two frames as f_n(x, y) and f_{n-1}(x, y); a difference image D_n is obtained from the absolute value of the gray-value difference of the corresponding pixels of the two frame images:

D_n(x, y) = |f_n(x, y) - f_{n-1}(x, y)|

By threshold comparison, when D_n(x, y) exceeds a set threshold, it is identified that a moving object exists in the current n-th frame image.
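The two-frame difference itself is straightforward to reproduce; the following is a minimal OpenCV sketch in which the capture source, the threshold value and the changed-pixel count used to declare a moving object are all assumptions.

```python
# Minimal sketch (assumption): two-frame difference moving-object detection with OpenCV.
import cv2

cap = cv2.VideoCapture(0)          # camera index / RTSP URL is an assumption
THRESH = 25                        # assumed gray-difference threshold

ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # D_n(x, y) = |f_n(x, y) - f_{n-1}(x, y)|
    diff = cv2.absdiff(gray, prev_gray)
    _, mask = cv2.threshold(diff, THRESH, 255, cv2.THRESH_BINARY)

    # A moving object is reported when enough pixels exceed the threshold.
    if cv2.countNonZero(mask) > 500:   # assumed minimum changed-pixel count
        print("moving object in current frame")

    prev_gray = gray
```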
As a preferred scheme of the safety early warning method based on intelligent video recognition of the invention: performing safety helmet wearing recognition on the recognized moving object in combination with the safety helmet recognition model comprises: marking each image in which a moving object exists as a first image, obtaining the contour of the moving object in the first image, and determining the center point position of the contour of the moving object; further obtaining the time node corresponding to each first image, sorting the first images according to their time nodes, performing movement analysis on the moving objects in first images with adjacent time nodes, and then marking first images according to the movement analysis result, the marked first images being second images.
As a preferred scheme of the safety early warning method based on intelligent video recognition of the invention: determining the center point position of the contour of the moving object comprises: according to the contour of the moving object, obtaining a first straight line between the highest point of the moving object and the horizontal ground, the length of the first straight line being H_max and the first straight line being perpendicular to the horizontal ground; determining two points whose distances to the horizontal ground take set values, and drawing through these two points a second straight line and a third straight line perpendicular to the first straight line; the second and third straight lines intercept two sections of the contour of the moving object, recorded as a first contour edge and a second contour edge; obtaining the point of the first contour edge and the point of the second contour edge that are farthest from the first straight line; drawing through these two points straight lines perpendicular to the horizontal ground, and drawing through the highest point a straight line parallel to the horizontal ground; the drawn straight lines and the horizontal ground enclose a rectangle, and the intersection point of the diagonals of the rectangle is the center point position of the contour of the moving object.
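A minimal sketch of this construction is given below; because the exact cut heights are specified by a formula not reproduced in this text, the sketch assumes cuts at one third and two thirds of H_max above the ground, and assumes the horizontal ground lies at the bottom of the image.

```python
# Minimal sketch (assumption): center point of a moving object's contour via the
# enclosing rectangle described above (vertical line from the highest point to the
# ground, two horizontal cut lines, extreme contour points between the cuts).
import cv2
import numpy as np

def contour_center(mask: np.ndarray):
    """mask: binary image of the moving object; ground assumed at the image bottom."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    pts = max(contours, key=cv2.contourArea).reshape(-1, 2)   # (x, y), y grows downwards

    top_y = pts[:, 1].min()                  # highest point of the contour
    ground_y = mask.shape[0] - 1             # horizontal ground (assumed: image bottom)
    h_max = ground_y - top_y                 # length of the "first straight line"

    # Assumed cut heights at 1/3 and 2/3 of H_max above the ground; the patent's
    # exact values come from a formula that is not reproduced in this text.
    y_lo = ground_y - h_max / 3.0
    y_hi = ground_y - 2.0 * h_max / 3.0

    # Contour points between the two horizontal cut lines form the left ("first") and
    # right ("second") contour edges; their extreme x-coordinates bound the rectangle.
    band = pts[(pts[:, 1] >= y_hi) & (pts[:, 1] <= y_lo)]
    if len(band) == 0:
        return None
    left, right = band[:, 0].min(), band[:, 0].max()

    # Rectangle: two vertical lines at x=left and x=right, a horizontal line through
    # the highest point, and the ground; the diagonal intersection is its center.
    return (left + right) / 2.0, (top_y + ground_y) / 2.0
```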
As a preferred scheme of the safety early warning method based on intelligent video recognition of the invention: performing movement analysis on the moving objects in first images with adjacent time nodes and marking first images according to the movement analysis result, the marked first images being second images, comprises: calculating the time interval between two adjacent time nodes, recorded as t_n, the two time nodes being recorded as a start time node and a stop time node respectively; if the time interval t_n is smaller than a time interval threshold, taking the time interval t_{n+1} between the next adjacent time node and the start time node, until the time interval is greater than or equal to the time interval threshold; determining the same moving object in the first images according to the contour of the moving object and the position of its center point; obtaining the position change distance of the same moving object; and, when the position change distance of the same moving object is greater than or equal to a change distance threshold, marking the first images in which the same moving object exists as second images, the change distance threshold being the product of a preset moving speed and the time interval.
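A minimal sketch of this movement analysis under assumed parameters follows; the time-interval threshold is an assumption, the preset moving speed of 1.25 m/s is the value given later in Embodiment 2, and the sketch assumes center points are already expressed in ground-plane meters.

```python
# Minimal sketch (assumption): mark images whose moving object has displaced far
# enough between time nodes (change distance threshold = preset speed * interval).
from dataclasses import dataclass

@dataclass
class Detection:
    time: float              # time node of the first image, in seconds
    center: tuple            # contour center point (x, y), assumed in meters

MIN_INTERVAL = 1.0           # assumed time-interval threshold, seconds
PRESET_SPEED = 1.25          # normal adult walking speed used in Embodiment 2, m/s

def second_images(detections):
    """detections: time-ordered Detection list for the same moving object."""
    marked = []
    i = 0
    while i + 1 < len(detections):
        start = detections[i]
        j = i + 1
        # Extend the interval until it reaches the time-interval threshold.
        while j < len(detections) and detections[j].time - start.time < MIN_INTERVAL:
            j += 1
        if j >= len(detections):
            break
        stop = detections[j]
        interval = stop.time - start.time
        dx = stop.center[0] - start.center[0]
        dy = stop.center[1] - start.center[1]
        distance = (dx * dx + dy * dy) ** 0.5
        if distance >= PRESET_SPEED * interval:
            marked.extend([start, stop])     # these first images become second images
        i = j
    return marked
```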
As a preferred scheme of the safety early warning method based on intelligent video recognition of the invention: performing safety helmet wearing recognition on the recognized moving object in combination with the safety helmet recognition model further comprises: obtaining a second image in which a moving object exists and performing recognition comparison on the second image in combination with the safety helmet recognition model; if the current recognition comparison is successful, further extracting color features from the region of the second image where the recognition comparison succeeded and comparing the extracted second color feature with a preset first color feature; if this comparison is successful, a safety helmet exists in the current second image, and the second image is marked.
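The patent does not specify what form the color features take; as one possible reading, the following sketch compares an HSV histogram of the matched region against a preset reference histogram, with an assumed similarity threshold.

```python
# Minimal sketch (assumption): HSV-histogram color comparison of the region where
# the helmet recognition model matched, against a preset reference color feature.
import cv2
import numpy as np

def hsv_hist(bgr_region: np.ndarray) -> np.ndarray:
    hsv = cv2.cvtColor(bgr_region, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def helmet_color_matches(region: np.ndarray, reference_hist: np.ndarray,
                         threshold: float = 0.7) -> bool:
    # Correlation close to 1.0 means the second color feature matches the preset
    # first color feature; 0.7 is an assumed acceptance threshold.
    score = cv2.compareHist(hsv_hist(region).astype(np.float32),
                            reference_hist.astype(np.float32),
                            cv2.HISTCMP_CORREL)
    return score >= threshold
```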
As a preferred scheme of the safety early warning method based on intelligent video recognition of the invention: performing safety helmet wearing recognition on the recognized moving object in combination with the safety helmet recognition model further comprises: obtaining the marked second image, determining the horizontal ground position in the image, and, in combination with the specific position of the safety helmet in the image, establishing a rectangular marking frame whose length is the vertical distance H between the safety helmet and the horizontal ground and whose width is a fixed value W, and performing safety helmet wearing recognition according to the characteristics of the rectangular marking frame.
As a preferred scheme of the safety early warning method based on intelligent video recognition of the invention: performing safety helmet wearing recognition according to the characteristics of the rectangular marking frame comprises:

extracting the characteristics of the rectangular marking frame, connecting the diagonals of the rectangular frame, recorded as l_1 and l_2 respectively, determining the angle θ_1 formed between diagonal l_1 and diagonal l_2, and calculating the value of θ_1 by a formula from the length H and width W of the frame; when θ_1 satisfies θ_min < θ_1 < θ_max and the distance between the center point of the region where the recognition comparison succeeded and the highest point of the contour of the moving object is smaller than a preset value, the current safety helmet wearing recognition passes; otherwise it does not pass, the region to which the image belongs is determined through the marked image, and a helmet-not-worn early warning is issued for that region.
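The exact formula for θ_1 is not reproduced in this text; the sketch below therefore assumes one natural reading, the acute angle between the diagonals of an H × W rectangle, θ_1 = 2·arctan(W/H), combined with the distance check against the contour highest point.

```python
# Minimal sketch (assumption): diagonal-angle test for the H x W rectangular
# marking frame. theta = 2 * atan(W / H) is an assumed reconstruction of the
# elided formula for the angle between the two diagonals.
import math

def diagonal_angle_deg(h_cm: float, w_cm: float) -> float:
    return math.degrees(2.0 * math.atan(w_cm / h_cm))

def helmet_wearing_passes(h_cm, w_cm, helmet_center, contour_top, max_dist,
                          theta_min, theta_max) -> bool:
    theta = diagonal_angle_deg(h_cm, w_cm)
    dist = math.dist(helmet_center, contour_top)   # Python 3.8+
    return theta_min < theta < theta_max and dist < max_dist
```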
The invention has the beneficial effects that: the safety helmet recognition model is obtained through training; a real-time video stream is obtained, and whether a moving object exists in an image is judged in combination with the two-frame difference method; the same moving object is identified in different images, and images are marked according to the movement analysis result of the moving object; the marked images are recognized and compared in combination with the safety helmet recognition model, a rectangular marking frame is established at the position where the safety helmet exists, and safety helmet wearing recognition is performed according to the safety helmet recognition comparison result, the comparison of the contour highest point and contour center point of the moving object, and the comparison of the angle formed by the diagonals of the rectangular marking frame.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art. Wherein:
Fig. 1 is a schematic diagram of the steps of the safety early warning method based on intelligent video recognition.
Detailed Description
So that the manner in which the above recited objects, features and advantages of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to the embodiments, some of which are illustrated in the appended drawings. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced in other ways other than those described herein, and persons skilled in the art will readily appreciate that the present invention is not limited to the specific embodiments disclosed below.
Further, reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic can be included in at least one implementation of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments.
While the embodiments of the present invention have been illustrated and described in detail in the drawings, the cross-sectional view of the device structure is not to scale in the general sense for ease of illustration, and the drawings are merely exemplary and should not be construed as limiting the scope of the invention. In addition, the three-dimensional dimensions of length, width and depth should be included in actual fabrication.
Also in the description of the present invention, it should be noted that the orientation or positional relationship indicated by the terms "upper, lower, inner and outer", etc. are based on the orientation or positional relationship shown in the drawings, are merely for convenience of describing the present invention and simplifying the description, and do not indicate or imply that the apparatus or elements referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first, second, or third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
The terms "mounted, connected, and coupled" should be construed broadly in this disclosure unless otherwise specifically indicated and defined, such as: can be fixed connection, detachable connection or integral connection; it may also be a mechanical connection, an electrical connection, or a direct connection, or may be indirectly connected through an intermediate medium, or may be a communication between two elements. The specific meaning of the above terms in the present invention will be understood in specific cases by those of ordinary skill in the art.
Example 1
Referring to fig. 1, the embodiment provides a safety early warning method based on intelligent video recognition, which includes acquiring a safety helmet recognition model through training of a large sample based on a convolutional neural network;
collecting video images in a certain area through a camera;
identifying a moving object in the acquired video image by using a two-frame difference method;
and carrying out helmet wearing recognition on the recognized moving object by combining the helmet recognition model.
Obtaining the safety helmet recognition model through large-sample training based on a convolutional neural network comprises: collecting safety helmet data as sample data, mirroring the positive samples, generating and labeling a positive/negative sample index catalogue, building a convolutional neural network, training the convolutional neural network with the sample data, and saving the trained convolutional neural network as the safety helmet recognition model.
Identifying the moving object in the acquired video image by the two-frame difference method comprises: calling a camera to acquire a real-time video stream, recording the n-th frame and (n-1)-th frame images as f_n and f_{n-1}, and recording the gray values of corresponding pixels of the two frames as f_n(x, y) and f_{n-1}(x, y); a difference image D_n is obtained from the absolute value of the gray-value difference of the corresponding pixels of the two frame images:

D_n(x, y) = |f_n(x, y) - f_{n-1}(x, y)|

By threshold comparison, when D_n(x, y) exceeds a set threshold, it is identified that a moving object exists in the current n-th frame image.
Performing safety helmet wearing recognition on the recognized moving object in combination with the safety helmet recognition model comprises: marking each image in which a moving object exists as a first image, obtaining the contour of the moving object in the first image, and determining the center point position of the contour of the moving object; further obtaining the time node corresponding to each first image, sorting the first images according to their time nodes, performing movement analysis on the moving objects in first images with adjacent time nodes, and then marking first images according to the movement analysis result, the marked first images being second images.
Determining the center point position of the contour of the moving object comprises: according to the contour of the moving object, obtaining a first straight line between the highest point and the horizontal ground, the length of the first straight line being H_max and the first straight line being perpendicular to the horizontal ground; determining two points whose distances to the horizontal ground take set values, and drawing through these two points a second straight line and a third straight line perpendicular to the first straight line; the second and third straight lines intercept two sections of the contour of the moving object, recorded as a first contour edge and a second contour edge; obtaining the point of the first contour edge and the point of the second contour edge that are farthest from the first straight line; drawing through these two points straight lines perpendicular to the horizontal ground, and drawing through the highest point a straight line parallel to the horizontal ground; the drawn straight lines and the horizontal ground enclose a rectangle, and the intersection point of the diagonals of the rectangle is the center point position of the contour of the moving object.
Performing movement analysis on the moving objects in first images with adjacent time nodes and marking first images according to the movement analysis result, the marked first images being second images, comprises: calculating the time interval between two adjacent time nodes, recorded as t_n, the two time nodes being recorded as a start time node and a stop time node respectively; if the time interval t_n is smaller than a time interval threshold, taking the time interval t_{n+1} between the next adjacent time node and the start time node, until the time interval is greater than or equal to the time interval threshold; determining the same moving object in the first images according to the contour of the moving object and the position of its center point; obtaining the position change distance of the same moving object; and, when the position change distance of the same moving object is greater than or equal to a change distance threshold, marking the first images in which the same moving object exists as second images, the change distance threshold being the product of a preset moving speed and the time interval.
Performing safety helmet wearing recognition on the recognized moving object in combination with the safety helmet recognition model further comprises: obtaining a second image in which a moving object exists and performing recognition comparison on the second image in combination with the safety helmet recognition model; if the current recognition comparison is successful, further extracting color features from the region of the second image where the recognition comparison succeeded and comparing the extracted second color feature with a preset first color feature; if this comparison is successful, a safety helmet exists in the current second image, and the second image is marked.
Performing safety helmet wearing recognition on the recognized moving object in combination with the safety helmet recognition model further comprises: obtaining the marked second image, determining the horizontal ground position in the image, and, in combination with the specific position of the safety helmet in the image, establishing a rectangular marking frame whose length is the vertical distance H between the safety helmet and the horizontal ground and whose width is a fixed value W, and performing safety helmet wearing recognition according to the characteristics of the rectangular marking frame.
Performing safety helmet wearing recognition according to the characteristics of the rectangular marking frame comprises:

extracting the characteristics of the rectangular marking frame, connecting the diagonals of the rectangular frame, recorded as l_1 and l_2 respectively, determining the angle θ_1 formed between diagonal l_1 and diagonal l_2, and calculating the value of θ_1 by a formula from the length H and width W of the frame; when θ_1 satisfies θ_min < θ_1 < θ_max and the distance between the center point of the region where the recognition succeeded and the highest point of the contour of the moving object is smaller than a preset value, the current safety helmet wearing recognition passes; otherwise it does not pass, the region to which the image belongs is determined through the marked image, and a helmet-not-worn early warning is issued for that region.
Example 2
Referring to fig. 1, the present embodiment provides a security pre-warning method based on intelligent video recognition, including,
based on a convolutional neural network, a safety helmet recognition model is obtained through training of a large sample:
collecting safety helmet data as sample data, performing mirror symmetry on positive samples, generating a positive and negative sample index catalog, marking, building a convolutional neural network, training the convolutional neural network by using the sample data, and storing the trained convolutional neural network into a safety helmet identification model;
collecting video images in a certain area through a camera;
identifying a moving object in the acquired video image by using a two-frame difference method:
calling a camera to acquire a real-time video stream, and recording the nth frame and the n-1 frame images as f n 、f n-1 The gray value of the corresponding pixel point of two frames is marked as f n (x, y) and f n-1 (x,y),Obtaining a difference image D by calculating the absolute value of the gray value difference value of the corresponding pixel points of the two frames of images n
D n (x,y)=|f n (x,y)-f n-1 (x,y)|
By threshold comparison, when D n When (x, y) exceeds a set threshold value, identifying that a moving object exists in the current nth frame image;
carrying out helmet wearing recognition on the recognized moving object by combining a helmet recognition model:
marking an image with a moving object, marking the image as a first image, acquiring the contour of the moving object in the first image, determining the position of the central point of the contour of the moving object, and acquiring a first straight line between the highest point of the contour of the moving object and the horizontal ground according to the contour of the moving object, wherein the length of the first straight line is H max And the first straight line is vertical to the horizontal ground, and the length from the first straight line to the horizontal ground is determined asThe two points are taken as a second straight line and a third straight line perpendicular to the first straight line, the second straight line and the third straight line intercept two sections of contour edges of the contour of the moving object, the two points are marked as a first contour edge and a second contour edge, the two points with the first contour edge and the second contour edge farthest from the first straight line are obtained, the two points are taken as straight lines perpendicular to the horizontal ground and the highest point is taken as straight lines parallel to the horizontal ground, a rectangle is formed between the straight line and the horizontal ground, the intersection point of the diagonal lines of the rectangle is the center point position of the contour of the moving object, the time node corresponding to the first image is further obtained, the first images are sequenced according to the time node, the moving object in the first images adjacent to the time node is further marked according to the moving analysis result, the marked first images are the second images, the time interval between the two adjacent time nodes is calculated, and the time interval is marked as t n The two time nodes are respectively marked as a starting time node and a stopping time node, if the time interval t n If the time interval threshold is smaller than the time interval threshold, the next adjacent time node and the starting time are taken downTime interval t between nodes n+1 Determining the same moving object in the first image according to the contour of the moving object and the position of the central point until the time interval is greater than or equal to a time interval threshold value, acquiring the position change distance of the same moving object, marking the first image with the same moving object as a second image when the position change distance of the same moving object is greater than or equal to a change distance threshold value, wherein the change distance threshold value is the product of a preset moving speed and the time interval, and the preset moving speed is 1.25m/s of the normal moving speed of an adult;
acquiring a second image with a moving object, carrying out recognition comparison on the second image by combining a safety helmet recognition model, if the current recognition comparison is successful, further carrying out color feature extraction on a region with successful recognition comparison in the second image, comparing the extracted second color feature with a preset first color feature, if the comparison is successful, carrying out safety helmet in the current second image, and marking the second image;
acquiring the marked second image, determining the horizontal ground position in the image, and, in combination with the specific position of the safety helmet in the image, establishing a rectangular marking frame whose length is the vertical distance H between the safety helmet and the horizontal ground and whose width is a fixed value W, and performing safety helmet wearing recognition according to the characteristics of the rectangular marking frame, wherein W ranges from 35.1 cm to 37.5 cm (W being the shoulder width of an adult), H ranges from 150 cm to 200 cm (H being the height of an adult), and within these ranges H and W may take any value;
extracting the characteristics of the rectangular marking frame, connecting the diagonals of the rectangular frame, recorded as l_1 and l_2 respectively, and determining the angle θ_1 formed between diagonal l_1 and diagonal l_2; the value of θ_1 is calculated by a formula from W and H, where W may take any value in its range, and H_min = 150 cm and H_max = 200 cm are substituted to calculate θ_min and θ_max; when θ_1 satisfies θ_min < θ_1 < θ_max and the distance between the center point of the region where the recognition succeeded and the highest point of the contour of the moving object is smaller than a preset value, the current safety helmet wearing recognition passes; otherwise it does not pass, the region to which the image belongs is determined through the marked image, and a helmet-not-worn early warning is issued for that region.
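Purely as a worked illustration under the same assumed formula θ_1 = 2·arctan(W/H) (not the patent's own formula, which is not reproduced here), substituting H_min = 150 cm and H_max = 200 cm for a fixed assumed W gives the acceptance interval:

```python
# Worked illustration (assumption): theta_min / theta_max from H_min = 150 cm and
# H_max = 200 cm, assuming theta = 2 * atan(W / H) and an assumed W of 37.5 cm.
import math

W = 37.5                                              # cm, assumed fixed frame width
theta_max = math.degrees(2 * math.atan(W / 150.0))    # smaller H -> larger angle (~28.1 deg)
theta_min = math.degrees(2 * math.atan(W / 200.0))    # larger H -> smaller angle  (~21.2 deg)
print(f"accept theta_1 in ({theta_min:.1f} deg, {theta_max:.1f} deg)")
```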
In summary, the method obtains a safety helmet recognition model through training; obtains a real-time video stream and judges whether a moving object exists in an image in combination with the two-frame difference method; identifies the same moving object in different images and marks images according to the movement analysis result of the moving object; recognizes and compares the marked images in combination with the safety helmet recognition model, establishes a rectangular marking frame at the position where the safety helmet exists, and performs safety helmet wearing recognition according to the safety helmet recognition comparison result, the comparison of the contour highest point and contour center point of the moving object, and the comparison of the angle formed by the diagonals of the rectangular marking frame.
It should be appreciated that embodiments of the invention may be implemented or realized by computer hardware, a combination of hardware and software, or by computer instructions stored in a non-transitory computer readable memory. The methods may be implemented in a computer program using standard programming techniques, including a non-transitory computer readable storage medium configured with a computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner, in accordance with the methods and drawings described in the specific embodiments. Each program may be implemented in a high level procedural or object oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language. Furthermore, the program can be run on a programmed application specific integrated circuit for this purpose.
Furthermore, the operations of the processes described herein may be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The processes (or variations and/or combinations thereof) described herein may be performed under control of one or more computer systems configured with executable instructions, and may be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications), by hardware, or combinations thereof, collectively executing on one or more processors. The computer program includes a plurality of instructions executable by one or more processors.
Further, the method may be implemented in any type of computing platform operatively connected to a suitable computing platform, including, but not limited to, a personal computer, mini-computer, mainframe, workstation, network or distributed computing environment, separate or integrated computer platform, or in communication with a charged particle tool or other imaging device, and so forth. Aspects of the invention may be implemented in machine-readable code stored on a non-transitory storage medium or device, whether removable or integrated into a computing platform, such as a hard disk, optical read and/or write storage medium, RAM, ROM, etc., such that it is readable by a programmable computer, which when read by a computer, is operable to configure and operate the computer to perform the processes described herein. Further, the machine readable code, or portions thereof, may be transmitted over a wired or wireless network. When such media includes instructions or programs that, in conjunction with a microprocessor or other data processor, implement the steps described above, the invention described herein includes these and other different types of non-transitory computer-readable storage media. The invention also includes the computer itself when programmed according to the methods and techniques of the present invention. The computer program can be applied to the input data to perform the functions described herein, thereby converting the input data to generate output data that is stored to the non-volatile memory. The output information may also be applied to one or more output devices such as a display. In a preferred embodiment of the invention, the transformed data represents physical and tangible objects, including specific visual depictions of physical and tangible objects produced on a display.
As used in this application, the terms "component," "module," "system," and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, or software in execution. For example, the components may be, but are not limited to: a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of example, both an application running on a computing device and the computing device can be a component. One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers. Furthermore, these components can execute from various computer readable media having various data structures thereon. The components may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the internet with other systems by way of the signal).
It should be noted that the above embodiments are only for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that the technical solution of the present invention may be modified or substituted without departing from the spirit and scope of the technical solution of the present invention, which is intended to be covered in the scope of the claims of the present invention.

Claims (1)

1. A safety early warning method based on intelligent video identification, characterized by comprising:
based on a convolutional neural network, acquiring a safety helmet recognition model through training of a large sample;
collecting video images in a certain area through a camera;
identifying a moving object in the acquired video image by using a two-frame difference method;
carrying out safety helmet wearing recognition on the recognized moving object by combining a safety helmet recognition model;
the obtaining the safety helmet recognition model through large-sample training based on a convolutional neural network comprises: collecting safety helmet data as sample data, mirroring the positive samples, generating and labeling a positive/negative sample index catalogue, building a convolutional neural network, training the convolutional neural network with the sample data, and saving the trained convolutional neural network as the safety helmet recognition model;
the identifying the moving object in the collected video image by utilizing the two-frame difference method comprises the following steps: calling a camera to acquire a real-time video stream, and recording the nth frame and the n-1 frame images as f n 、f n-1 The gray value of the corresponding pixel point of two frames is marked as f n (x, y) and f n-1 (x, y) obtaining a difference image D by calculating the absolute value of the gray value difference value of the corresponding pixel points of the two frames of images n
D n (x,y)=|f n (x,y)-f n-1 (x,y)|
By threshold comparison, when D n When (x, y) exceeds a set threshold value, identifying that a moving object exists in the current nth frame image;
the helmet wearing recognition of the recognized moving object by combining the helmet recognition model comprises the following steps: marking images with moving objects as first images, obtaining the outlines of the moving objects in the first images, determining the central point positions of the outlines of the moving objects, further obtaining time nodes corresponding to the first images, sequencing the first images according to the time nodes, performing movement analysis on the moving objects in the first images adjacent to the time nodes, and further marking the first images according to movement analysis results, wherein the marked first images are second images;
the determining the center point position of the outline of the moving object comprises the following steps: according to the contour of the moving object, a first straight line between the highest point of the moving object and the horizontal ground is obtained, and the length of the first straight line is H max And the first straight line is vertical to the horizontal ground, and the length from the first straight line to the horizontal ground is determined to beThe two points of the profile of the moving object are marked as a first profile edge and a second profile edge, two points of the first profile edge and the second profile edge which are farthest from the first line are obtained, the two points are made to be straight lines and are perpendicular to the horizontal ground, the highest point is made to be straight lines and are parallel to the horizontal ground, a rectangle is enclosed between the made straight lines and the horizontal ground, and the intersection point of the rectangle diagonal lines is the central point position of the profile of the moving object;
the moving object in the first images adjacent to the time node is subjected to movement analysis, the first images are further marked according to the movement analysis result, and the marked first images are second images, and the method comprises the following steps: calculating the time interval between two adjacent time nodes, and recording as t n The two time nodes are respectively marked as a starting time node and a stopping time node, if the time interval t n If the time interval is smaller than the time interval threshold value, the time interval t between the adjacent time node and the initial time node is taken down n+1 Determining the same moving object in the first image according to the contour of the moving object and the position of the central point until the time interval is greater than or equal to a time interval threshold, acquiring the position change distance of the same moving object, and marking the first image with the same moving object as a second image when the position change distance of the same moving object is greater than or equal to a change distance threshold, wherein the change distance threshold is the product of the preset moving speed and the time interval;
the helmet wearing recognition of the recognized moving object by combining the helmet recognition model further comprises the following steps: acquiring a second image with a moving object, carrying out identification comparison on the second image by combining a safety helmet identification model, if the current identification comparison is successful, further carrying out color feature extraction on an identification comparison successful area in the second image, comparing the extracted second color feature with a preset first color feature, if the comparison is successful, carrying out safety helmet in the current second image, and marking the second image;
the helmet wearing recognition of the recognized moving object by combining the helmet recognition model further comprises the following steps: acquiring a marked second image, determining the horizontal ground position in the image, combining the specific position of the safety helmet in the image, and establishing a rectangular mark frame with the vertical distance H between the safety helmet and the horizontal ground and a certain value W, wherein the length of the rectangular mark frame is H, the width of the rectangular mark frame is W, and carrying out wearing identification of the safety helmet according to the characteristics of the rectangular mark frame;
extracting the characteristics of the rectangular marking frame, connecting the diagonals of the rectangular marking frame, recorded as l_1 and l_2 respectively, determining the angle θ_1 formed between diagonal l_1 and diagonal l_2, and calculating the value of θ_1 by a formula from the length H and width W of the frame; when θ_1 satisfies θ_min < θ_1 < θ_max and the distance between the center point of the region where the recognition comparison succeeded and the highest point of the contour of the moving object is smaller than a preset value, the current safety helmet wearing recognition passes; otherwise it does not pass, the region to which the image belongs is determined through the marked image, and a helmet-not-worn early warning is issued for that region.
CN202111140674.8A — priority date 2021-09-28, filing date 2021-09-28 — Safety early warning method based on intelligent video identification — granted as CN113838094B — Active

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111140674.8A CN113838094B (en) 2021-09-28 2021-09-28 Safety early warning method based on intelligent video identification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111140674.8A CN113838094B (en) 2021-09-28 2021-09-28 Safety early warning method based on intelligent video identification

Publications (2)

Publication Number Publication Date
CN113838094A CN113838094A (en) 2021-12-24
CN113838094B true CN113838094B (en) 2024-03-05

Family

ID=78970786

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111140674.8A Active CN113838094B (en) 2021-09-28 2021-09-28 Safety early warning method based on intelligent video identification

Country Status (1)

Country Link
CN (1) CN113838094B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114758363B (en) * 2022-06-16 2022-08-19 四川金信石信息技术有限公司 Insulating glove wearing detection method and system based on deep learning

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109241871A (en) * 2018-08-16 2019-01-18 北京此时此地信息科技有限公司 A kind of public domain stream of people's tracking based on video data
CN109255298A (en) * 2018-08-07 2019-01-22 南京工业大学 Safety cap detection method and system in a kind of dynamic background
CN110188724A (en) * 2019-06-05 2019-08-30 中冶赛迪重庆信息技术有限公司 The method and system of safety cap positioning and color identification based on deep learning
CN110263609A (en) * 2019-01-27 2019-09-20 杭州品茗安控信息技术股份有限公司 A kind of automatic identifying method of safety cap wear condition
CN110472531A (en) * 2019-07-29 2019-11-19 腾讯科技(深圳)有限公司 Method for processing video frequency, device, electronic equipment and storage medium
US10528818B1 (en) * 2013-03-14 2020-01-07 Hrl Laboratories, Llc Video scene analysis system for situational awareness
CN112084986A (en) * 2020-09-16 2020-12-15 国网福建省电力有限公司营销服务中心 Real-time safety helmet detection method based on image feature extraction
CN112184773A (en) * 2020-09-30 2021-01-05 华中科技大学 Helmet wearing detection method and system based on deep learning
CN112487963A (en) * 2020-11-27 2021-03-12 新疆爱华盈通信息技术有限公司 Wearing detection method and system for safety helmet
CN113297900A (en) * 2021-04-02 2021-08-24 中国地质大学(武汉) Method, device, equipment and storage medium for identifying video stream safety helmet based on YOLO

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Wang Yang, "Research on the Application of Broad Learning Algorithms in Wharf Safety Assurance", China Master's Theses Full-text Database — Engineering Science and Technology I, B026-94. *
Yang Lei et al., "Network Video Surveillance Technology", Communication University of China Press, 2017, pp. 175-176. *

Also Published As

Publication number Publication date
CN113838094A (en) 2021-12-24

Legal Events

Date Code Title Description
PB01 — Publication
SE01 — Entry into force of request for substantive examination
GR01 — Patent grant