CN113838094A - Safety early warning method based on intelligent video identification - Google Patents

Safety early warning method based on intelligent video identification

Info

Publication number
CN113838094A
Authority
CN
China
Prior art keywords
image
moving object
identification
safety helmet
straight line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111140674.8A
Other languages
Chinese (zh)
Other versions
CN113838094B (en)
Inventor
杜泽新
左天才
贺亚山
宋尔进
曾体健
崔珂伟
张孙蓉
张玉吉
李林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guizhou Wujiang Hydropower Development Co Ltd
Original Assignee
Guizhou Wujiang Hydropower Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guizhou Wujiang Hydropower Development Co Ltd filed Critical Guizhou Wujiang Hydropower Development Co Ltd
Priority to CN202111140674.8A priority Critical patent/CN113838094B/en
Publication of CN113838094A publication Critical patent/CN113838094A/en
Application granted granted Critical
Publication of CN113838094B publication Critical patent/CN113838094B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Abstract

The invention discloses a safety early warning method based on intelligent video identification. A safety helmet identification model is obtained by training a convolutional neural network; a video image is acquired through a camera; a moving object in the video image is identified using the inter-frame difference method; and safety helmet wearing identification is carried out on the moving object in combination with the safety helmet identification model. In the method, the safety helmet identification model is obtained through training, a real-time video stream is acquired, the moving object in the video image is identified and subjected to movement analysis, identification comparison is carried out according to the safety helmet identification model, a rectangular marking frame is established at the position of the safety helmet, and safety helmet wearing identification is performed based on the helmet identification comparison result, the comparison between the highest point of the contour of the moving object and the contour center point, and the comparison of the angle formed by the diagonals of the rectangular marking frame.

Description

Safety early warning method based on intelligent video identification
Technical Field
The invention relates to the technical field of intelligent video identification, in particular to a safety early warning method based on intelligent video identification.
Background
Nowadays more and more attention is paid to production safety, and enterprises take various measures to keep their staff safe. Nevertheless, workers still perform dangerous work without wearing safety helmets, and safety accidents caused by violations of safety standards on construction and operation sites occur frequently. Managing safety helmet wearing has therefore become a major difficulty: safety problems caused by not wearing a helmet are common, while manual supervision is costly and inefficient.
Disclosure of Invention
This section is for the purpose of summarizing some aspects of embodiments of the invention and to briefly introduce some preferred embodiments. In this section, as well as in the abstract and the title of the invention of this application, simplifications or omissions may be made to avoid obscuring the purpose of the section, the abstract and the title, and such simplifications or omissions are not intended to limit the scope of the invention.
The present invention has been made in view of the above-mentioned conventional problems.
Therefore, the technical problem solved by the invention is as follows: existing safety helmet wearing recognition locates the human face based on facial features, and traversing the whole image with a complex algorithm is time-consuming, labor-intensive and has poor real-time performance; moreover, the construction-site environment is complex, so recognition based on color features alone is not accurate.
In order to solve the technical problems, the invention provides the following technical scheme: a safety early warning method based on intelligent video identification comprises the following steps,
based on a convolutional neural network, obtaining a safety helmet identification model through training of a large sample;
acquiring a video image in a certain area through a camera;
identifying a moving object in the collected video image by using a difference method between two frames;
and carrying out safety helmet wearing identification on the identified moving object by combining a safety helmet identification model.
As an optimal scheme of the safety early warning method based on intelligent video identification, the method comprises the following steps: The obtaining of the safety helmet identification model through training on a large sample based on the convolutional neural network comprises: collecting safety helmet data as sample data, mirroring the positive samples, generating and labeling positive-sample and negative-sample index catalogs, building a convolutional neural network, training the convolutional neural network with the sample data, and saving the trained convolutional neural network as the safety helmet identification model.
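A minimal training sketch of this step is given below. The patent does not specify the network architecture, image size, or directory layout, so the layer sizes, the "samples/positive" and "samples/negative" folders, and the saved file name are assumptions used only for illustration.

```python
# Minimal sketch of training the helmet identification model (assumed
# architecture and directory layout; the patent specifies neither).
import tensorflow as tf

IMG_SIZE = (64, 64)

def build_helmet_cnn(num_classes: int = 2) -> tf.keras.Model:
    """Small CNN that classifies an image crop as helmet / no-helmet."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(*IMG_SIZE, 3)),
        tf.keras.layers.RandomFlip("horizontal"),   # mirrors samples, as in the text
        tf.keras.layers.Rescaling(1.0 / 255),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# "samples/positive" and "samples/negative" stand in for the positive- and
# negative-sample index catalogs; labels are inferred from the folder names.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "samples", image_size=IMG_SIZE, batch_size=32)

model = build_helmet_cnn()
model.fit(train_ds, epochs=10)
model.save("helmet_model.keras")   # saved as the helmet identification model
```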
As an optimal scheme of the safety early warning method based on intelligent video identification, the method comprises the following steps: the moving object in the collected video image is identified by utilizing the difference method between two framesThe method comprises the following steps: calling a camera to obtain a real-time video stream, and recording the image of the nth frame and the image of the (n-1) th frame as fn、fn-1The gray value of the corresponding pixel points of two frames is recorded as fn(x, y) and fn-1(x, y), calculating the absolute value of the gray value difference of the corresponding pixel points of the two frames of images to obtain a difference image Dn
Dn(x,y)=|fn(x,y)-fn-1(x,y)|
By threshold value comparison, when DnAnd (x, y) when the (x, y) exceeds the set threshold, identifying that a moving object exists in the current nth frame image.
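A short sketch of this two-frame difference test follows, using OpenCV. The pixel threshold and the "enough changed pixels" rule are assumptions; the text only says D_n(x, y) is compared against a set threshold.

```python
# Sketch of the inter-frame difference test described above (OpenCV/NumPy).
import cv2
import numpy as np

def has_moving_object(prev_gray: np.ndarray, curr_gray: np.ndarray,
                      pixel_thresh: int = 25, min_changed: int = 500) -> bool:
    diff = cv2.absdiff(curr_gray, prev_gray)          # D_n = |f_n - f_{n-1}|
    changed = np.count_nonzero(diff > pixel_thresh)   # pixels over threshold
    return changed >= min_changed

cap = cv2.VideoCapture(0)                             # real-time video stream
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if has_moving_object(prev_gray, gray):
        pass  # hand this frame to the helmet-wearing identification stage
    prev_gray = gray
cap.release()
```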
As an optimal scheme of the safety early warning method based on intelligent video identification, the method comprises the following steps: the safety helmet wearing identification of the identified moving object by combining the safety helmet identification model comprises the following steps: the method comprises the steps of marking an image with a moving object as a first image, obtaining a contour of the moving object in the first image, determining the position of a center point of the contour of the moving object, further obtaining a time node corresponding to the first image, sequencing the first image according to the time node, carrying out movement analysis on the moving object in the first image adjacent to the time node, further marking the first image according to the movement analysis result, and enabling the marked first image to be a second image.
As an optimal scheme of the safety early warning method based on intelligent video identification, the method comprises the following steps: The determining of the position of the center point of the contour of the moving object comprises: according to the contour of the moving object, obtaining a first straight line between the highest point of the moving object and the horizontal ground, the length of the first straight line being H_max and the first straight line being perpendicular to the horizontal ground; determining two points on the first straight line whose heights above the horizontal ground are given by
[two formula images, BDA0003283640910000021 and BDA0003283640910000022, not reproduced in the source text]
making a second straight line and a third straight line through these two points, both perpendicular to the first straight line; the second straight line and the third straight line intercept two section contour edges of the contour of the moving object, recorded as a first contour edge and a second contour edge; obtaining the two points of the first contour edge and the second contour edge that are farthest from the first straight line; making straight lines through these two points perpendicular to the horizontal ground, and making a straight line through the highest point parallel to the horizontal ground; a rectangle is enclosed between the straight lines so made and the horizontal ground, and the intersection point of the diagonals of this rectangle is the position of the center point of the contour of the moving object.
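The straight-line construction above amounts to taking an axis-aligned box spanned by the contour's highest point and its widest extent, whose diagonals cross at the box center. The sketch below condenses that with cv2.boundingRect; this is an assumed implementation shortcut, not the patent's literal procedure.

```python
# Sketch of obtaining the contour center point and highest point.
import cv2
import numpy as np

def contour_center_and_top(contour: np.ndarray):
    x, y, w, h = cv2.boundingRect(contour)       # enclosing rectangle
    center = (x + w / 2.0, y + h / 2.0)          # intersection of the diagonals
    top_idx = contour[:, :, 1].argmin()          # highest point (smallest image y)
    highest = tuple(contour[top_idx][0])
    return center, highest

# contours would come from the thresholded difference image D_n, e.g.:
# contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
```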
As an optimal scheme of the safety early warning method based on intelligent video identification, the method comprises the following steps: The carrying out of movement analysis on the moving object in time-adjacent first images, the further marking of the first images according to the movement analysis result, and the marked first image being a second image, comprise the following steps: calculating the time interval between two adjacent time nodes, recorded as t_n, the two time nodes being recorded as a start time node and an end time node; if the time interval t_n is less than the time-interval threshold, taking the time interval t_{n+1} between the next adjacent time node and the start time node, until the time interval is greater than or equal to the time-interval threshold; determining the same moving object in the first images according to the contour of the moving object and the position of the center point; acquiring the position change distance of the same moving object; and when the position change distance of the same moving object is greater than or equal to a change-distance threshold, marking the first images with the same moving object as second images, wherein the change-distance threshold is the product of a preset moving speed and the time interval.
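A sketch of this movement test is given below. The preset speed of 1.25 m/s comes from the embodiment later in the text; the time-interval threshold, the pixel-to-meter factor, and matching "the same moving object" by its center point are assumptions.

```python
# Sketch of the movement analysis between time-adjacent first images.
from dataclasses import dataclass
from typing import Optional

PRESET_SPEED = 1.25           # m/s, normal adult walking speed (from the text)
PIXELS_PER_METER = 100.0      # assumed camera calibration factor

@dataclass
class Detection:
    timestamp: float          # time node of the first image, in seconds
    center: tuple             # contour center point, in pixels

def is_second_image(prev: Detection, curr: Detection,
                    min_interval: float = 0.5) -> Optional[bool]:
    dt = curr.timestamp - prev.timestamp
    if dt < min_interval:     # below the time-interval threshold: keep waiting
        return None
    dx = curr.center[0] - prev.center[0]
    dy = curr.center[1] - prev.center[1]
    moved = (dx * dx + dy * dy) ** 0.5 / PIXELS_PER_METER
    return moved >= PRESET_SPEED * dt   # change-distance threshold = speed * dt
```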
As an optimal scheme of the safety early warning method based on intelligent video identification, the method comprises the following steps: The safety helmet wearing identification of the identified moving object in combination with the safety helmet identification model further comprises: obtaining a second image with a moving object; carrying out identification comparison on the second image in combination with the safety helmet identification model; if the current identification comparison is successful, further carrying out color feature extraction on the successfully compared region in the second image and comparing the extracted second color feature with a preset first color feature; and if this comparison is also successful, marking the second image, indicating that a safety helmet is present in the current second image.
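The patent does not define the color features; one plausible reading is a hue-histogram comparison, sketched below under that assumption. The histogram size and the correlation threshold are likewise assumed.

```python
# Sketch of the color-feature comparison for the region matched by the
# helmet identification model (HSV hue histogram, correlation metric).
import cv2
import numpy as np

def hue_histogram(bgr_region: np.ndarray) -> np.ndarray:
    hsv = cv2.cvtColor(bgr_region, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [32], [0, 180])
    return cv2.normalize(hist, hist).flatten()

def color_matches(region: np.ndarray, preset_hist: np.ndarray,
                  min_corr: float = 0.7) -> bool:
    corr = cv2.compareHist(hue_histogram(region).astype(np.float32),
                           preset_hist.astype(np.float32),
                           cv2.HISTCMP_CORREL)
    return corr >= min_corr   # success: mark the second image as helmet-present
```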
As an optimal scheme of the safety early warning method based on intelligent video identification, the method comprises the following steps: The safety helmet wearing identification of the identified moving object in combination with the safety helmet identification model further comprises: acquiring the marked second image; determining the position of the horizontal ground in the image; in combination with the specific position of the safety helmet in the image, establishing a rectangular marking frame whose length is the vertical distance H between the safety helmet and the horizontal ground and whose width is a certain value W; and carrying out safety helmet wearing identification according to the characteristics of the rectangular marking frame.
As an optimal scheme of the safety early warning method based on intelligent video identification, the method comprises the following steps: the identifying of the wearing of the safety helmet according to the characteristics of the rectangular marking frame comprises the following steps:
extracting the characteristics of the rectangular marking frame; connecting the diagonals of the rectangular marking frame, recorded as l_1 and l_2; and determining the angle θ_1 between the diagonal l_1 and the diagonal l_2 by the formula
[formula image BDA0003283640910000031, not reproduced in the source text]
to obtain θ_1. When the value of θ_1 satisfies θ_min < θ_1 < θ_max and the distance between the center point of the successfully compared region and the highest point of the contour of the moving object is smaller than a preset value, the current safety helmet wearing identification is passed; otherwise it is not passed. The region to which the image belongs is determined through the marked image, and a safety-helmet-not-worn early warning is issued for that region.
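The formula image for θ_1 is not reproduced in the source. For a rectangle of length H and width W, the angle between its two diagonals (measured across the shorter side) can be taken as θ_1 = 2·arctan(W / H); the sketch below uses that relation as an assumption, together with the distance check against the contour's highest point.

```python
# Sketch of the final wearing decision (assumed theta_1 = 2*arctan(W/H)).
import math

def diagonal_angle(h: float, w: float) -> float:
    """Angle between the diagonals of an H-by-W rectangle, in radians."""
    return 2.0 * math.atan2(w, h)

def helmet_worn(h: float, w: float,
                region_center: tuple, contour_top: tuple,
                theta_min: float, theta_max: float,
                max_top_distance: float) -> bool:
    theta1 = diagonal_angle(h, w)
    dx = region_center[0] - contour_top[0]
    dy = region_center[1] - contour_top[1]
    near_top = math.hypot(dx, dy) < max_top_distance   # helmet near the head
    return theta_min < theta1 < theta_max and near_top
```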
The invention has the beneficial effects that: the safety helmet identification model is obtained through training; a real-time video stream is acquired; whether a moving object exists in an image is judged with the inter-frame difference method; the same moving object is identified across different images, and the images are marked according to the movement analysis result of the moving object; the marked images are identified and compared in combination with the safety helmet identification model; a rectangular marking frame is established at the position where the safety helmet exists; and safety helmet wearing is identified according to the helmet identification comparison result, the comparison between the highest point of the contour of the moving object and its contour center point, and the angle formed by the diagonals of the rectangular marking frame.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise. Wherein:
fig. 1 is a schematic diagram illustrating steps of a security early warning method based on intelligent video identification according to the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, specific embodiments accompanied with figures are described in detail below, and it is apparent that the described embodiments are a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making creative efforts based on the embodiments of the present invention, shall fall within the protection scope of the present invention.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced in other ways than those specifically described and will be readily apparent to those of ordinary skill in the art without departing from the spirit of the present invention, and therefore the present invention is not limited to the specific embodiments disclosed below.
Furthermore, reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one implementation of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments.
The present invention will be described in detail with reference to the drawings, wherein the cross-sectional views illustrating the structure of the device are not enlarged partially in general scale for convenience of illustration, and the drawings are only exemplary and should not be construed as limiting the scope of the present invention. In addition, the three-dimensional dimensions of length, width and depth should be included in the actual fabrication.
Meanwhile, in the description of the present invention, it should be noted that the terms "upper, lower, inner and outer" and the like indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of describing the present invention and simplifying the description, but do not indicate or imply that the referred device or element must have a specific orientation, be constructed in a specific orientation and operate, and thus, cannot be construed as limiting the present invention. Furthermore, the terms first, second, or third are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
The terms "mounted, connected and connected" in the present invention are to be understood broadly, unless otherwise explicitly specified or limited, for example: can be fixedly connected, detachably connected or integrally connected; they may be mechanically, electrically, or directly connected, or indirectly connected through intervening media, or may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
Example 1
Referring to fig. 1, the embodiment provides a safety early warning method based on intelligent video identification, including that a safety helmet identification model is obtained through training of a large sample based on a convolutional neural network;
acquiring a video image in a certain area through a camera;
identifying a moving object in the collected video image by using a difference method between two frames;
and carrying out safety helmet wearing identification on the identified moving object by combining a safety helmet identification model.
Obtaining the safety helmet identification model through training on a large sample based on the convolutional neural network comprises: collecting safety helmet data as sample data, mirroring the positive samples, generating and labeling positive-sample and negative-sample index catalogs, building a convolutional neural network, training the convolutional neural network with the sample data, and saving the trained convolutional neural network as the safety helmet identification model.
The identification of the moving object in the collected video image using the inter-frame difference method comprises the following steps: calling a camera to obtain a real-time video stream; recording the n-th frame image and the (n-1)-th frame image as f_n and f_{n-1}, and the gray values of their corresponding pixel points as f_n(x, y) and f_{n-1}(x, y); and calculating the absolute value of the gray-value difference of the corresponding pixel points of the two frames to obtain the difference image D_n:
D_n(x, y) = |f_n(x, y) - f_{n-1}(x, y)|
By threshold comparison, when D_n(x, y) exceeds the set threshold, a moving object is identified in the current n-th frame image.
The safety helmet wearing identification of the identified moving object by combining the safety helmet identification model comprises the following steps: the method comprises the steps of marking an image with a moving object as a first image, obtaining a contour of the moving object in the first image, determining the position of a center point of the contour of the moving object, further obtaining a time node corresponding to the first image, sequencing the first image according to the time node, carrying out movement analysis on the moving object in the first image adjacent to the time node, further marking the first image according to the movement analysis result, and enabling the marked first image to be a second image.
Determining the position of the center point of the contour of the moving object comprises the following steps: according to the contour of the moving object, a first straight line between the highest point of the moving object and the horizontal ground is obtained, the length of the first straight line being H_max and the first straight line being perpendicular to the horizontal ground; two points on the first straight line are determined whose heights above the horizontal ground are given by
[formula image BDA0003283640910000061, not reproduced in the source text]
a second straight line and a third straight line are made through these two points, both perpendicular to the first straight line; the second straight line and the third straight line intercept two section contour edges of the contour of the moving object, recorded as a first contour edge and a second contour edge; the two points of the first contour edge and the second contour edge farthest from the first straight line are obtained; straight lines through these two points are made perpendicular to the horizontal ground, and a straight line through the highest point is made parallel to the horizontal ground; a rectangle is enclosed between the straight lines so made and the horizontal ground, and the intersection point of the diagonals of this rectangle is the position of the center point of the contour of the moving object.
Performing movement analysis on the moving object in time-adjacent first images, further marking the first images according to the movement analysis result, the marked first image being a second image, comprises the following steps: calculating the time interval between two adjacent time nodes, recorded as t_n, the two time nodes being recorded as a start time node and an end time node; if the time interval t_n is less than the time-interval threshold, taking the time interval t_{n+1} between the next adjacent time node and the start time node, until the time interval is greater than or equal to the time-interval threshold; determining the same moving object in the first images according to the contour of the moving object and the position of the center point; acquiring the position change distance of the same moving object; and when the position change distance of the same moving object is greater than or equal to a change-distance threshold, marking the first images with the same moving object as second images, wherein the change-distance threshold is the product of the preset moving speed and the time interval.
The safety helmet wearing identification of the identified moving object in combination with the safety helmet identification model further comprises the following steps: acquiring a second image with a moving object; identifying and comparing the second image in combination with the safety helmet identification model; if the current identification comparison is successful, further extracting color features from the successfully compared region in the second image and comparing the extracted second color feature with the preset first color feature; and if this comparison is also successful, marking the second image, indicating that a safety helmet is present in it.
The safety helmet wearing identification of the identified moving object in combination with the safety helmet identification model further comprises the following steps: acquiring the marked second image; determining the position of the horizontal ground in the image; in combination with the specific position of the safety helmet in the image, establishing a rectangular marking frame whose length is the vertical distance H between the safety helmet and the horizontal ground and whose width is a certain value W; and carrying out safety helmet wearing identification according to the characteristics of the rectangular marking frame.
The step of identifying the wearing of the safety helmet according to the characteristics of the rectangular marking frame comprises the following steps:
extracting the characteristics of the rectangular marking frame; connecting the diagonals of the rectangular marking frame, recorded as l_1 and l_2; and determining the angle θ_1 between the diagonal l_1 and the diagonal l_2 by the formula
[formula image BDA0003283640910000071, not reproduced in the source text]
to obtain θ_1. When the value of θ_1 satisfies θ_min < θ_1 < θ_max and the distance between the center point of the successfully compared region and the highest point of the contour of the moving object is smaller than a preset value, the current safety helmet wearing identification is passed; otherwise it is not passed. The region to which the image belongs is determined through the marked image, and a safety-helmet-not-worn early warning is issued for that region.
Example 2
Referring to fig. 1, the embodiment provides a security early warning method based on intelligent video identification, including,
based on a convolutional neural network, through training of a large sample, obtaining a safety helmet identification model:
collecting safety helmet data as sample data, carrying out mirror symmetry on a positive sample, generating a positive sample index catalog and a negative sample index catalog, marking, building a convolutional neural network, training the convolutional neural network by using the sample data, and storing the trained convolutional neural network into a safety helmet identification model;
acquiring a video image in a certain area through a camera;
identifying a moving object in the acquired video image by using a difference method between two frames:
calling a camera to obtain a real-time video stream; recording the n-th frame image and the (n-1)-th frame image as f_n and f_{n-1}, and the gray values of their corresponding pixel points as f_n(x, y) and f_{n-1}(x, y); and calculating the absolute value of the gray-value difference of the corresponding pixel points of the two frames to obtain the difference image D_n:
D_n(x, y) = |f_n(x, y) - f_{n-1}(x, y)|
by threshold comparison, when D_n(x, y) exceeds the set threshold, a moving object is identified in the current n-th frame image;
and (3) carrying out safety helmet wearing identification on the identified moving object by combining a safety helmet identification model:
marking the image with the moving object as a first image; acquiring the contour of the moving object in the first image and determining the position of the center point of the contour of the moving object: according to the contour of the moving object, a first straight line between the highest point and the horizontal ground is obtained, the length of the first straight line being H_max and the first straight line being perpendicular to the horizontal ground, and two points on the first straight line are determined whose heights above the horizontal ground are given by
[formula image BDA0003283640910000081, not reproduced in the source text]
a second straight line and a third straight line are made through these two points, both perpendicular to the first straight line; the second straight line and the third straight line intercept two section contour edges of the contour of the moving object, recorded as a first contour edge and a second contour edge; the two points of the first contour edge and the second contour edge farthest from the first straight line are obtained; straight lines through these two points are made perpendicular to the horizontal ground, and a straight line through the highest point is made parallel to the horizontal ground; a rectangle is enclosed between the straight lines so made and the horizontal ground, and the intersection point of the diagonals of this rectangle is the position of the center point of the contour of the moving object. The time node corresponding to the first image is further acquired, the first images are sorted by time node, movement analysis is performed on the moving object in time-adjacent first images, the first images are further marked according to the movement analysis result, and the marked first image is a second image: the time interval between two adjacent time nodes is calculated and recorded as t_n, the two time nodes being recorded as a start time node and an end time node; if the time interval t_n is less than the time-interval threshold, the time interval t_{n+1} between the next adjacent time node and the start time node is taken, until the time interval is greater than or equal to the time-interval threshold; the same moving object in the first images is determined according to the contour of the moving object and the position of the center point; the position change distance of the same moving object is acquired; and when the position change distance of the same moving object is greater than or equal to a change-distance threshold, the first images with the same moving object are marked as second images, wherein the change-distance threshold is the product of a preset moving speed and the time interval, and the preset moving speed is the normal walking speed of an adult, 1.25 m/s;
acquiring a second image with a moving object, identifying and comparing the second image by combining a safety helmet identification model, if the current identification and comparison are successful, further extracting color features of an area with successful identification and comparison in the second image, comparing the extracted second color features with preset first color features, and if the comparison is successful, marking the second image and indicating that the safety helmet exists in the current second image;
acquiring the marked second image and determining the position of the horizontal ground in the image; in combination with the specific position of the safety helmet in the image, establishing a rectangular marking frame whose length is the vertical distance H between the safety helmet and the horizontal ground and whose width is a certain value W, and carrying out safety helmet wearing identification according to the characteristics of the rectangular marking frame, wherein the value range of W is 351-375 cm (W being the shoulder width of an adult), the value range of H is 150-200 cm (H being the height of an adult), and H and W take any value within their value ranges;
extracting the characteristics of the rectangular marking frame; connecting the diagonals of the rectangular marking frame, recorded as l_1 and l_2; and determining the angle θ_1 between the diagonal l_1 and the diagonal l_2 by the formula
[formula image BDA0003283640910000091, not reproduced in the source text]
to obtain θ_1. In the calculation, W may take any value within its value range and is substituted together with H_min = 150 cm and H_max = 200 cm to calculate θ_min and θ_max. When θ_1 satisfies θ_min < θ_1 < θ_max and the distance between the center point of the successfully compared region and the highest point of the contour of the moving object is smaller than a preset value, the current safety helmet wearing identification is passed; otherwise it is not passed. The region to which the image belongs is determined through the marked image, and a safety-helmet-not-worn early warning is issued for that region.
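Since the θ_1 formula image is not reproduced, the sketch below computes θ_min and θ_max for this embodiment under the same assumed relation θ = 2·arctan(W / H), using H_min = 150 cm and H_max = 200 cm from the text; the example shoulder-width value passed in is an assumption.

```python
# Sketch of deriving theta_min / theta_max for the wearing check in this
# embodiment (assumed theta = 2*arctan(W/H)).
import math

H_MIN_CM, H_MAX_CM = 150.0, 200.0   # adult height range from the text

def theta_bounds(w_cm: float) -> tuple:
    # A larger H gives a smaller diagonal angle, so H_max yields theta_min.
    theta_min = 2.0 * math.atan2(w_cm, H_MAX_CM)
    theta_max = 2.0 * math.atan2(w_cm, H_MIN_CM)
    return theta_min, theta_max

theta_min, theta_max = theta_bounds(40.0)   # 40 cm shoulder width, assumed
```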
In summary, the safety helmet identification model is obtained through training; a real-time video stream is acquired; whether a moving object exists in an image is judged with the inter-frame difference method; the same moving object is identified across different images, and the images are marked according to the movement analysis result of the moving object; the marked images are identified and compared in combination with the safety helmet identification model; a rectangular marking frame is established at the position where the safety helmet exists; and safety helmet wearing is identified according to the helmet identification comparison result, the comparison between the highest point of the contour of the moving object and its contour center point, and the angle formed by the diagonals of the rectangular marking frame.
It should be recognized that embodiments of the present invention can be realized and implemented by computer hardware, a combination of hardware and software, or by computer instructions stored in a non-transitory computer readable memory. The methods may be implemented in a computer program using standard programming techniques, including a non-transitory computer-readable storage medium configured with the computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner, according to the methods and figures described in the detailed description. Each program may be implemented in a high level procedural or object oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language. Furthermore, the program can be run on a programmed application specific integrated circuit for this purpose.
Further, the operations of processes described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The processes described herein (or variations and/or combinations thereof) may be performed under the control of one or more computer systems configured with executable instructions, and may be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) collectively executed on one or more processors, by hardware, or combinations thereof. The computer program includes a plurality of instructions executable by one or more processors.
Further, the method may be implemented in any type of computing platform operatively connected to a suitable interface, including but not limited to a personal computer, mini computer, mainframe, workstation, networked or distributed computing environment, separate or integrated computer platform, or in communication with a charged particle tool or other imaging device, and the like. Aspects of the invention may be embodied in machine-readable code stored on a non-transitory storage medium or device, whether removable or integrated into a computing platform, such as a hard disk, optically read and/or write storage medium, RAM, ROM, or the like, such that it may be read by a programmable computer, which when read by the storage medium or device, is operative to configure and operate the computer to perform the procedures described herein. Further, the machine-readable code, or portions thereof, may be transmitted over a wired or wireless network. The invention described herein includes these and other different types of non-transitory computer-readable storage media when such media include instructions or programs that implement the steps described above in conjunction with a microprocessor or other data processor. The invention also includes the computer itself when programmed according to the methods and techniques described herein. A computer program can be applied to input data to perform the functions described herein to transform the input data to generate output data that is stored to non-volatile memory. The output information may also be applied to one or more output devices, such as a display. In a preferred embodiment of the invention, the transformed data represents physical and tangible objects, including particular visual depictions of physical and tangible objects produced on a display.
As used in this application, the terms "component," "module," "system," and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, or software in execution. For example, a component may be, but is not limited to being: a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of example, both an application running on a computing device and the computing device can be a component. One or more components can reside within a process and/or thread of execution and a component can be localized on one computer and/or distributed between two or more computers. In addition, these components can execute from various computer readable media having various data structures thereon. The components may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the internet with other systems by way of the signal).
It should be noted that the above-mentioned embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention, which should be covered by the claims of the present invention.

Claims (9)

1. A safety early warning method based on intelligent video identification, characterized by comprising:
based on a convolutional neural network, obtaining a safety helmet identification model through training of a large sample;
acquiring a video image in a certain area through a camera;
identifying a moving object in the collected video image by using a difference method between two frames;
and carrying out safety helmet wearing identification on the identified moving object by combining a safety helmet identification model.
2. The intelligent video identification-based safety precaution method according to claim 1, characterized in that: the acquiring of the safety helmet identification model through training of a large sample based on the convolutional neural network comprises the following steps: collecting safety helmet data as sample data, carrying out mirror symmetry on a positive sample, generating a positive sample index catalog and a negative sample index catalog, marking, building a convolutional neural network, training the convolutional neural network by using the sample data, and storing the trained convolutional neural network into a safety helmet identification model.
3. The intelligent video identification-based safety precaution method of claim 2, wherein the identification of the moving object in the collected video image using the inter-frame difference method comprises the following steps: calling a camera to obtain a real-time video stream; recording the n-th frame image and the (n-1)-th frame image as f_n and f_{n-1}, and the gray values of their corresponding pixel points as f_n(x, y) and f_{n-1}(x, y); and calculating the absolute value of the gray-value difference of the corresponding pixel points of the two frames to obtain the difference image D_n:
D_n(x, y) = |f_n(x, y) - f_{n-1}(x, y)|
by threshold comparison, when D_n(x, y) exceeds the set threshold, identifying that a moving object exists in the current n-th frame image.
4. The intelligent video identification-based safety precaution method of claim 3, wherein: the safety helmet wearing identification of the identified moving object by combining the safety helmet identification model comprises the following steps: the method comprises the steps of marking an image with a moving object as a first image, obtaining a contour of the moving object in the first image, determining the position of a center point of the contour of the moving object, further obtaining a time node corresponding to the first image, sequencing the first image according to the time node, carrying out movement analysis on the moving object in the first image adjacent to the time node, further marking the first image according to the movement analysis result, and enabling the marked first image to be a second image.
5. The intelligent video identification-based safety precaution method of claim 4, wherein the determining of the position of the center point of the contour of the moving object comprises: according to the contour of the moving object, obtaining a first straight line between the highest point of the moving object and the horizontal ground, the length of the first straight line being H_max and the first straight line being perpendicular to the horizontal ground; determining two points on the first straight line whose heights above the horizontal ground are given by
[formula image FDA0003283640900000011, not reproduced in the source text]
making a second straight line and a third straight line through these two points, both perpendicular to the first straight line; the second straight line and the third straight line intercept two section contour edges of the contour of the moving object, recorded as a first contour edge and a second contour edge; obtaining the two points of the first contour edge and the second contour edge farthest from the first straight line; making straight lines through these two points perpendicular to the horizontal ground, and making a straight line through the highest point parallel to the horizontal ground; a rectangle is enclosed between the straight lines so made and the horizontal ground, and the intersection point of the diagonals of this rectangle is the position of the center point of the contour of the moving object.
6. The intelligent video identification-based safety precaution method of claim 5, wherein the carrying out of movement analysis on the moving object in time-adjacent first images, the further marking of the first images according to the movement analysis result, and the marked first image being a second image, comprise the following steps: calculating the time interval between two adjacent time nodes, recorded as t_n, the two time nodes being recorded as a start time node and an end time node; if the time interval t_n is less than the time-interval threshold, taking the time interval t_{n+1} between the next adjacent time node and the start time node, until the time interval is greater than or equal to the time-interval threshold; determining the same moving object in the first images according to the contour of the moving object and the position of the center point; acquiring the position change distance of the same moving object; and when the position change distance of the same moving object is greater than or equal to a change-distance threshold, marking the first images with the same moving object as second images, wherein the change-distance threshold is the product of a preset moving speed and the time interval.
7. The intelligent video identification-based safety precaution method of claim 6, wherein: the safety helmet wearing identification of the identified moving object by combining the safety helmet identification model further comprises the following steps: the method comprises the steps of obtaining a second image with a moving object, carrying out identification comparison on the second image by combining a safety helmet identification model, further carrying out color feature extraction on an area with successful identification comparison in the second image if the current identification comparison is successful, comparing the extracted second color feature with a preset first color feature, and marking the second image if the comparison is successful, wherein the safety helmet exists in the second image currently.
8. The intelligent video identification-based safety precaution method of claim 7, wherein: the safety helmet wearing identification of the identified moving object by combining the safety helmet identification model further comprises the following steps: acquiring a marked second image, determining the position of the horizontal ground in the image, establishing a rectangular marking frame by combining the specific position of the safety helmet in the image and the vertical distance H between the safety helmet and the horizontal ground and a certain value W, wherein the length of the rectangular marking frame is H, the width of the rectangular marking frame is W, and carrying out safety helmet wearing identification according to the characteristics of the rectangular marking frame.
9. The intelligent video identification-based safety precaution method of claim 8, wherein: the identifying of the wearing of the safety helmet according to the characteristics of the rectangular marking frame comprises the following steps:
extracting the characteristics of the rectangular marking frame; connecting the diagonals of the rectangular marking frame, recorded as l_1 and l_2; and determining the angle θ_1 between the diagonal l_1 and the diagonal l_2 by the formula
[formula image FDA0003283640900000031, not reproduced in the source text]
to obtain θ_1; when the value of θ_1 satisfies θ_min < θ_1 < θ_max and the distance between the center point of the successfully compared region and the highest point of the contour of the moving object is smaller than a preset value, the current safety helmet wearing identification is passed, otherwise it is not passed; and the region to which the image belongs is determined through the marked image, and a safety-helmet-not-worn early warning is issued for that region.
CN202111140674.8A 2021-09-28 2021-09-28 Safety early warning method based on intelligent video identification Active CN113838094B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111140674.8A CN113838094B (en) 2021-09-28 2021-09-28 Safety early warning method based on intelligent video identification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111140674.8A CN113838094B (en) 2021-09-28 2021-09-28 Safety early warning method based on intelligent video identification

Publications (2)

Publication Number Publication Date
CN113838094A true CN113838094A (en) 2021-12-24
CN113838094B CN113838094B (en) 2024-03-05

Family

ID=78970786

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111140674.8A Active CN113838094B (en) 2021-09-28 2021-09-28 Safety early warning method based on intelligent video identification

Country Status (1)

Country Link
CN (1) CN113838094B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114758363A (en) * 2022-06-16 2022-07-15 四川金信石信息技术有限公司 Insulating glove wearing detection method and system based on deep learning

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109241871A (en) * 2018-08-16 2019-01-18 北京此时此地信息科技有限公司 A kind of public domain stream of people's tracking based on video data
CN109255298A (en) * 2018-08-07 2019-01-22 南京工业大学 Safety cap detection method and system in a kind of dynamic background
CN110188724A (en) * 2019-06-05 2019-08-30 中冶赛迪重庆信息技术有限公司 The method and system of safety cap positioning and color identification based on deep learning
CN110263609A (en) * 2019-01-27 2019-09-20 杭州品茗安控信息技术股份有限公司 A kind of automatic identifying method of safety cap wear condition
CN110472531A (en) * 2019-07-29 2019-11-19 腾讯科技(深圳)有限公司 Method for processing video frequency, device, electronic equipment and storage medium
US10528818B1 (en) * 2013-03-14 2020-01-07 Hrl Laboratories, Llc Video scene analysis system for situational awareness
CN112084986A (en) * 2020-09-16 2020-12-15 国网福建省电力有限公司营销服务中心 Real-time safety helmet detection method based on image feature extraction
CN112184773A (en) * 2020-09-30 2021-01-05 华中科技大学 Helmet wearing detection method and system based on deep learning
CN112487963A (en) * 2020-11-27 2021-03-12 新疆爱华盈通信息技术有限公司 Wearing detection method and system for safety helmet
CN113297900A (en) * 2021-04-02 2021-08-24 中国地质大学(武汉) Method, device, equipment and storage medium for identifying video stream safety helmet based on YOLO

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10528818B1 (en) * 2013-03-14 2020-01-07 Hrl Laboratories, Llc Video scene analysis system for situational awareness
CN109255298A (en) * 2018-08-07 2019-01-22 南京工业大学 Safety cap detection method and system in a kind of dynamic background
CN109241871A (en) * 2018-08-16 2019-01-18 北京此时此地信息科技有限公司 A kind of public domain stream of people's tracking based on video data
CN110263609A (en) * 2019-01-27 2019-09-20 杭州品茗安控信息技术股份有限公司 A kind of automatic identifying method of safety cap wear condition
CN110188724A (en) * 2019-06-05 2019-08-30 中冶赛迪重庆信息技术有限公司 The method and system of safety cap positioning and color identification based on deep learning
CN110472531A (en) * 2019-07-29 2019-11-19 腾讯科技(深圳)有限公司 Method for processing video frequency, device, electronic equipment and storage medium
CN112084986A (en) * 2020-09-16 2020-12-15 国网福建省电力有限公司营销服务中心 Real-time safety helmet detection method based on image feature extraction
CN112184773A (en) * 2020-09-30 2021-01-05 华中科技大学 Helmet wearing detection method and system based on deep learning
CN112487963A (en) * 2020-11-27 2021-03-12 新疆爱华盈通信息技术有限公司 Wearing detection method and system for safety helmet
CN113297900A (en) * 2021-04-02 2021-08-24 中国地质大学(武汉) Method, device, equipment and storage medium for identifying video stream safety helmet based on YOLO

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
杨磊 (Yang Lei) et al.: "网络视频监控技术 [Network Video Surveillance Technology]", Communication University of China Press, pages 175-176 *
王扬 (Wang Yang): "宽度学习算法在码头安全保障中的应用研究 [Research on the Application of Broad Learning Algorithms in Wharf Safety Assurance]", China Master's Theses Full-text Database, Engineering Science and Technology I, pages 026-94 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114758363A (en) * 2022-06-16 2022-07-15 四川金信石信息技术有限公司 Insulating glove wearing detection method and system based on deep learning
CN114758363B (en) * 2022-06-16 2022-08-19 四川金信石信息技术有限公司 Insulating glove wearing detection method and system based on deep learning

Also Published As

Publication number Publication date
CN113838094B (en) 2024-03-05

Similar Documents

Publication Publication Date Title
CN109670441B (en) Method, system, terminal and computer readable storage medium for realizing wearing recognition of safety helmet
CN110188724B (en) Method and system for helmet positioning and color recognition based on deep learning
US20220108607A1 (en) Method of controlling traffic, electronic device, roadside device, cloud control platform, and storage medium
CN110188807B (en) Tunnel pedestrian target detection method based on cascading super-resolution network and improved Faster R-CNN
CN101732055B (en) Method and system for testing fatigue of driver
JP3879732B2 (en) Object detection apparatus, object detection method, and computer program
CN102163287B (en) Method for recognizing characters of licence plate based on Haar-like feature and support vector machine
CN103824452A (en) Lightweight peccancy parking detection device based on full view vision
CN104680557A (en) Intelligent detection method for abnormal behavior in video sequence image
CN110858295A (en) Traffic police gesture recognition method and device, vehicle control unit and storage medium
CN103942539B (en) A kind of oval accurate high efficiency extraction of head part and masking method for detecting human face
CN111126235A (en) Method and device for detecting and processing illegal berthing of ship
CN113158851B (en) Wearing safety helmet detection method and device and computer storage medium
US10650249B2 (en) Method and device for counting pedestrians based on identification of head top of human body
CN111460988A (en) Illegal behavior identification method and device
WO2020233000A1 (en) Facial recognition method and apparatus, and computer-readable storage medium
CN103824114B (en) A kind of pedestrian stream gauge counting method based on cross section traffic statistics and system
CN113838094A (en) Safety early warning method based on intelligent video identification
CN112184773A (en) Helmet wearing detection method and system based on deep learning
CN106570440A (en) People counting method and people counting device based on image analysis
CN109002774A (en) A kind of fatigue monitoring device and method based on convolutional neural networks
CN111524350B (en) Method, system, terminal device and medium for detecting abnormal driving condition of vehicle and road cooperation
CN114140745A (en) Method, system, device and medium for detecting personnel attributes of construction site
CN113919627A (en) Intelligent monitoring method applied to hydro-junction engineering
CN205184784U (en) Machine people goes on patrol

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant