CN114220165A - Automatic alarm method and system based on motion recognition - Google Patents

Automatic alarm method and system based on motion recognition

Info

Publication number
CN114220165A
CN114220165A (application CN202111426217.5A)
Authority
CN
China
Prior art keywords
fighting
camera
image
automatic alarm
coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111426217.5A
Other languages
Chinese (zh)
Other versions
CN114220165B (en)
Inventor
兰雨晴
乔孟阳
王丹星
于艺春
黄永琢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Standard Intelligent Security Technology Co Ltd
Original Assignee
China Standard Intelligent Security Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Standard Intelligent Security Technology Co Ltd filed Critical China Standard Intelligent Security Technology Co Ltd
Priority to CN202111426217.5A priority Critical patent/CN114220165B/en
Publication of CN114220165A publication Critical patent/CN114220165A/en
Application granted granted Critical
Publication of CN114220165B publication Critical patent/CN114220165B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/26Government or public services
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02BCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40Control techniques providing energy savings, e.g. smart controller or presence detection

Landscapes

  • Business, Economics & Management (AREA)
  • Tourism & Hospitality (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Health & Medical Sciences (AREA)
  • Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Development Economics (AREA)
  • Educational Administration (AREA)
  • Strategic Management (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Alarm Systems (AREA)

Abstract

The application provides an automatic alarm method and system based on action recognition, relating to the technical field of public security management. The method first recognizes the actions of people in video captured by a camera to obtain an action recognition result, then judges from that result whether fighting is taking place and, if so, performs an automatic alarm operation. Fighting can thus be recognized promptly and accurately and an automatic alarm raised whenever it occurs, so that fighting incidents are handled effectively, public security management is assisted, and people's personal and property safety is protected.

Description

Automatic alarm method and system based on motion recognition
Technical Field
The application relates to the technical field of public security management, in particular to an automatic alarm method and system based on action recognition.
Background
Fighting refers to deliberate, consciously violent behavior that arises when a conflict between two or more opposing parties escalates to an extreme, with the aim of inflicting physical harm on others. Whatever its cause or purpose, such behavior is irrational, and in serious cases it violates criminal law. How to handle fighting incidents promptly and effectively has therefore become a technical problem urgently in need of a solution.
Disclosure of Invention
In view of the above problems, the present application provides an automatic alarm method and system based on motion recognition that overcome, or at least partially solve, those problems: fighting can be recognized promptly and accurately, and an automatic alarm operation is performed whenever fighting exists, so that fighting incidents are handled effectively. The technical scheme is as follows:
in a first aspect, an automatic alarm method based on motion recognition is provided, which includes the following steps:
identifying the action of a person in a video acquired by a camera to obtain an action identification result;
judging whether a fighting phenomenon exists or not according to the action recognition result;
and if the fighting phenomenon exists, executing automatic alarm operation.
In a possible implementation manner, the recognizing a motion of a person in a video captured by a camera to obtain a motion recognition result includes:
inputting the video collected by the camera into the action recognition model to obtain an action recognition result output by the action recognition model; the motion recognition model is used for extracting spatial features and time sequence features in the video collected by the camera, carrying out depth time sequence feature weighting to obtain motion feature values, and carrying out motion recognition on the video collected by the camera based on the motion feature values.
In one possible implementation manner, the executing an automatic alarm operation if it is determined that a fighting phenomenon exists includes:
if the fighting phenomenon exists, acquiring a corresponding fighting image in the video;
and uploading the corresponding fighting image in the video to a public security network system to execute automatic alarm operation.
In one possible implementation, the method further includes:
identifying the head regions of all persons in the fighting image;
obtaining the coordinate point where people are most densely gathered in the fighting scene according to the head-to-head distances between the persons in the fighting image;
and controlling the illumination intensity of the strong-light flashlight on the camera according to each person's distance from that densest coordinate point, and aiming the camera so that it illuminates that point.
In a possible implementation, a planar rectangular coordinate system is established with the lower-left vertex of the fighting image as the origin, the left border of the image pointing upward as the Y axis, and the lower border pointing rightward as the X axis. The unit length of the X axis is the distance between two horizontally adjacent pixels of the fighting image, and the unit length of the Y axis is the distance between two vertically adjacent pixels, so that the coordinate value of every pixel in the fighting image can be obtained from this coordinate system;
obtaining the coordinate point where people are most densely gathered in the fighting scene from the head-to-head distances between the persons in the fighting image using the following formula:

$$G_a=\sum_{b=1}^{m}\sqrt{\left[\frac{1}{n_a}\sum_{i=1}^{n_a}X_a(i)-\frac{1}{n_b}\sum_{i=1}^{n_b}X_b(i)\right]^2+\left[\frac{1}{n_a}\sum_{i=1}^{n_a}Y_a(i)-\frac{1}{n_b}\sum_{i=1}^{n_b}Y_b(i)\right]^2}$$

Substituting a = 1, 2, ..., m into the above formula, the value of a that minimizes $G_a$ is denoted $a_{\min}$, and the coordinate point where people are most densely gathered in the fighting scene is

$$\left(\frac{1}{n_{a_{\min}}}\sum_{i=1}^{n_{a_{\min}}}X_{a_{\min}}(i),\ \frac{1}{n_{a_{\min}}}\sum_{i=1}^{n_{a_{\min}}}Y_{a_{\min}}(i)\right)$$

where $G_a$ represents the sum of the distances between the head of the a-th person and the heads of all persons in the fighting image; $(X_a(i), Y_a(i))$ represents the i-th pixel coordinate in the a-th person's head region in the recognized fighting image; $(X_b(i), Y_b(i))$ represents the i-th pixel coordinate in the b-th person's head region in the recognized fighting image; $n_a$ represents the number of pixels in the a-th person's head region in the recognized fighting image; $n_b$ represents the number of pixels in the b-th person's head region in the recognized fighting image; and $m$ represents the total number of people in the recognized fighting image.
In one possible implementation, the camera is controlled to align to the most dense coordinate point in the fighting scene according to the following formula:
$$\Delta X=\frac{1}{n_{a_{\min}}}\sum_{i=1}^{n_{a_{\min}}}X_{a_{\min}}(i)-X_0,\qquad \Delta Y=\frac{1}{n_{a_{\min}}}\sum_{i=1}^{n_{a_{\min}}}Y_{a_{\min}}(i)-Y_0$$

where $\Delta X$ represents the horizontal coordinate distance the camera needs to move: if $\Delta X \ge 0$, the camera moves right by $\Delta X$ X-axis unit lengths, and if $\Delta X < 0$, it moves left by $|\Delta X|$ X-axis unit lengths; $\Delta Y$ represents the vertical coordinate distance the camera needs to move: if $\Delta Y \ge 0$, the camera moves up by $\Delta Y$ Y-axis unit lengths, and if $\Delta Y < 0$, it moves down by $|\Delta Y|$ Y-axis unit lengths; and $(X_0, Y_0)$ represents the coordinates of the center of the fighting image.
In one possible implementation, the coordinate point where people are most densely gathered in the fighting scene is denoted

$$\left(\frac{1}{n_{a_{\min}}}\sum_{i=1}^{n_{a_{\min}}}X_{a_{\min}}(i),\ \frac{1}{n_{a_{\min}}}\sum_{i=1}^{n_{a_{\min}}}Y_{a_{\min}}(i)\right)$$

and the illumination intensity of the strong-light flashlight on the camera is controlled according to each person's distance from that point using the following formula:
$$E=E_{\max}\times\frac{1}{m}\sum_{a=1}^{m}\left[1-\frac{\sqrt{\left(\frac{1}{n_a}\sum_{i=1}^{n_a}X_a(i)-\frac{1}{n_{a_{\min}}}\sum_{i=1}^{n_{a_{\min}}}X_{a_{\min}}(i)\right)^2+\left(\frac{1}{n_a}\sum_{i=1}^{n_a}Y_a(i)-\frac{1}{n_{a_{\min}}}\sum_{i=1}^{n_{a_{\min}}}Y_{a_{\min}}(i)\right)^2}}{\sqrt{K^2+L^2}}\right]$$

where $E$ represents the illumination intensity control value of the strong-light flashlight on the camera; $E_{\max}$ represents the maximum illumination intensity of the strong-light flashlight on the camera; $K$ represents the horizontal coordinate width of the fighting image; and $L$ represents the vertical coordinate length of the fighting image.
In a second aspect, an automatic alarm system based on motion recognition is provided, which includes:
the identification module is used for identifying the action of a person in the video acquired by the camera to obtain an action identification result;
the judging module is used for judging whether a fighting phenomenon exists according to the action recognition result;
and the alarm module is used for executing automatic alarm operation if the judgment module judges that the fighting phenomenon exists.
In one possible implementation, the identification module is further configured to:
inputting the video collected by the camera into the action recognition model to obtain an action recognition result output by the action recognition model; the motion recognition model is used for extracting spatial features and time sequence features in the video collected by the camera, carrying out depth time sequence feature weighting to obtain motion feature values, and carrying out motion recognition on the video collected by the camera based on the motion feature values.
In one possible implementation, the alarm module is further configured to:
if the fighting phenomenon exists, acquiring a corresponding fighting image in the video;
and uploading the corresponding fighting image in the video to a public security network system to execute automatic alarm operation.
By means of the above technical scheme, the automatic alarm method based on motion recognition first recognizes the actions of people in video captured by a camera to obtain an action recognition result, then judges from that result whether fighting exists and, if so, performs an automatic alarm operation. Fighting can thus be recognized promptly and accurately and an automatic alarm raised whenever it occurs, so that fighting incidents are handled effectively, public security management is assisted, and people's personal and property safety is protected.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings used in the description of the embodiments of the present application will be briefly described below.
FIG. 1 illustrates a flow diagram of a method for automatic alerting based on motion recognition according to an embodiment of the present application;
FIG. 2 illustrates a block diagram of an automatic alert system based on motion recognition according to an embodiment of the present application;
fig. 3 illustrates a block diagram of an automatic alarm system based on motion recognition according to another embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that such uses are interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the term "include" and its variants are to be read as open-ended terms meaning "including, but not limited to".
An embodiment of the present application provides an automatic alarm method based on motion recognition, and as shown in fig. 1, the automatic alarm method based on motion recognition may include the following steps S101 to S103:
step S101, identifying the motion of a person in a video acquired by a camera to obtain a motion identification result;
step S102, judging whether a fighting phenomenon exists according to the action recognition result, if so, executing step S103; if not, returning to execute the step S101;
and step S103, executing automatic alarm operation.
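The three steps above can be sketched as a minimal loop body in Python; the function and label names here (`recognize_actions`, the "fighting" label) are assumptions for illustration, not names fixed by the application:

```python
def automatic_alarm(frames, recognize_actions, raise_alarm):
    """Recognize actions (S101), judge whether fighting exists (S102),
    and if so execute the automatic alarm operation (S103)."""
    labels = recognize_actions(frames)      # step S101: action recognition result
    fighting = "fighting" in labels         # step S102: judge fighting phenomenon
    if fighting:
        raise_alarm(frames)                 # step S103: automatic alarm operation
    return fighting
```

In a deployment this would run repeatedly over the camera stream, returning to S101 whenever no fighting is detected, exactly as the flow in Fig. 1 describes.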
According to the automatic alarm method based on motion recognition, the actions of people in the video captured by the camera are first recognized to obtain an action recognition result, whether fighting exists is then judged from that result, and if so an automatic alarm operation is performed. Fighting can thus be recognized promptly and accurately and an automatic alarm raised whenever it occurs, so that fighting incidents are handled effectively, public security management is assisted, and people's personal and property safety is protected.
In the embodiment of the present application, a possible implementation is provided: in step S101, recognizing the actions of people in the video captured by the camera specifically means inputting that video into an action recognition model and obtaining the action recognition result the model outputs. The action recognition model extracts the spatial features and time-sequence features of the video captured by the camera, performs depth time-sequence feature weighting to obtain motion feature values, and performs action recognition on the video based on those values. Because depth time-sequence feature weighting assigns larger weights to the key frames of the captured video, key-frame information is fully exploited, which effectively improves the accuracy of action recognition.
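As a rough illustration of the weighting idea described above, the following Python sketch combines per-frame feature vectors into a single motion feature value, giving larger softmax weights to frames with higher key-frame scores; it stands in for the model's weighting stage only and is not the application's actual network:

```python
import math

def weighted_motion_feature(frame_features, key_frame_scores):
    """Weight key frames more heavily: softmax over key-frame scores,
    then a weighted sum of the per-frame feature vectors."""
    exps = [math.exp(s) for s in key_frame_scores]
    total = sum(exps)
    weights = [e / total for e in exps]     # larger score -> larger weight
    dim = len(frame_features[0])
    return [sum(w * f[d] for w, f in zip(weights, frame_features))
            for d in range(dim)]
```

With equal scores every frame contributes equally; raising one frame's key-frame score pulls the motion feature value toward that frame, which is the mechanism the paragraph above attributes to depth time-sequence feature weighting.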
In the embodiment of the present application, a possible implementation is provided: executing the automatic alarm operation in step S103 specifically means acquiring the corresponding fighting images in the video and uploading them to the public security network system. The method can thus interface online with the public security network system, so that fighting is handled quickly and effectively and people's property safety is safeguarded.
A possible implementation manner is provided in the embodiment of the present application, and while the automatic alarm operation is executed in step S103 or after the automatic alarm operation is executed, the following steps a1 to A3 may be further included:
step A1, identifying the head types of all people in the fighting image;
a2, obtaining the most dense coordinate points in the fighting scene according to the head-to-head distance of each person in the fighting image;
and A3, controlling the illumination intensity of the highlight flashlight on the camera according to the distance value of each person in the fighting image from the most dense coordinate point, and irradiating the camera to the most dense coordinate point in the fighting scene.
The head regions of all persons in the fighting image are first identified; the coordinate point where people are most densely gathered in the fighting scene is then obtained from the head-to-head distances between the persons in the image; finally, the illumination intensity of the strong-light flashlight on the camera is controlled according to each person's distance from that point, and the camera is aimed to illuminate it. The fighting can thereby be dealt with in a timely manner, and the fighting in the most serious area can be alleviated to a certain extent.
The embodiment of the application provides a possible implementation of step A2, in which the coordinate point where people are most densely gathered in the fighting scene is obtained from the head-to-head distances between the persons in the fighting image as follows. A planar rectangular coordinate system is established with the lower-left vertex of the fighting image as the origin, the left border of the image pointing upward as the Y axis, and the lower border pointing rightward as the X axis. The unit length of the X axis is the distance between two horizontally adjacent pixels of the fighting image, and the unit length of the Y axis is the distance between two vertically adjacent pixels, so that the coordinate value of every pixel in the fighting image can be obtained from this coordinate system;
the coordinate point where people are most densely gathered in the fighting scene is then obtained from the head-to-head distances between the persons in the fighting image using the following formula:

$$G_a=\sum_{b=1}^{m}\sqrt{\left[\frac{1}{n_a}\sum_{i=1}^{n_a}X_a(i)-\frac{1}{n_b}\sum_{i=1}^{n_b}X_b(i)\right]^2+\left[\frac{1}{n_a}\sum_{i=1}^{n_a}Y_a(i)-\frac{1}{n_b}\sum_{i=1}^{n_b}Y_b(i)\right]^2}$$

Substituting a = 1, 2, ..., m into the above formula, the value of a that minimizes $G_a$ is denoted $a_{\min}$, and the coordinate point where people are most densely gathered in the fighting scene is

$$\left(\frac{1}{n_{a_{\min}}}\sum_{i=1}^{n_{a_{\min}}}X_{a_{\min}}(i),\ \frac{1}{n_{a_{\min}}}\sum_{i=1}^{n_{a_{\min}}}Y_{a_{\min}}(i)\right)$$

where $G_a$ represents the sum of the distances between the head of the a-th person and the heads of all persons in the fighting image; $(X_a(i), Y_a(i))$ represents the i-th pixel coordinate in the a-th person's head region in the recognized fighting image; $(X_b(i), Y_b(i))$ represents the i-th pixel coordinate in the b-th person's head region in the recognized fighting image; $n_a$ represents the number of pixels in the a-th person's head region in the recognized fighting image; $n_b$ represents the number of pixels in the b-th person's head region in the recognized fighting image; and $m$ represents the total number of people in the recognized fighting image.
In this way, the coordinate point where people are most densely gathered is obtained from the head-region coordinates of all persons in the fighting image, which determines the area where the fighting is most serious, so that this area can be dealt with effectively and the handling efficiency of fighting incidents is improved.
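Under the formula above, the densest coordinate point can be computed as follows; head regions are given as lists of pixel coordinates, and the helper names are illustrative:

```python
import math

def head_centroid(head_pixels):
    """Mean coordinate of the n_a pixels (X_a(i), Y_a(i)) in one head region."""
    n = len(head_pixels)
    return (sum(p[0] for p in head_pixels) / n,
            sum(p[1] for p in head_pixels) / n)

def densest_point(head_regions):
    """Evaluate G_a for a = 1..m and return the centroid of the head
    whose summed distance to all head centroids is smallest."""
    centroids = [head_centroid(r) for r in head_regions]

    def G(a):
        xa, ya = centroids[a]
        return sum(math.hypot(xa - xb, ya - yb) for xb, yb in centroids)

    a_min = min(range(len(centroids)), key=G)
    return centroids[a_min]
```

For three heads at x = 0, 1 and 10, the middle head minimizes the summed distance, so its centroid is returned as the densest coordinate point.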
The embodiment of the application provides a possible implementation in which the camera is controlled to aim at the densest coordinate point in the fighting scene using the following formula:

$$\Delta X=\frac{1}{n_{a_{\min}}}\sum_{i=1}^{n_{a_{\min}}}X_{a_{\min}}(i)-X_0,\qquad \Delta Y=\frac{1}{n_{a_{\min}}}\sum_{i=1}^{n_{a_{\min}}}Y_{a_{\min}}(i)-Y_0$$

where $\Delta X$ represents the horizontal coordinate distance the camera needs to move: if $\Delta X \ge 0$, the camera moves right by $\Delta X$ X-axis unit lengths, and if $\Delta X < 0$, it moves left by $|\Delta X|$ X-axis unit lengths; $\Delta Y$ represents the vertical coordinate distance the camera needs to move: if $\Delta Y \ge 0$, the camera moves up by $\Delta Y$ Y-axis unit lengths, and if $\Delta Y < 0$, it moves down by $|\Delta Y|$ Y-axis unit lengths; and $(X_0, Y_0)$ represents the coordinates of the center of the fighting image.
In this embodiment, the camera is controlled to aim at the coordinate point where people are most densely gathered, so that the strong-light illumination operation can be performed once that point is centered: the strong light shines directly onto the place where the fighting is most intense, allowing the fighting incident to be handled effectively.
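The alignment formula reduces to a simple offset computation, sketched here with illustrative names:

```python
def pan_offsets(densest_point, image_center):
    """Delta X / Delta Y the camera must pan so it points at the densest
    coordinate point: positive X = right, positive Y = up (negative
    values mean left / down by the absolute amount)."""
    dx = densest_point[0] - image_center[0]
    dy = densest_point[1] - image_center[1]
    return dx, dy
```

For a densest point at (5, 2) and an image center at (3, 4), the camera pans 2 unit lengths right and 2 unit lengths down.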
The embodiment of the application provides a possible implementation in which the coordinate point where people are most densely gathered in the fighting scene is denoted

$$\left(\frac{1}{n_{a_{\min}}}\sum_{i=1}^{n_{a_{\min}}}X_{a_{\min}}(i),\ \frac{1}{n_{a_{\min}}}\sum_{i=1}^{n_{a_{\min}}}Y_{a_{\min}}(i)\right)$$

and the illumination intensity of the strong-light flashlight on the camera is controlled according to each person's distance from that point using the following formula:

$$E=E_{\max}\times\frac{1}{m}\sum_{a=1}^{m}\left[1-\frac{\sqrt{\left(\frac{1}{n_a}\sum_{i=1}^{n_a}X_a(i)-\frac{1}{n_{a_{\min}}}\sum_{i=1}^{n_{a_{\min}}}X_{a_{\min}}(i)\right)^2+\left(\frac{1}{n_a}\sum_{i=1}^{n_a}Y_a(i)-\frac{1}{n_{a_{\min}}}\sum_{i=1}^{n_{a_{\min}}}Y_{a_{\min}}(i)\right)^2}}{\sqrt{K^2+L^2}}\right]$$

where $E$ represents the illumination intensity control value of the strong-light flashlight on the camera; $E_{\max}$ represents the maximum illumination intensity of the strong-light flashlight on the camera; $K$ represents the horizontal coordinate width of the fighting image; and $L$ represents the vertical coordinate length of the fighting image.
In this embodiment, the illumination intensity of the strong-light flashlight on the camera is controlled according to each person's distance from the densest coordinate point, so the brightness of the strong light reflects how densely people are gathered there: the more concentrated the people, the higher the illumination intensity. This can alleviate, to a certain extent, the fighting in the most densely crowded area, thereby assisting public security management and protecting people's personal and property safety.
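The source renders the intensity formula as an image, so the sketch below encodes one plausible reading of the rule described in the prose, as an assumption rather than the patent's exact formula: intensity approaches E_max as the m people cluster more tightly around the densest point, with distances normalised by the image diagonal sqrt(K^2 + L^2):

```python
import math

def flashlight_intensity(centroids, densest, K, L, E_max):
    """Plausible intensity rule (assumption): average closeness of the m
    person centroids to the densest point, scaled to [0, E_max]."""
    diag = math.hypot(K, L)          # image diagonal sqrt(K^2 + L^2)
    m = len(centroids)
    avg_closeness = sum(1.0 - math.hypot(x - densest[0], y - densest[1]) / diag
                        for x, y in centroids) / m
    return E_max * avg_closeness
```

When everyone stands at the densest point the control value equals E_max, and it falls toward zero as people spread out across the image, matching the "more concentrated, brighter" behavior the paragraph above describes.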
It should be noted that, in practical applications, all the possible embodiments described above may be combined in a combined manner at will to form possible embodiments of the present application, and details are not described here again.
Based on the automatic alarm method based on the action recognition provided by each embodiment, the embodiment of the application also provides an automatic alarm system based on the action recognition based on the same inventive concept.
Fig. 2 illustrates a block diagram of an automatic alarm system based on motion recognition according to an embodiment of the present application. As shown in fig. 2, the automatic alarm system based on motion recognition may include a recognition module 210, a judgment module 220, and an alarm module 230.
The identification module 210 is configured to identify a motion of a person in a video acquired by the camera to obtain a motion identification result;
the judging module 220 is used for judging whether a fighting phenomenon exists according to the action recognition result;
and the alarm module 230 is used for executing automatic alarm operation if the judgment module judges that the fighting phenomenon exists.
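The wiring of the three modules can be sketched as follows; the class and callable names are assumptions, with only the module roles taken from Fig. 2:

```python
class AutoAlarmSystem:
    """Sketch of the recognition / judgement / alarm module wiring."""

    def __init__(self, recognize, judge, alarm):
        self.recognize = recognize   # identification module 210
        self.judge = judge           # judgment module 220
        self.alarm = alarm           # alarm module 230

    def process(self, frames):
        result = self.recognize(frames)      # action recognition result
        if self.judge(result):               # fighting phenomenon exists?
            self.alarm(frames)               # automatic alarm operation
            return True
        return False
```

Each module is injected as a callable, so the illumination warning module of Fig. 3 could be added the same way without changing the core flow.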
In an embodiment of the present application, a possible implementation manner is provided, and the identification module 210 shown in fig. 2 is further configured to:
inputting the video collected by the camera into the action recognition model to obtain an action recognition result output by the action recognition model; the motion recognition model is used for extracting spatial features and time sequence features in the video collected by the camera, carrying out depth time sequence feature weighting to obtain motion feature values, and carrying out motion recognition on the video collected by the camera based on the motion feature values.
In the embodiment of the present application, a possible implementation manner is provided, and the alarm module 230 shown in fig. 2 is further configured to:
if the fighting phenomenon exists, acquiring a corresponding fighting image in the video;
and uploading the corresponding fighting image in the video to a public security network system to execute automatic alarm operation.
In the embodiment of the present application, a possible implementation manner is provided, as shown in fig. 3, the automatic alarm system based on motion recognition shown in fig. 2 may further include an illumination warning module 310, where the illumination warning module 310 is configured to:
identify the head regions of all persons in the fighting image;
obtain the coordinate point where people are most densely gathered in the fighting scene according to the head-to-head distances between the persons in the fighting image;
and control the illumination intensity of the strong-light flashlight on the camera according to each person's distance from that densest coordinate point, and aim the camera so that it illuminates that point.
In an embodiment of the present application, a possible implementation manner is provided, and the illumination warning module 310 shown in fig. 3 is further configured to:
taking the lower left corner vertex of the fighting image as an origin, setting the left frame of the fighting image upwards as a Y axis, setting the lower frame of the fighting image rightwards as an X axis, establishing a planar rectangular coordinate system, setting the unit length of the X axis of the planar rectangular coordinate system as the distance value between two adjacent transverse pixel points of the fighting image, setting the unit length of the Y axis of the planar rectangular coordinate system as the distance value between two adjacent longitudinal pixel points of the fighting image, and further obtaining the coordinate value of each pixel point in the fighting image according to the planar rectangular coordinate system;
obtain the coordinate point where people are most densely gathered in the fighting scene from the head-to-head distances between the persons in the fighting image using the following formula:

$$G_a=\sum_{b=1}^{m}\sqrt{\left[\frac{1}{n_a}\sum_{i=1}^{n_a}X_a(i)-\frac{1}{n_b}\sum_{i=1}^{n_b}X_b(i)\right]^2+\left[\frac{1}{n_a}\sum_{i=1}^{n_a}Y_a(i)-\frac{1}{n_b}\sum_{i=1}^{n_b}Y_b(i)\right]^2}$$

Substituting a = 1, 2, ..., m into the above formula, the value of a that minimizes $G_a$ is denoted $a_{\min}$, and the coordinate point where people are most densely gathered in the fighting scene is

$$\left(\frac{1}{n_{a_{\min}}}\sum_{i=1}^{n_{a_{\min}}}X_{a_{\min}}(i),\ \frac{1}{n_{a_{\min}}}\sum_{i=1}^{n_{a_{\min}}}Y_{a_{\min}}(i)\right)$$

where $G_a$ represents the sum of the distances between the head of the a-th person and the heads of all persons in the fighting image; $(X_a(i), Y_a(i))$ represents the i-th pixel coordinate in the a-th person's head region in the recognized fighting image; $(X_b(i), Y_b(i))$ represents the i-th pixel coordinate in the b-th person's head region in the recognized fighting image; $n_a$ represents the number of pixels in the a-th person's head region in the recognized fighting image; $n_b$ represents the number of pixels in the b-th person's head region in the recognized fighting image; and $m$ represents the total number of people in the recognized fighting image.
In an embodiment of the present application, a possible implementation manner is provided, and the illumination warning module 310 shown in fig. 3 is further configured to:
controlling the camera to align to the most dense coordinate points in the fighting scene according to the following formula:
$$\Delta X=\frac{1}{n_{a_{\min}}}\sum_{i=1}^{n_{a_{\min}}}X_{a_{\min}}(i)-X_0,\qquad \Delta Y=\frac{1}{n_{a_{\min}}}\sum_{i=1}^{n_{a_{\min}}}Y_{a_{\min}}(i)-Y_0$$

where $\Delta X$ represents the horizontal coordinate distance the camera needs to move: if $\Delta X \ge 0$, the camera moves right by $\Delta X$ X-axis unit lengths, and if $\Delta X < 0$, it moves left by $|\Delta X|$ X-axis unit lengths; $\Delta Y$ represents the vertical coordinate distance the camera needs to move: if $\Delta Y \ge 0$, the camera moves up by $\Delta Y$ Y-axis unit lengths, and if $\Delta Y < 0$, it moves down by $|\Delta Y|$ Y-axis unit lengths; and $(X_0, Y_0)$ represents the coordinates of the center of the fighting image.
In an embodiment of the present application, a possible implementation manner is provided, and the illumination warning module 310 shown in fig. 3 is further configured to:
recording the coordinate point with the most densely gathered people in the fighting scene as

$$(X_c,\,Y_c)=\left(\frac{1}{n_{a_{\min}}}\sum_{i=1}^{n_{a_{\min}}}X_{a_{\min}}(i),\;\frac{1}{n_{a_{\min}}}\sum_{i=1}^{n_{a_{\min}}}Y_{a_{\min}}(i)\right)$$

and controlling the illumination intensity of the strong-light flashlight on the camera according to the distance value of each person in the fighting image from the most dense coordinate point, using the following formula:
Figure BDA0003375040030000113
wherein E represents the illumination intensity control value of the strong-light flashlight on the camera; E_max represents the maximum illumination intensity value of the strong-light flashlight on the camera; K represents the horizontal coordinate width value of the fighting image; and L represents the vertical coordinate length value of the fighting image.
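The intensity formula itself is only given as an image in the source, so the sketch below is a hypothetical stand-in rather than the patented rule: it assumes E falls off linearly with the mean distance of the fighters' head centers from the densest point, normalized by the image diagonal √(K² + L²), and is clamped to [0, E_max]. The actual formula may differ.

```python
import math

def flashlight_intensity(centers, dense_pt, K, L, E_max):
    """Hypothetical illumination control (assumed rule, not the source
    formula): the farther, on average, the fighters stand from the
    densest point relative to the image diagonal, the lower the
    intensity; the result is clamped to [0, E_max]."""
    diag = math.hypot(K, L)  # largest possible in-image distance
    mean_d = sum(math.hypot(x - dense_pt[0], y - dense_pt[1])
                 for x, y in centers) / len(centers)
    return max(0.0, min(E_max, E_max * (1.0 - mean_d / diag)))
```

Under this assumed rule, people clustered exactly at the densest point give full intensity E_max, while a crowd spread toward the image corners drives the value toward zero.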
According to the scheme, the fighting phenomenon can be identified timely and accurately, and an automatic alarm operation is performed when the fighting phenomenon exists, so that the fighting phenomenon can be handled effectively. Further, the head shapes of all the people in the fighting image are identified; the coordinate point where people are most densely gathered in the fighting scene is obtained from the head-to-head distances of the people in the fighting image; the illumination intensity of the strong-light flashlight on the camera is then controlled according to each person's distance from that point, and the camera is aimed so that it irradiates the most dense point in the fighting scene. The fighting phenomenon can thus be handled in time, the fighting in the most serious area can be alleviated to a certain extent, public security management is facilitated, and the personal and property safety of the people is protected.
It can be clearly understood by those skilled in the art that the specific working processes of the system, the apparatus, and the module described above may refer to the corresponding processes in the foregoing method embodiments, and for the sake of brevity, the detailed description is omitted here.
Those of ordinary skill in the art will understand that the technical solution of the present application may, in essence or in whole or in part, be embodied in the form of a software product. The computer software product is stored in a storage medium and includes program instructions that, when executed, enable an electronic device (e.g., a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Alternatively, all or part of the steps of implementing the foregoing method embodiments may be implemented by hardware (an electronic device such as a personal computer, a server, or a network device) associated with program instructions, which may be stored in a computer-readable storage medium, and when the program instructions are executed by a processor of the electronic device, the electronic device executes all or part of the steps of the method described in the embodiments of the present application.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments can be modified or some or all of the technical features can be equivalently replaced within the spirit and principle of the present application; such modifications or substitutions do not depart from the scope of the present application.

Claims (10)

1. An automatic alarm method based on motion recognition is characterized by comprising the following steps:
identifying the action of a person in a video acquired by a camera to obtain an action identification result;
judging whether a fighting phenomenon exists or not according to the action recognition result;
and if the fighting phenomenon exists, executing automatic alarm operation.
2. The automatic alarm method based on motion recognition according to claim 1, wherein the recognizing the motion of the person in the video collected by the camera to obtain the motion recognition result comprises:
inputting the video collected by the camera into an action recognition model to obtain an action recognition result output by the action recognition model, wherein the action recognition model is configured to extract spatial features and temporal features from the video collected by the camera, perform deep temporal feature weighting to obtain action feature values, and perform action recognition on the video collected by the camera based on the action feature values.
3. The automatic alarm method based on motion recognition according to claim 1 or 2, wherein if it is determined that there is a fighting phenomenon, performing an automatic alarm operation, including:
if the fighting phenomenon exists, acquiring a corresponding fighting image in the video;
and uploading the corresponding fighting image in the video to a public security network system to execute automatic alarm operation.
4. The automatic alarm method based on motion recognition according to claim 3, further comprising:
identifying the head shapes of all people in the fighting image;
obtaining the coordinate point with the most densely gathered people in the fighting scene according to the head-to-head distances of the people in the fighting image;
controlling the illumination intensity of the strong-light flashlight on the camera according to the distance value of each person in the fighting image from the most dense coordinate point, and aiming the camera to irradiate the most dense coordinate point in the fighting scene.
5. The automatic alarm method based on motion recognition according to claim 4, wherein a plane rectangular coordinate system is established with the lower-left corner vertex of the fighting image as the origin, the left border of the fighting image as the Y-axis pointing upwards, and the lower border of the fighting image as the X-axis pointing rightwards; the unit length of the X-axis of the plane rectangular coordinate system is the distance value between two horizontally adjacent pixel points of the fighting image, and the unit length of the Y-axis is the distance value between two vertically adjacent pixel points of the fighting image, so that the coordinate value of each pixel point in the fighting image can be obtained from the plane rectangular coordinate system;
obtaining the coordinate point with the most densely gathered people in the fighting scene from the head-to-head distance of each person in the fighting image by using the following formula:

$$G_a=\sum_{b=1}^{m}\sqrt{\left(\frac{1}{n_a}\sum_{i=1}^{n_a}X_a(i)-\frac{1}{n_b}\sum_{i=1}^{n_b}X_b(i)\right)^2+\left(\frac{1}{n_a}\sum_{i=1}^{n_a}Y_a(i)-\frac{1}{n_b}\sum_{i=1}^{n_b}Y_b(i)\right)^2}$$

substituting a = 1, 2, …, m into the above formula, finding the value of a that minimizes G_a and denoting it as a_min; the coordinate point with the most densely gathered people in the fighting scene is then

$$\left(\frac{1}{n_{a_{\min}}}\sum_{i=1}^{n_{a_{\min}}}X_{a_{\min}}(i),\;\frac{1}{n_{a_{\min}}}\sum_{i=1}^{n_{a_{\min}}}Y_{a_{\min}}(i)\right)$$

wherein G_a represents the sum of the distances between the head of the a-th person and the heads of all persons in the fighting image; (X_a(i), Y_a(i)) represents the i-th coordinate value in the head region of the a-th person in the recognized fighting image; (X_b(i), Y_b(i)) represents the i-th coordinate value in the head region of the b-th person in the recognized fighting image; n_a represents the number of pixel points in the head region of the a-th person in the recognized fighting image; n_b represents the number of pixel points in the head region of the b-th person in the recognized fighting image; and m represents the total number of people in the recognized fighting image.
6. The automatic alarm method based on motion recognition according to claim 5, wherein the camera is controlled to aim at the most dense coordinate point in the fighting scene according to the following formula:

$$\Delta X=\frac{1}{n_{a_{\min}}}\sum_{i=1}^{n_{a_{\min}}}X_{a_{\min}}(i)-X_0,\qquad\Delta Y=\frac{1}{n_{a_{\min}}}\sum_{i=1}^{n_{a_{\min}}}Y_{a_{\min}}(i)-Y_0$$

wherein ΔX represents the coordinate distance by which the camera needs to move horizontally: if ΔX ≥ 0, the camera moves right by ΔX X-axis unit lengths, and if ΔX < 0, the camera moves left by |ΔX| X-axis unit lengths; ΔY represents the coordinate distance by which the camera needs to move vertically: if ΔY ≥ 0, the camera moves up by ΔY Y-axis unit lengths, and if ΔY < 0, the camera moves down by |ΔY| Y-axis unit lengths; and (X_0, Y_0) denotes the coordinates of the center position of the fighting image.
7. The automatic alarm method based on motion recognition according to claim 6, wherein the coordinate point with the most densely gathered people in the fighting scene is recorded as

$$(X_c,\,Y_c)=\left(\frac{1}{n_{a_{\min}}}\sum_{i=1}^{n_{a_{\min}}}X_{a_{\min}}(i),\;\frac{1}{n_{a_{\min}}}\sum_{i=1}^{n_{a_{\min}}}Y_{a_{\min}}(i)\right)$$

and the illumination intensity of the strong-light flashlight on the camera is controlled according to the distance value of each person in the fighting image from the most dense coordinate point by using the following formula:
Figure FDA0003375040020000031
wherein E represents the illumination intensity control value of the strong-light flashlight on the camera; E_max represents the maximum illumination intensity value of the strong-light flashlight on the camera; K represents the horizontal coordinate width value of the fighting image; and L represents the vertical coordinate length value of the fighting image.
8. An automatic alarm system based on action recognition is characterized by comprising:
the identification module is used for identifying the action of a person in the video acquired by the camera to obtain an action identification result;
the judging module is used for judging whether a fighting phenomenon exists according to the action recognition result;
and the alarm module is used for executing automatic alarm operation if the judgment module judges that the fighting phenomenon exists.
9. The automatic alarm system based on motion recognition according to claim 8, wherein the recognition module is further configured to:
inputting the video collected by the camera into an action recognition model to obtain an action recognition result output by the action recognition model, wherein the action recognition model is configured to extract spatial features and temporal features from the video collected by the camera, perform deep temporal feature weighting to obtain action feature values, and perform action recognition on the video collected by the camera based on the action feature values.
10. The automatic alarm system based on motion recognition according to claim 8 or 9, wherein the alarm module is further configured to:
if the fighting phenomenon exists, acquiring a corresponding fighting image in the video;
and uploading the corresponding fighting image in the video to a public security network system to execute automatic alarm operation.
CN202111426217.5A 2021-11-25 2021-11-25 Automatic alarm method and system based on motion recognition Active CN114220165B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111426217.5A CN114220165B (en) 2021-11-25 2021-11-25 Automatic alarm method and system based on motion recognition


Publications (2)

Publication Number Publication Date
CN114220165A true CN114220165A (en) 2022-03-22
CN114220165B CN114220165B (en) 2022-07-08

Family

ID=80698637

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111426217.5A Active CN114220165B (en) 2021-11-25 2021-11-25 Automatic alarm method and system based on motion recognition

Country Status (1)

Country Link
CN (1) CN114220165B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102098492A (en) * 2009-12-11 2011-06-15 上海弘视通信技术有限公司 Audio and video conjoint analysis-based fighting detection system and detection method thereof
CN107103267A (en) * 2016-02-23 2017-08-29 北京文安智能技术股份有限公司 A kind of fight behavioral value method, device based on video
CN107370947A (en) * 2017-07-28 2017-11-21 惠州市伊涅科技有限公司 Area monitoring method
CN107465893A (en) * 2017-07-28 2017-12-12 惠州市伊涅科技有限公司 Colony's close quarters method for early warning
CN111008601A (en) * 2019-12-06 2020-04-14 江西洪都航空工业集团有限责任公司 Fighting detection method based on video
CN111263114A (en) * 2020-02-14 2020-06-09 北京百度网讯科技有限公司 Abnormal event alarm method and device
CN111626199A (en) * 2020-05-27 2020-09-04 多伦科技股份有限公司 Abnormal behavior analysis method for large-scale multi-person carriage scene
CN111860457A (en) * 2020-08-04 2020-10-30 广州市微智联科技有限公司 Fighting behavior recognition early warning method and recognition early warning system thereof
CN111860430A (en) * 2020-07-30 2020-10-30 浙江大华技术股份有限公司 Identification method and device of fighting behavior, storage medium and electronic device
CN112634561A (en) * 2020-12-15 2021-04-09 中标慧安信息技术股份有限公司 Safety alarm method and system based on image recognition


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
彭奕飞: "Research on Recognition of Fighting and Robbery Behaviors in Public Places", China Excellent Master's and Doctoral Theses Full-text Database (Master's), Information Science and Technology series *
高文静: "Research on Human Action Recognition Based on Spatio-temporal Features", China Excellent Master's and Doctoral Theses Full-text Database (Master's), Information Science and Technology series *


Similar Documents

Publication Publication Date Title
CN109670441B (en) Method, system, terminal and computer readable storage medium for realizing wearing recognition of safety helmet
US9985963B2 (en) Method and system for authenticating liveness face, and computer program product thereof
CN106056079B (en) A kind of occlusion detection method of image capture device and human face five-sense-organ
JP4569190B2 (en) Suspicious person countermeasure system and suspicious person detection device
JP2018508875A (en) Method and apparatus for biological face detection
CN111144293A (en) Human face identity authentication system with interactive living body detection and method thereof
CN114842397B (en) Real-time old man falling detection method based on anomaly detection
CN111767823A (en) Sleeping post detection method, device, system and storage medium
CN108171138A (en) A kind of biological information acquisition methods and device
CN110781844A (en) Security patrol monitoring method and device
CN112200108A (en) Mask face recognition method
CN110659588A (en) Passenger flow volume statistical method and device and computer readable storage medium
JP2005071009A (en) Image input device and authentication device using the same
CN114220165B (en) Automatic alarm method and system based on motion recognition
CN114092875A (en) Operation site safety supervision method and device based on machine learning
CN108932465A (en) Reduce the method, apparatus and electronic equipment of Face datection false detection rate
KR102277929B1 (en) Real time face masking system based on face recognition and real time face masking method using the same
CN114764895A (en) Abnormal behavior detection device and method
TW201907329A (en) Entry access system having facil recognition
CN113537165B (en) Detection method and system for pedestrian alarm
Long et al. Video frame deletion and duplication
JP2005084979A (en) Face authentication system, method and program
Pynadath et al. Drowsiness Detection Based on Image Processing with Video Compression
Srivastava et al. Face mask detection using convolutional neural network
CN114220142B (en) Face feature recognition method of deep learning algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant