CN110795963A - Monitoring method, device and equipment based on face recognition - Google Patents


Info

Publication number
CN110795963A
Authority
CN
China
Prior art keywords
monitored object
information
behavior
dangerous
monitoring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810865117.4A
Other languages
Chinese (zh)
Inventor
谢利民
黄成武
孙健峰
刘晓辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Intellifusion Technologies Co Ltd
Original Assignee
Shenzhen Intellifusion Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Intellifusion Technologies Co Ltd filed Critical Shenzhen Intellifusion Technologies Co Ltd
Priority to CN201810865117.4A priority Critical patent/CN110795963A/en
Publication of CN110795963A publication Critical patent/CN110795963A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G06V 40/172 Classification, e.g. identification
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Abstract

The embodiment of the invention discloses a monitoring method, device and equipment based on face recognition. The method comprises the following steps: acquiring video data in a monitoring area; extracting a face image from the video data, recognizing the face image, and identifying whether the monitored object corresponding to the face image is a trusted object; if the monitored object is not a trusted object, acquiring the loitering duration of the monitored object in the monitoring area and acquiring behavior information of the monitored object; sending out alarm information when it is determined that the loitering duration exceeds a preset duration and/or the behavior information of the monitored object is dangerous behavior; and if the monitored object is a trusted object but the monitored object and its accompanying persons are confirmed to have no safety association relationship with each other, sending out alarm information. The technical scheme of the embodiment of the invention helps to screen suspicious persons, reduces the incidence of vicious events, and reduces the probability of false alarms.

Description

Monitoring method, device and equipment based on face recognition
Technical Field
The invention relates to the technical field of monitoring, in particular to a monitoring method, a monitoring device and monitoring equipment based on face recognition.
Background
With the progress and development of society, the requirements of various industries on safety management are increasingly high. At present, most monitoring systems on the market are household monitoring systems with a small monitoring range and poor real-time performance, and they cannot well meet the need for all-round safety monitoring of large places (such as schools and shopping malls), so a large amount of manpower is often needed to patrol each area of such places repeatedly. In particular, vicious injury incidents at kindergartens and primary and middle schools are not uncommon; a suspect generally loiters around the school and waits for an opportunity to act, and when the suspect shows suspicious behavior (such as holding a knife or a stick), an alarm cannot be given in time to heighten the alertness of students and teachers, so tragedy easily occurs.
Disclosure of Invention
The invention mainly aims to provide a monitoring method, device and equipment based on face recognition, so as to solve the problem that, when a suspicious person stays inside or outside a school for too long and/or exhibits dangerous behavior, an alarm reminder cannot be issued in time, which easily leads to a vicious event.
In order to achieve the above object, a first aspect of the embodiments of the present invention provides a monitoring method based on face recognition, including:
acquiring video data in a monitoring area;
extracting a face image from the video data, identifying the face image, and identifying whether a monitored object corresponding to the face image is a trusted object;
if the monitored object is not a trusted object, acquiring loitering duration of the monitored object in a monitoring area, and acquiring behavior information of the monitored object;
determining that the loitering duration exceeds a preset duration and/or the behavior information of the monitored object is dangerous behavior, and sending alarm information;
and if the monitored object is a trusted object but the monitored object and its accompanying persons are confirmed to have no safety association relationship with each other, sending out alarm information.
Further, if the monitored object is a trusted object and the trusted object is monitored to have a plurality of accompanying persons, whether the trusted object has a preset safety association relationship with the plurality of accompanying persons is judged; if the trusted object does not have the preset safety association relationship with at least one of the plurality of accompanying persons, an alarm is sent to remind a video monitor.
A second aspect of the embodiments of the present invention provides a monitoring device based on face recognition, including:
the first acquisition unit is used for acquiring video data in a monitoring area;
the identification unit is used for extracting a face image from the video data, identifying the face image and identifying whether a monitored object corresponding to the face image is a trusted object or not;
a second acquiring unit, configured to acquire a loitering duration of the monitored object in the monitoring area and acquire behavior information of the monitored object when the identifying unit identifies that the monitored object is not a trusted object;
the alarm unit is used for determining that the loitering duration exceeds a preset duration and/or the behavior information of the monitored object is dangerous behavior, and sending alarm information;
and the alarm unit is further used for sending out alarm information when the identification unit identifies that the monitored object is a trusted object and it is confirmed that the monitored object and its accompanying persons have no safety association relationship with each other.
A third aspect of the embodiments of the present invention provides a monitoring device based on face recognition, including: the monitoring device comprises a memory, a processor and a communication interface, wherein the memory is used for storing executable program codes and data, the communication interface is used for the monitoring device to perform communication interaction with external equipment, and the processor is used for calling the executable program codes stored in the memory and executing the steps in the monitoring method based on the face recognition provided by the first aspect.
A fourth aspect of the embodiments of the present invention provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the monitoring method based on face recognition provided in the first aspect.
The scheme of the invention at least comprises the following beneficial effects:
in the embodiment of the invention, a face image is extracted from video data captured in a monitoring area, and the extracted face image is recognized to identify whether the monitored object corresponding to the face image is a pre-recorded trusted object. If the monitored object is not a trusted object, the loitering duration of the monitored object in the monitoring area and the behavior information of the monitored object are acquired, and alarm information is sent out when the loitering duration exceeds a preset duration and/or the behavior information of the monitored object is determined to be dangerous behavior. Further, if the monitored object is a trusted object but the monitored object and its accompanying persons are confirmed to have no safety association relationship with each other, alarm information is also sent out. According to the embodiment of the invention, the identity of the monitored object is confirmed, and alarm processing is carried out when the monitored object is confirmed to be an untrusted object that stays too long and/or exhibits dangerous behavior, so that suspicious persons can be screened and the incidence of vicious events is reduced. In addition, when the monitored object is a trusted object, alarm processing is carried out when the monitored object and its accompanying persons are confirmed to have no safety association relationship, which prevents suspicious persons from being overlooked, reduces the possibility of missed detection, and at the same time reduces the probability of false alarms.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and those skilled in the art can obtain other drawings according to the structures shown in these drawings without creative efforts.
Fig. 1 is a schematic diagram of a network architecture according to an embodiment of the present invention;
fig. 2 is a flowchart of a monitoring method based on face recognition according to an embodiment of the present invention;
fig. 3 is a flowchart of another monitoring method based on face recognition according to an embodiment of the present invention;
fig. 4 is a flowchart of another monitoring method based on face recognition according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a monitoring device based on face recognition according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of another monitoring device based on face recognition according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
First, a network architecture to which the technical solutions of the embodiments of the present application may be applied is described below by way of example with reference to the accompanying drawings.
As shown in fig. 1, an embodiment of the present invention provides a network architecture. The network architecture may include at least a monitoring server and at least one video capture device deployed within a monitoring area. The monitoring server may be understood as the server that executes the monitoring method based on face recognition provided by the embodiment of the present invention, that is, the monitoring device based on face recognition provided by the embodiment of the present invention. The monitoring server records basic information of trusted objects, such as face images, identity information, age and gender. The monitoring area can be a designated monitored place, such as a school or a nursing home, and specifically the entrance area of the corresponding place; the trusted objects may include regular occupants, guardians and the like of the monitored place, for example, at a school the trusted objects may be students of the school, parents of the students, teachers of the school, other staff, etc. The video capture device is used for capturing video data within its capture range, and may specifically be a camera, a snapshot machine, or the like.
Referring to fig. 2, an embodiment of the present invention provides a flow chart of a monitoring method based on face recognition. As shown in fig. 2, the monitoring method based on face recognition may include the following steps:
210. and acquiring video data in the monitored area.
In the embodiment of the invention, the monitoring area is an area that needs safety monitoring; it can be a designated monitored place such as a school or a nursing home, specifically the entrance area of the corresponding place, and there can be one or more monitoring areas. Specifically, the video data of each monitoring area can be acquired through the video capture devices deployed in that monitoring area. One monitoring area can be provided with one or more video capture devices with different shooting angles. The video capture device can be a camera, a snapshot machine, or the like. In practical application, according to actual monitoring requirements, the video data of the whole monitoring area can be acquired in real time, so that all objects in the monitoring area are monitored and omissions or blind spots are avoided.
220. Extracting a face image from the video data, identifying the face image, identifying whether a monitored object corresponding to the face image is a trusted object, and if not, executing step 230; if so, step 250 is performed.
In the embodiment of the invention, image frames containing a face can be obtained from the video data, and the face region is cropped from each image frame by using a face extraction algorithm. Further, face recognition is performed on the cropped face image by using a face recognition algorithm to recognize whether the monitored object (i.e., the person to whom the face belongs) corresponding to the face image is a trusted object. A trusted object is an object whose personal information is stored in the monitoring device in advance; the personal information may include, but is not limited to, basic information such as a face image, identity information, age and gender. Trusted objects may include regular occupants, guardians and the like of the monitored place; for example, at a school, trusted objects may be students of the school, parents of the students, teachers of the school, other staff, etc. It should be noted that the face recognition and image extraction in the embodiment of the present invention can be implemented by using existing algorithms, and the processes are not described in detail herein.
As an optional implementation manner, the recognizing the face image in step 220, and an implementation manner of recognizing whether the monitored object corresponding to the face image is a trusted object may specifically include the following steps:
21) extracting the feature information of the face image, matching the feature information with model data in a pre-stored face database, and executing step 22) if the matching is successful, or executing step 23) if the matching fails;
22) Identifying a monitored object corresponding to the face image as a trusted object;
23) and identifying the monitored object corresponding to the face image as an untrusted object.
In this embodiment, a face database may be stored in the monitoring device in advance, and the face database may include face model data of all trusted objects. Specifically, after the face images of the trusted objects are acquired, feature information is extracted from these face images and a model training operation is performed to obtain the model data, which can later be used as comparison samples for identity confirmation. In operation, feature information is extracted from the captured face image by using a feature extraction algorithm and matched with the model data in the face database; if the matching rate reaches a preset value, the matching is considered successful and the monitored object is a trusted object; if the matching rate is lower than the preset value, the matching is considered to have failed and the monitored object is not a trusted object, that is, the monitored object is an untrusted object. It should be noted that the face feature extraction involved in this embodiment can be implemented by using an existing algorithm, and the feature extraction process is not described in detail herein.
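The following illustrative Python sketch shows one way the matching step above could look, assuming the feature extraction algorithm yields a fixed-length embedding and that cosine similarity stands in for the unspecified matching-rate computation; FACE_DATABASE, its entries and the 0.8 threshold are assumptions for illustration, not values fixed by the disclosure.

```python
import numpy as np

# Hypothetical pre-stored face database: trusted object ID -> model embedding.
# In this embodiment, the model data is trained in advance from trusted objects' face images.
FACE_DATABASE = {
    "teacher_001": np.random.rand(128),
    "parent_017": np.random.rand(128),
}

MATCH_THRESHOLD = 0.8  # assumed "preset value" for the matching rate


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))


def identify_monitored_object(face_embedding: np.ndarray):
    """Return (is_trusted, matched_id) by comparing the extracted feature
    information against every model entry in the face database."""
    best_id, best_score = None, 0.0
    for object_id, model in FACE_DATABASE.items():
        score = cosine_similarity(face_embedding, model)
        if score > best_score:
            best_id, best_score = object_id, score
    if best_score >= MATCH_THRESHOLD:
        return True, best_id   # matching succeeded: trusted object
    return False, None         # matching failed: untrusted object
```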
230. The method comprises the steps of obtaining the loitering duration of a monitored object in a monitoring area, and obtaining behavior information of the monitored object.
Specifically, the loitering duration of the monitored object can be determined according to the duration of the monitored object appearing in the video in the acquired video data.
240. And determining that the loitering time length exceeds a preset time length and/or the behavior information of the monitored object is dangerous behavior, and sending alarm information.
In the embodiment of the invention, the monitoring device can store the preset duration in advance, and the preset duration can be modified according to actual requirements and/or application scenarios. When the loitering duration of the monitored object in the monitoring area is longer than the preset duration, it indicates that the monitored object may be lingering too long with a purpose, and an alarm prompt can be sent out at this time. And/or, when the behavior information of the monitored object is dangerous behavior, alarm information is also sent out. Sending the alarm information may specifically be sending the alarm information to predetermined persons, and the predetermined persons may include, but are not limited to, at least one of a video monitor, security personnel, the monitored object, and the like. When the alarm information is sent to the video monitor, the monitoring device can send the alarm information to the terminal of the video monitor bound to the monitoring device, so that the video monitor can be notified immediately even when not in the monitoring room, preventing a suspicious person from seizing an opportunity because the notification was not delivered in time. The video monitor can also be a person on duty in the monitoring room, and when the monitoring device detects a suspicious person, it can directly draw the monitor's attention by ringing and/or displaying text. When the alarm information is sent to security personnel, the monitoring device can send the alarm information to the terminals of the security personnel bound to the monitoring device, so that the security personnel can take precautionary measures at the first opportunity. When the alarm information is sent to the monitored object, the monitoring device can control voice players in or around the monitoring area to deliver the alarm information to the monitored object and remind the suspicious person to stop the dangerous behavior, thereby increasing the possibility that the suspicious person gives up the attempt and reducing the incidence of vicious events. The terminal referred to herein may include, but is not limited to, a smart terminal such as a smart phone or a tablet computer.
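As a hedged sketch of the alarm decision described above (the disclosure does not fix concrete data structures or a preset duration value), the following fragment combines the loitering-duration check with the dangerous-behavior check; MonitoredObject, PRESET_DURATION_S and send_alarm are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import Callable, List

PRESET_DURATION_S = 15 * 60  # assumed preset loitering duration (15 minutes)


@dataclass
class MonitoredObject:
    object_id: str
    first_seen_s: float                 # timestamp of first appearance in the area
    last_seen_s: float                  # timestamp of most recent appearance
    behaviors: List[str] = field(default_factory=list)


def loitering_duration(obj: MonitoredObject) -> float:
    # The loitering duration is derived from how long the object
    # appears in the captured video of the monitoring area.
    return obj.last_seen_s - obj.first_seen_s


def should_alarm(obj: MonitoredObject, is_dangerous: Callable[[str], bool]) -> bool:
    # Alarm if the loitering duration exceeds the preset duration
    # and/or any observed behavior is classified as dangerous.
    too_long = loitering_duration(obj) > PRESET_DURATION_S
    dangerous = any(is_dangerous(b) for b in obj.behaviors)
    return too_long or dangerous


def send_alarm(obj: MonitoredObject, recipients=("video_monitor", "security")):
    # Placeholder for pushing alarm information to the predetermined persons'
    # terminals and/or a voice player near the monitoring area.
    for r in recipients:
        print(f"ALARM -> {r}: untrusted object {obj.object_id}")
```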
250. Confirming that the monitored object and its accompanying persons have no safety association relationship with each other, and sending out alarm information.
In the embodiment of the present invention, there may be more than one accompanying person of the monitored object, and determining that the monitored object and its accompanying persons have no safety association relationship may specifically be determining that at least one accompanying person of the monitored object has no safety association relationship with the monitored object. For example, the activity track of the monitored object is determined, and persons who come into contact with the monitored object more than a preset number of times along that activity track are selected by clustering and determined as the accompanying persons of the monitored object. The monitoring device may store safety association relationships in advance, where a safety association relationship can be regarded as an association relationship between another person and the monitored object, and may include, for example, a guardian-ward relationship, a parent-child relationship, a friend relationship, a sibling relationship, a grandparent-grandchild relationship, and the like. Having no safety association relationship means that no such association with the monitored object is recorded. Further, if an accompanying person who has no safety association relationship with the monitored object does have a safety association relationship with another accompanying person, that accompanying person can be excluded, that is, not regarded as a suspicious person.
In the embodiment of the present invention, when it is confirmed that the monitored object and its accompanying persons have no safety association relationship with each other, sending the alarm information may specifically be sending the alarm information to predetermined persons, where the predetermined persons may include, but are not limited to, at least one of a video monitor, security personnel, the monitored object, and the like.
Further, if the monitored object is a trusted object and the trusted object is monitored to have a plurality of accompanying persons (e.g., one or more children), whether the trusted object has a preset safety association relationship with the plurality of accompanying persons (such as the relationship between a parent and his or her own children) is judged; if the trusted object does not have the preset safety association relationship with at least one of the accompanying persons, and that accompanying person does not have a safety association relationship with the other accompanying persons, alarm information is sent out.
That is to say, a trusted object may also perform an untrusted action; for example, the child being taken away by the trusted object may not be his or her own child but another person's child, that is, the accompanying person and the trusted object do not have the preset safety association relationship, which is also dangerous. Through the above mechanism, such untrusted actions of a trusted object can also be warned against, further improving security. In addition, whether the accompanying persons have a safety association relationship among themselves is also confirmed; for example, a trusted object and another family's child are accompanying persons without a safety association relationship, but if the child's own parent is also among the accompanying persons, no alarm information should be sent out. Thus, the present embodiment can also reduce the probability of false alarms.
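A minimal sketch of the accompanying-person check described above, assuming the safety association relationships are pre-stored as pairs of person identifiers; SAFETY_ASSOCIATIONS and the identifiers are illustrative assumptions.

```python
# Hypothetical pre-stored safety associations between person identifiers,
# e.g. guardian/ward, parent/child, sibling relationships.
SAFETY_ASSOCIATIONS = {
    frozenset({"parent_017", "child_203"}),
    frozenset({"teacher_001", "child_203"}),
}


def has_safety_association(a: str, b: str) -> bool:
    return frozenset({a, b}) in SAFETY_ASSOCIATIONS


def companions_trigger_alarm(trusted_id: str, companions: list) -> bool:
    """Alarm when the trusted object lacks a safety association with at least
    one accompanying person AND that person is not covered by another
    accompanying person who does hold a safety association with them
    (e.g. the child's own parent is also walking along)."""
    for c in companions:
        if has_safety_association(trusted_id, c):
            continue
        covered = any(
            has_safety_association(c, other)
            for other in companions
            if other != c
        )
        if not covered:
            return True
    return False
```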
Optionally, the monitoring device can also control a voice player to broadcast a danger warning, for example informing the teachers and students of the school of the danger by voice broadcast, so that they can take precautions as early as possible. In addition, the voice player can be controlled to issue a warning prompt to the suspicious person, reminding the suspicious person in time to stop the dangerous behavior, thereby increasing the possibility that the suspicious person gives up the attempt and reducing the incidence of vicious events. It can be understood that the monitoring device may control the voice player to perform the voice broadcast automatically, or the video monitor or security personnel may manually control the voice player after receiving the alarm information, which is not limited herein.
By implementing the method described in fig. 2, a face image is extracted from the video data captured in the monitoring area, and the extracted face image is recognized to identify whether the monitored object corresponding to the face image is a pre-recorded trusted object. If the monitored object is not a trusted object, the loitering duration of the monitored object in the monitoring area and the behavior information of the monitored object are acquired, and alarm information is sent out when it is determined that the loitering duration exceeds the preset duration and/or the behavior information of the monitored object is dangerous behavior. Further, if the monitored object is a trusted object but the monitored object and its accompanying persons are confirmed to have no safety association relationship with each other, alarm information is also sent out. Therefore, by confirming the identity of the monitored object and carrying out alarm processing when the monitored object is confirmed to be an untrusted object that stays too long and/or exhibits dangerous behavior, suspicious persons can be screened and the incidence of vicious events is reduced. In addition, when the monitored object is a trusted object, alarm processing is carried out when the monitored object and its accompanying persons are confirmed to have no safety association relationship, which prevents suspicious persons from being overlooked, reduces the possibility of missed detection, and at the same time reduces the probability of false alarms.
Referring to fig. 3, an embodiment of the present invention provides a flow chart of another monitoring method based on face recognition. As shown in fig. 3, the monitoring method based on face recognition may include the following steps:
310. and acquiring video data in the monitored area.
320. Extracting a face image from the video data, identifying the face image, identifying whether a monitored object corresponding to the face image is a trusted object, and if not, executing a step 330; if so, step 370 is performed.
330. The method comprises the steps of obtaining the loitering duration of a monitored object in a monitoring area, and obtaining behavior information of the monitored object.
As an optional implementation manner, the acquiring behavior information of the monitored object in step 330 may include the following steps:
31) extracting video information of a monitored object from the video data;
32) and performing behavior analysis on the video information to determine behavior information of the monitored object.
Because the video data contains a large amount of information, some of which may be useless, it is necessary to filter useless information out of the video data collected by the video capture device and keep the useful information; for example, when the monitored object holds one action for a long time, only one frame of image data needs to be extracted.
As an optional implementation manner, step 32) performs behavior analysis on the video information, and a specific implementation manner of determining behavior information of the monitored object may include the following steps:
33) extracting each frame of image and the acquisition time of each frame of image from the video information;
34) identifying limb actions of the monitored object in each frame of image;
35) and determining the behavior information of the monitored object according to the limb action of the monitored object in each frame of image and the acquisition time sequence of each frame of image.
The acquisition time of each frame of image refers to the time at which the frame was captured by the video capture device. The behavior trend of the monitored object can be derived from the limb actions of the monitored object in each frame of image and the chronological order of the frames, and the behavior information of the monitored object over a period of time can be estimated by analyzing more image frames.
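A minimal sketch of this frame-by-frame behavior analysis, assuming some limb-action recognizer is available; estimate_limb_action is a placeholder for whatever pose or action recognition algorithm an implementation actually uses, and the Frame representation is an assumption.

```python
from typing import Callable, List, Tuple

# A frame is represented here only by its capture time and raw image data;
# estimate_limb_action stands in for the unspecified limb-action recognizer.
Frame = Tuple[float, bytes]  # (capture_time_s, image_bytes)


def analyze_behavior(frames: List[Frame],
                     estimate_limb_action: Callable[[bytes], str]) -> List[str]:
    """Order frames by capture time, recognize the limb action in each frame,
    and collapse consecutive identical actions into a behavior sequence."""
    ordered = sorted(frames, key=lambda f: f[0])
    behavior_sequence: List[str] = []
    for _, image in ordered:
        action = estimate_limb_action(image)
        # Keep only one entry when the object holds the same action for a
        # long time, as suggested in the description above.
        if not behavior_sequence or behavior_sequence[-1] != action:
            behavior_sequence.append(action)
    return behavior_sequence
```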
340. And determining that the behavior information of the monitored object is dangerous behavior and the loitering time does not exceed a preset time.
As an optional implementation, the specific implementation of determining that the behavior information of the monitored object is the dangerous behavior in step 340 may include the following steps:
36) judging whether behavior information of the monitored object is matched with at least one pre-stored dangerous behavior;
37) if the behavior information of the monitored object is matched with at least one pre-stored dangerous behavior, determining that the behavior information is the dangerous behavior;
38) and if the behavior information of the monitored object is not matched with all the pre-stored dangerous behaviors, determining that the behavior information is not the dangerous behavior.
In this embodiment, a sample set consisting of a plurality of dangerous behaviors (such as a beating behavior, a tool-holding behavior, a stick-holding behavior, or the like) may be stored in the monitoring device in advance, and the behavior information of the monitored object is matched against these samples. When the matching degree between the behavior information and at least one dangerous behavior in the samples is higher than a preset matching degree, the behavior information can be regarded as dangerous behavior; when the matching degree between the behavior information and every dangerous behavior in the samples is lower than the preset matching degree, the behavior information can be regarded as not being dangerous behavior.
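The following sketch illustrates the matching against pre-stored dangerous behaviors, assuming behaviors are represented as short text labels and using string similarity purely as a stand-in for the unspecified matching-degree computation; the sample list and the 0.8 preset matching degree are assumptions.

```python
import difflib

# Assumed pre-stored dangerous behavior samples and preset matching degree.
DANGEROUS_BEHAVIOR_SAMPLES = ["hitting", "holding knife", "holding stick"]
PRESET_MATCHING_DEGREE = 0.8


def is_dangerous_behavior(behavior: str) -> bool:
    """Behavior counts as dangerous when it matches at least one pre-stored
    dangerous behavior with a matching degree above the preset value."""
    for sample in DANGEROUS_BEHAVIOR_SAMPLES:
        degree = difflib.SequenceMatcher(None, behavior, sample).ratio()
        if degree >= PRESET_MATCHING_DEGREE:
            return True
    return False
```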
350. And determining the danger level corresponding to the dangerous behavior matched with the behavior information according to a pre-stored dangerous behavior level table.
The dangerous behavior level table contains the danger levels corresponding to different dangerous behaviors.
360. And sending out alarm information corresponding to the danger level according to the danger level corresponding to the dangerous behavior matched with the behavior information.
In this embodiment, the monitoring device may store a dangerous behavior level table in advance, where the dangerous behavior level table contains the danger levels corresponding to different dangerous behaviors, and the danger levels of different dangerous behaviors may be the same or different. A higher danger level indicates more dangerous behavior of the monitored object. Corresponding alarm information can be set for each danger level, and the alarm information corresponding to different danger levels can differ. Specifically, the danger level corresponding to the dangerous behavior exhibited by the monitored object can be looked up in the dangerous behavior level table, and the alarm information corresponding to that danger level is sent to the predetermined persons, so that different countermeasures can be taken according to different danger levels and further deterioration of the event is avoided. The predetermined persons may include, but are not limited to, at least one of a video monitor, security personnel, the monitored object, and the like.
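An illustrative sketch of the dangerous behavior level table and the level-dependent alarm; the specific behaviors, levels and alarm actions below are assumptions, not values fixed by the disclosure.

```python
# Assumed danger level table mapping dangerous behaviors to danger levels,
# and assumed alarm actions per level.
DANGER_LEVEL_TABLE = {
    "holding stick": 1,
    "holding knife": 2,
    "hitting": 3,
}

ALARM_BY_LEVEL = {
    1: "notify video monitor",
    2: "notify video monitor and security personnel",
    3: "notify all predetermined persons and trigger voice warning",
}


def alarm_for_dangerous_behavior(matched_behavior: str) -> str:
    # Look up the danger level of the matched behavior and return the
    # alarm action configured for that level (default: lowest level).
    level = DANGER_LEVEL_TABLE.get(matched_behavior, 1)
    return ALARM_BY_LEVEL[level]
```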
370. Confirming that the monitored object and its accompanying persons have no safety association relationship with each other, and sending out alarm information.
By implementing the method described in fig. 3, a face image is extracted from the video data captured in the monitoring area, and the extracted face image is recognized to identify whether the monitored object corresponding to the face image is a pre-recorded trusted object. If the monitored object is not a trusted object, the loitering duration of the monitored object in the monitoring area and the behavior information of the monitored object are acquired; when the behavior information of the monitored object is determined to be dangerous behavior, the danger level of the dangerous behavior is further determined, and corresponding alarm information is sent out according to the danger level. Therefore, different countermeasures can be taken according to different danger levels, and further deterioration of the event is avoided.
Referring to fig. 4, an embodiment of the present invention provides a flowchart of another monitoring method based on face recognition. As shown in fig. 4, the monitoring method based on face recognition may include the following steps:
410. and acquiring video data in the monitored area.
420. Extracting a face image from the video data, identifying the face image, identifying whether a monitored object corresponding to the face image is a trusted object, and if not, executing step 430; if so, step 480 is directed.
430. The method comprises the steps of obtaining the loitering duration of a monitored object in a monitoring area, and obtaining behavior information of the monitored object.
440. Determining that the loitering duration exceeds a preset duration and the behavior information of the monitored object is not dangerous behavior.
450. Personal information of the monitored object is identified.
The personal information of the monitored object includes at least information such as age, gender, dress and expression.
460. And analyzing the danger index of the monitored object according to the personal information to determine the danger index of the monitored object.
In the embodiment of the invention, different personal information corresponds to different risk indexes. For example, the risk indexes corresponding to different age groups can be different, such as a low risk index for children, a high risk index for young and middle-aged adults, and a moderate risk index for the elderly; the risk indexes for different genders can be different, for example, the risk index for a female may be lower than that for a male; the risk indexes corresponding to different styles of dress can be different; and the risk indexes for different expressions can be different, for example, a calm or happy expression may correspond to a lower risk index than an angry or fierce expression. The monitoring device can pre-store risk index samples corresponding to different personal information, extract the personal information of the monitored object, compare it with the pre-stored samples, and comprehensively calculate the risk index of the monitored object.
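A hedged sketch of this risk index analysis, assuming each personal-information attribute maps to a pre-stored per-attribute index and that a plain average combines them; the attribute categories and numeric values below are illustrative assumptions.

```python
from dataclasses import dataclass

# Assumed per-attribute risk index samples; real values would come from the
# monitoring device's pre-stored samples.
AGE_RISK = {"child": 0.1, "young_adult": 0.7, "middle_aged": 0.7, "elderly": 0.4}
GENDER_RISK = {"female": 0.3, "male": 0.5}
EXPRESSION_RISK = {"calm": 0.2, "happy": 0.1, "angry": 0.8}
DRESS_RISK = {"neat": 0.2, "concealing": 0.6}


@dataclass
class PersonalInfo:
    age_group: str
    gender: str
    expression: str
    dress: str


def risk_index(info: PersonalInfo) -> float:
    """Combine the per-attribute risk indexes into one score; a plain average
    is used here purely for illustration."""
    parts = [
        AGE_RISK.get(info.age_group, 0.5),
        GENDER_RISK.get(info.gender, 0.5),
        EXPRESSION_RISK.get(info.expression, 0.5),
        DRESS_RISK.get(info.dress, 0.5),
    ]
    return sum(parts) / len(parts)
```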
470. And determining that the danger index of the monitored object exceeds a preset threshold value, and sending out alarm information.
In the embodiment of the invention, after the risk index of the monitored object is obtained by analysis, it can be compared with a preset threshold stored in the monitoring device in advance. When the risk index is greater than the preset threshold, the risk index of the monitored object can be considered to exceed the standard and the monitored object is regarded as a high-risk person, and an alarm prompt is sent to the video monitor at this time; when the risk index is smaller than the preset threshold, the risk index of the monitored object is considered low and the monitored object is regarded as safe, and no alarm processing is needed at this time. The preset threshold can be modified adaptively according to actual requirements and/or application scenarios.
As an optional implementation manner, when it is determined that the risk index of the monitored object does not exceed the preset threshold, the monitoring method based on face recognition as described in fig. 4 may further include the following steps:
41) acquiring historical monitoring data of a monitored object;
42) weighting the current risk index of the monitored object and the risk index in the historical monitoring data to obtain a weighted risk index;
43) and if the weighted danger index exceeds a preset threshold value, sending out alarm information.
In this embodiment, when the risk index of the monitored object is smaller than the preset threshold, historical monitoring data of the monitored object can be further acquired, and the risk indexes in the historical monitoring data and the currently analyzed risk index are weighted and summed to obtain a weighted risk index. The weighted risk index is then compared with the preset threshold: if the weighted risk index is greater than the preset threshold, the risk index of the monitored object can be considered to exceed the standard and the monitored object is regarded as a high-risk person, and an alarm prompt is sent to the video monitor at this time; if the weighted risk index is still smaller than the preset threshold, the risk index of the monitored object is considered low and the monitored object is regarded as safe, and no alarm processing is needed at this time. The historical monitoring data may be the risk indexes analyzed each time the monitored object appeared in the monitoring area within a preset period, which may be the past week, half month, month, half year, or the like. In this way, suspicious persons can be prevented from being overlooked, and the possibility of missed detection is reduced.
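A minimal sketch of the weighted risk index described above, assuming a simple weighted average of the current risk index and the historical ones; the 0.6/0.4 split and the 0.7 threshold are illustrative assumptions.

```python
def weighted_risk_index(current: float, history: list,
                        current_weight: float = 0.6) -> float:
    """Weighted combination of the current risk index with the risk indexes
    recorded in the historical monitoring data (e.g. the past week or month).
    The 0.6/0.4 split is an illustrative assumption."""
    if not history:
        return current
    historical_avg = sum(history) / len(history)
    return current_weight * current + (1.0 - current_weight) * historical_avg


def needs_alarm(current: float, history: list, threshold: float = 0.7) -> bool:
    # Alarm when either the current risk index or the weighted risk index
    # exceeds the preset threshold (assumed value 0.7).
    if current > threshold:
        return True
    return weighted_risk_index(current, history) > threshold
```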
480. Confirming that the monitored object and its accompanying persons have no safety association relationship with each other, and sending out alarm information.
By implementing the method described in fig. 4, a face image is extracted from the video data captured in the monitoring area, and the extracted face image is recognized to identify whether the monitored object corresponding to the face image is a pre-recorded trusted object. If the monitored object is not a trusted object, the loitering duration of the monitored object in the monitoring area and the behavior information of the monitored object are acquired; when it is determined that the loitering duration exceeds the preset duration but the behavior information of the monitored object is not dangerous behavior, the personal information of the monitored object can be further identified for risk index analysis, and alarm information is sent out when the risk index exceeds the preset threshold. In this way, an alarm can be given in time when the monitored object stays for a long time and is a high-risk person, so that precautionary measures can be taken in advance and the incidence of vicious events is reduced.
Referring to fig. 5, an embodiment of the present invention provides a monitoring apparatus based on face recognition, which can be used to execute the monitoring method based on face recognition provided in the embodiment of the present invention. As shown in fig. 5, the monitoring apparatus may include:
a first obtaining unit 51 for obtaining video data in the monitored area;
the identification unit 52 is configured to extract a face image from the video data, identify the face image, and identify whether a monitored object corresponding to the face image is a trusted object;
a second acquiring unit 53, configured to acquire the loitering time length of the monitored object in the monitoring area and acquire behavior information of the monitored object when the identifying unit 52 identifies that the monitored object is not a trusted object;
an alarm unit 54, configured to determine that the loitering duration exceeds a preset duration and/or behavior information of the monitored object is dangerous behavior, and send alarm information;
and the alarm unit 54 is further configured to send out alarm information when the identification unit 52 identifies that the monitored object is a trusted object and it is confirmed that the monitored object and its accompanying persons have no safety association relationship with each other.
Further, the alarm unit 54 may be configured to: if the identification unit 52 identifies that the monitored object is a trusted object and the trusted object is monitored to have a plurality of accompanying persons (e.g., one or more children), judge whether the trusted object has a preset safety association relationship with the plurality of accompanying persons (such as the relationship between a parent and his or her own children), and send out alarm information if the trusted object does not have the preset safety association relationship with at least one of the accompanying persons and that accompanying person does not have a safety association relationship with the other accompanying persons.
That is to say, a trusted object may also perform an untrusted action; for example, the child being taken away by the trusted object may not be his or her own child but another person's child, that is, the accompanying person and the trusted object do not have the preset safety association relationship, which is also dangerous. Through the above mechanism, such untrusted actions of a trusted object can also be warned against, further improving security. In addition, whether the accompanying persons have a safety association relationship among themselves is also confirmed; for example, a trusted object and another family's child are accompanying persons without a safety association relationship, but if the child's own parent is also among the accompanying persons, no alarm information should be sent out. Thus, the present embodiment can also reduce the probability of false alarms.
Optionally, a specific implementation manner of the second obtaining unit 53 obtaining the behavior information of the monitored object may be: extracting video information of a monitored object from the video data; and performing behavior analysis on the video information to determine behavior information of the monitored object.
Optionally, the second obtaining unit 53 performs behavior analysis on the video information, and a specific implementation manner of determining the behavior information of the monitored object may be: extracting each frame of image and the acquisition time of each frame of image from the video information; identifying limb actions of the monitored object in each frame of image; and determining the behavior information of the monitored object according to the limb action of the monitored object in each frame of image and the acquisition time sequence of each frame of image.
Optionally, the specific implementation of the alarm unit 54 determining that the behavior information of the monitored object is a dangerous behavior may be: judging whether behavior information of the monitored object is matched with at least one pre-stored dangerous behavior; if the behavior information of the monitored object is matched with at least one pre-stored dangerous behavior, determining that the behavior information is the dangerous behavior; and if the behavior information of the monitored object is not matched with all the pre-stored dangerous behaviors, determining that the behavior information is not the dangerous behavior.
Optionally, the monitoring device shown in fig. 5 may further include a determining unit (not shown in the figure), wherein:
a determining unit, configured to determine, according to a pre-stored dangerous behavior level table, a dangerous level corresponding to a dangerous behavior matched with behavior information after the alarm unit 54 determines that the behavior information of the monitored object is a dangerous behavior, where the dangerous behavior level table includes dangerous levels corresponding to different dangerous behaviors;
accordingly, the specific implementation of the alarm unit 54 sending out the alarm information may be: and sending out alarm information corresponding to the danger level according to the danger level corresponding to the dangerous behavior matched with the behavior information.
Optionally, the identifying unit 52 may be further configured to identify personal information of the monitored object when the alarm unit 54 determines that the loitering duration exceeds the preset duration and the behavior information of the monitored object is not dangerous behavior, where the personal information may include at least age, gender, dress and expression;
correspondingly, the monitoring device shown in fig. 5 may further comprise an analysis unit (not shown in the figure), wherein:
the analysis unit is used for carrying out danger index analysis on the monitored object according to the personal information identified by the identification unit 52 and determining the danger index of the monitored object;
and the alarm unit 54 is further configured to determine that the risk index of the monitored object exceeds a preset threshold, and send out alarm information.
Optionally, the monitoring device shown in fig. 5 may further include a third obtaining unit and a weighting unit (not shown in the figure), wherein:
a third obtaining unit, configured to obtain historical monitoring data of the monitored object when the alarm unit 54 determines that the risk index of the monitored object does not exceed the preset threshold;
the weighting unit is used for weighting the current danger index of the monitored object and the danger index in the historical monitoring data to obtain a weighted danger index;
the alarm unit 54 is further configured to determine whether the weighted risk index exceeds a preset threshold, and if so, send an alarm message.
By implementing the monitoring device based on face recognition shown in fig. 5, the identity of the monitored object is confirmed, and alarm processing is carried out when the monitored object is confirmed to be an untrusted object that stays too long and/or exhibits dangerous behavior, so that suspicious persons can be screened and the incidence of vicious events is reduced. In addition, when the monitored object is a trusted object, alarm processing is carried out when the monitored object and its accompanying persons are confirmed to have no safety association relationship, which prevents suspicious persons from being overlooked, reduces the possibility of missed detection, and at the same time reduces the probability of false alarms.
Further, when the behavior information of the monitored object is determined to be dangerous behavior, the danger level of the dangerous behavior is further determined, and corresponding alarm information is sent out according to the danger level. Therefore, different countermeasures can be taken according to different danger levels, and further deterioration of the event is avoided.
In addition, when the stay time is too long, the personal information of the monitored object can be extracted for risk index analysis, and an alarm is sent to remind the video monitor when the risk index exceeds the preset threshold. Therefore, an alarm can be given in time when the monitored object stays for a long time and is a high-risk person, so that precautionary measures can be taken in advance and the incidence of vicious events is reduced.
Referring to fig. 6, another monitoring device based on face recognition according to an embodiment of the present invention may be used to execute the monitoring method based on face recognition according to the embodiment of the present invention. As shown in fig. 6, the monitoring device may include at least a memory 10, at least one processor 20 (such as a CPU), and at least one communication interface 30, through which the monitoring device communicates with external devices. The memory 10, the processor 20 and the communication interface 30 may be communicatively coupled via one or more buses. Those skilled in the art will appreciate that the configuration of the monitoring device shown in fig. 6 does not limit the embodiments of the present invention; it may be a bus configuration or a star configuration, and may include more or fewer components than those shown, combine certain components, or arrange the components differently. Wherein:
The memory 10 may be a high-speed RAM memory or a non-volatile memory, such as at least one disk memory. The memory 10 may optionally be at least one storage device located remotely from the processor 20. As shown in fig. 6, the memory 10 may store executable program codes, data and the like, which is not limited in the embodiment of the present invention.
In the monitoring device shown in fig. 6, the processor 20 may be configured to call the executable program code stored in the memory 10, and perform the following operations:
acquiring video data in a monitoring area;
extracting a face image from the video data, identifying the face image, and identifying whether a monitored object corresponding to the face image is a trusted object;
if the monitored object is not a trusted object, acquiring loitering duration of the monitored object in the monitoring area and acquiring behavior information of the monitored object;
determining that the loitering duration exceeds a preset duration and/or the behavior information of the monitored object is dangerous behavior, and controlling the communication interface 30 to send out alarm information;
if the monitored object is a trusted object and it is confirmed that the monitored object and its accompanying persons have no safety association relationship with each other, controlling the communication interface 30 to send out alarm information.
Further, the processor 20 may be configured to: if the monitored object is a trusted object and the trusted object is monitored to have a plurality of accompanying persons (e.g., one or more children), judge whether the trusted object has a preset safety association relationship with the plurality of accompanying persons (for example, the relationship between a parent and his or her own children); and if the trusted object does not have the preset safety association relationship with at least one of the accompanying persons and that accompanying person does not have a safety association relationship with the other accompanying persons, control the communication interface 30 to send out alarm information.
That is to say, a trusted object may also perform an untrusted action; for example, the child being taken away by the trusted object may not be his or her own child but another person's child, that is, the accompanying person and the trusted object do not have the preset safety association relationship, which is also dangerous. Through the above mechanism, such untrusted actions of a trusted object can also be warned against, further improving security. In addition, whether the accompanying persons have a safety association relationship among themselves is also confirmed; for example, a trusted object and another family's child are accompanying persons without a safety association relationship, but if the child's own parent is also among the accompanying persons, no alarm information should be sent out. Thus, the present embodiment can also reduce the probability of false alarms.
Optionally, the specific implementation of the processor 20 obtaining the behavior information of the monitored object may be:
extracting video information of a monitored object from the video data;
and performing behavior analysis on the video information to determine behavior information of the monitored object.
Optionally, the processor 20 performs behavior analysis on the video information, and the manner of determining the behavior information of the monitored object may specifically be:
extracting each frame of image and the acquisition time of each frame of image from the video information;
identifying limb actions of the monitored object in each frame of image;
and determining the behavior information of the monitored object according to the limb action of the monitored object in each frame of image and the acquisition time sequence of each frame of image.
Optionally, the mode of determining that the behavior information of the monitored object is a dangerous behavior by the processor 20 may specifically be:
judging whether the behavior information of the monitored object is matched with at least one dangerous behavior pre-stored in the memory 10;
if the behavior information of the monitored object is matched with at least one dangerous behavior stored in the memory 10 in advance, determining that the behavior information is the dangerous behavior;
if the behavior information of the monitored object does not match all dangerous behaviors stored in the memory 10 in advance, it is determined that the behavior information is not a dangerous behavior.
Optionally, after determining that the behavior information of the monitored object is dangerous behavior, the processor 20 may be further configured to call the executable program code stored in the memory 10, and perform the following operations:
determining a danger level corresponding to the dangerous behavior matched with the behavior information according to a dangerous behavior level table pre-stored in the memory 10, wherein the dangerous behavior level table contains danger levels corresponding to different dangerous behaviors;
accordingly, the way in which the processor 20 controls the communication interface 30 to send out the alarm information may specifically be:
according to the danger level corresponding to the dangerous behavior matched with the behavior information, the control communication interface 30 sends out the alarm information corresponding to the danger level.
Optionally, when it is determined that the loitering duration exceeds the preset duration and the behavior information of the monitored object is not dangerous behavior, the processor 20 may be further configured to call the executable program code stored in the memory 10, and perform the following operations:
identifying personal information of the monitored object, wherein the personal information includes at least age, gender, dress and expression;
carrying out danger index analysis on the monitored object according to the personal information to determine the danger index of the monitored object;
accordingly, the way in which the processor 20 controls the communication interface 30 to send the alarm reminder to the video monitor may specifically be:
it is determined that the risk index of the monitored object exceeds a preset threshold value stored in the memory 10, and the communication interface 30 is controlled to emit alarm information.
Optionally, the processor 20 may be further configured to call the executable program code stored in the memory 10 to perform the following operations:
when the danger index of the monitored object is determined not to exceed the preset threshold value stored in the memory 10, acquiring historical monitoring data of the monitored object from the memory 10;
weighting the current danger index of the monitored object and the danger index in the historical monitoring data to obtain a weighted danger index;
and judging whether the weighted danger index exceeds the preset threshold value stored in the memory 10, and if so, controlling the communication interface 30 to send out alarm information.
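A minimal sketch of the weighting step, assuming a simple weighted average between the current danger index and the mean of the historical danger indexes (the patent does not fix the weighting scheme or the weights):

```python
from typing import List

def weighted_danger_index(current: float, history: List[float],
                          current_weight: float = 0.6) -> float:
    """Weight the current danger index against the danger indexes recorded
    in the historical monitoring data; the 0.6/0.4 split is an assumption."""
    if not history:
        return current
    historical_mean = sum(history) / len(history)
    return current_weight * current + (1.0 - current_weight) * historical_mean

# A current index of 0.5 is below a 0.6 threshold on its own, but repeatedly
# high historical indexes push the weighted index over the threshold.
print(weighted_danger_index(0.5, [0.8, 0.9, 0.85]) > 0.6)  # True
```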
Optionally, the processor 20 identifies a face image, and a manner of identifying whether a monitored object corresponding to the face image is a trusted object may specifically be:
extracting the feature information of the face image, and matching the feature information with model data in a face database pre-stored in the memory 10;
if the matching is successful, identifying the monitored object corresponding to the face image as a trusted object;
and if the matching fails, identifying the monitored object corresponding to the face image as an untrusted object.
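The matching of feature information against the pre-stored face database might look as follows; the feature vectors, the cosine-similarity measure, and the match threshold are assumptions made for illustration, not the patent's actual model data or matching criterion.

```python
import math
from typing import Dict, List, Optional

# Hypothetical face database pre-stored in the memory 10:
# identifier of a trusted object -> model feature vector.
FACE_DB: Dict[str, List[float]] = {
    "resident_001": [0.12, 0.80, 0.33],
    "staff_007": [0.55, 0.10, 0.92],
}

MATCH_THRESHOLD = 0.95  # hypothetical similarity threshold

def cosine_similarity(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def identify(feature: List[float]) -> Optional[str]:
    """Match the extracted feature information against the model data.
    Returns the identifier of the trusted object on success, or None when
    matching fails (untrusted object)."""
    best_id, best_sim = None, 0.0
    for obj_id, model in FACE_DB.items():
        sim = cosine_similarity(feature, model)
        if sim > best_sim:
            best_id, best_sim = obj_id, sim
    return best_id if best_sim >= MATCH_THRESHOLD else None

print(identify([0.12, 0.80, 0.33]))  # "resident_001" -> trusted object
print(identify([0.90, 0.05, 0.05]))  # None -> untrusted object
```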
Optionally, the manner in which the processor 20 controls the communication interface 30 to send the alarm information may specifically be:
controlling the communication interface 30 to send alarm information to a predetermined person, where the predetermined person may include, but is not limited to, at least one of a video monitor, security personnel, a monitored object, and the like.
By implementing the monitoring device based on face recognition shown in fig. 6, the identity of the monitored object is confirmed; when the monitored object is confirmed to be an untrusted object and stays for too long and/or exhibits dangerous behavior, alarm processing is performed, which helps to investigate suspicious personnel and reduces the incidence rate of malignant events. In addition, when the monitored object is a trusted object, alarm processing is performed once the monitored object is confirmed to be in a non-safety association relationship with its peer personnel, which prevents suspicious personnel from being missed and, at the same time, reduces the probability of false alarms.
Further, when the behavior information of the monitored object is determined to be dangerous behavior, the danger level of the dangerous behavior is also determined, and alarm information corresponding to that danger level is sent out. Therefore, different countermeasures can be taken for different danger levels, avoiding further deterioration of the event.
In addition, when the stay time is too long, the personal information of the monitored object can be extracted for danger index analysis, and when the danger index exceeds the preset threshold, an alarm is sent to remind the video monitor. Therefore, an alarm can be given in time when the monitored object stays for a long time and belongs to high-risk personnel, so that precautionary measures can be taken in advance and the incidence rate of malignant events is reduced.
Specifically, the monitoring device described in the embodiment of the present invention may implement part or all of the processes in the embodiment of the monitoring method based on face recognition described in conjunction with fig. 2, fig. 3, or fig. 4 of the present invention.
An embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements part or all of the processes in the monitoring method embodiments based on face recognition described in fig. 2, fig. 3, or fig. 4.
In particular, the computer program when executed by the processor implements the steps of: acquiring video data in a monitoring area; extracting a face image from the video data, identifying the face image, and identifying whether a monitored object corresponding to the face image is a trusted object; if the monitored object is not a trusted object, acquiring loitering duration of the monitored object in the monitoring area and acquiring behavior information of the monitored object; determining that the loitering duration exceeds a preset duration and/or the behavior information of the monitored object is dangerous behavior, and sending alarm information; and if the monitored object is a trusted object and the monitored object and the peer personnel are confirmed to belong to a non-safety association relation with each other, alarm information is sent out.
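Read end to end, these steps form the decision procedure sketched below. The function signature, the pre-computed inputs (face image, loitering duration, behavior flag, peer list), and the callables are placeholders; the sketch only illustrates how the branches fit together, not any concrete implementation. The peer check shown here is the basic form; the refinement that also considers associations among the peers themselves is described next and sketched after that paragraph.

```python
from typing import Callable, Iterable, Optional

def monitor_step(
    face_image: bytes,
    identify: Callable[[bytes], Optional[str]],        # returns a trusted id or None
    loitering_duration: float,                          # seconds spent in the area
    behavior_is_dangerous: bool,                        # result of behavior analysis
    peers: Iterable[str],                               # peer personnel of the object
    has_safety_relation: Callable[[str, str], bool],    # safety association lookup
    preset_duration: float = 300.0,                     # hypothetical preset duration
) -> Optional[str]:
    """Return alarm information, or None when no alarm is needed."""
    trusted_id = identify(face_image)
    if trusted_id is None:
        # Untrusted object: alarm on long loitering and/or dangerous behavior.
        if loitering_duration > preset_duration or behavior_is_dangerous:
            return "alarm: untrusted object loitering or behaving dangerously"
        return None
    # Trusted object: alarm when a peer has no safety association with it.
    for peer in peers:
        if not has_safety_relation(trusted_id, peer):
            return "alarm: trusted object with non-associated peer personnel"
    return None
```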
Further, the computer program when executed by the processor also performs the steps of: if the monitored object is a trusted object and the trusted object is monitored to have a number of accompanying people (e.g., one or more children), judging whether the trusted object and each of the accompanying people have a preset safety association relationship (for example, the relationship between a parent and the parent's own child); and if the trusted object does not have the preset safety association relationship with at least one of the accompanying people, and that accompanying person does not have the preset safety association relationship with any of the other accompanying people either, controlling the communication interface 30 to send out alarm information.
That is to say, a trusted object may also perform an untrusted action; for example, the child being taken away by the trusted object may not be his or her own child but another person's child, i.e., the accompanying person and the trusted object are not in the preset safety association relationship, which is also dangerous. Through the above mechanism, such untrusted actions of a trusted object can also be warned against, further improving security. In addition, the absence of a safety association relationship among the peer personnel themselves must also be confirmed: for example, a trusted object and another family's child are peer personnel without a safety association relationship, but if that child's parent is also among the peer personnel of the trusted object, no alarm information should be sent out. Thus, the present embodiment can also reduce the probability of false alarms.
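Under the assumption that the safety association relationships can be queried pairwise, the refined peer check described above, including the false-alarm suppression when the accompanying person's own parent (or other associated person) is also present, could be sketched as:

```python
from typing import Callable, Iterable

def should_alarm_for_peers(
    trusted_id: str,
    peers: Iterable[str],
    has_safety_relation: Callable[[str, str], bool],
) -> bool:
    """Alarm only if some peer has no safety association with the trusted
    object AND no safety association with any other peer either."""
    peer_list = list(peers)
    for peer in peer_list:
        if has_safety_relation(trusted_id, peer):
            continue
        covered = any(other != peer and has_safety_relation(peer, other)
                      for other in peer_list)
        if not covered:
            return True
    return False

# Hypothetical relationships: adult_A is associated with child_A only;
# parent_B is associated with child_B.
relations = {("adult_A", "child_A"), ("parent_B", "child_B")}
rel = lambda a, b: (a, b) in relations or (b, a) in relations

# Another family's child is present, but so is that child's parent: no alarm.
print(should_alarm_for_peers("adult_A", ["child_A", "child_B", "parent_B"], rel))  # False
# The same child without the parent present: alarm.
print(should_alarm_for_peers("adult_A", ["child_B"], rel))  # True
```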
Optionally, the manner of acquiring the behavior information of the monitored object by the processor may specifically be: extracting video information of a monitored object from the video data; and performing behavior analysis on the video information to determine behavior information of the monitored object.
Optionally, the processor performs behavior analysis on the video information, and the manner of determining the behavior information of the monitored object may specifically be: extracting each frame of image and the acquisition time of each frame of image from the video information; identifying limb actions of the monitored object in each frame of image; and determining the behavior information of the monitored object according to the limb action of the monitored object in each frame of image and the acquisition time sequence of each frame of image.
Optionally, the manner in which the processor determines that the behavior information of the monitored object is a dangerous behavior may specifically be: judging whether the behavior information of the monitored object is matched with at least one pre-stored dangerous behavior; if the behavior information of the monitored object is matched with at least one pre-stored dangerous behavior, determining that the behavior information is a dangerous behavior; and if the behavior information of the monitored object matches none of the pre-stored dangerous behaviors, determining that the behavior information is not a dangerous behavior.
Optionally, after the processor determines that the behavior information of the monitored object is a dangerous behavior, the computer program further implements the following steps when executed by the processor: determining a danger level corresponding to the dangerous behavior matched with the behavior information according to a pre-stored dangerous behavior level table, wherein the dangerous behavior level table contains danger levels corresponding to different dangerous behaviors; accordingly, the manner of sending the alarm information by the processor may specifically be: and sending out alarm information corresponding to the danger level according to the danger level corresponding to the dangerous behavior matched with the behavior information.
Optionally, when it is determined that the loitering duration exceeds the preset duration and the behavior information of the monitored object is not dangerous behavior, the computer program further implements the following steps when executed by the processor: identifying personal information of the monitored object, wherein the personal information at least comprises age, gender, clothing and facial expression; carrying out danger index analysis on the monitored object according to the personal information to determine the danger index of the monitored object; accordingly, the way for the processor to send the alarm reminder to the video monitor may specifically be: determining that the danger index of the monitored object exceeds a preset threshold value, and sending out alarm information.
Optionally, the computer program when executed by the processor further implements the steps of: when the danger index of the monitored object is determined not to exceed the preset threshold value, acquiring historical monitoring data of the monitored object; weighting the current danger index of the monitored object and the danger index in the historical monitoring data to obtain a weighted danger index; and judging whether the weighted danger index exceeds the preset threshold value, and if so, sending out alarm information.
Optionally, the processor identifies the face image, and the manner of identifying whether the monitored object corresponding to the face image is a trusted object may specifically be: extracting the feature information of the face image, and matching the feature information with model data in a pre-stored face database; if the matching is successful, identifying the monitored object corresponding to the face image as a trusted object; and if the matching fails, identifying the monitored object corresponding to the face image as an untrusted object.
Optionally, the manner in which the processor sends out the alarm information may specifically be: sending alarm information to a predetermined person, where the predetermined person may include, but is not limited to, at least one of a video monitor, security personnel, a monitored object, and the like.
That is, when being executed by a processor, a computer program of the computer-readable storage medium implements part or all of the processes in the embodiments of the monitoring method based on face recognition described in fig. 2, fig. 3, or fig. 4, so as to alarm in time, facilitate taking precautionary measures in advance, and reduce the incidence rate of malignant events.
Illustratively, the computer program of the computer-readable storage medium comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, and the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier wave signal, a telecommunications signal, a software distribution medium, and the like.
It should be noted that, since the computer program of the computer-readable storage medium is executed by the processor to implement the steps of the monitoring method based on face recognition, all the embodiments of the monitoring method based on face recognition are applicable to the computer-readable storage medium, and can achieve the same or similar beneficial effects.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A monitoring method based on face recognition is characterized by comprising the following steps:
acquiring video data in a monitoring area;
extracting a face image from the video data, identifying the face image, and identifying whether a monitored object corresponding to the face image is a trusted object;
if the monitored object is not a trusted object, acquiring the loitering duration of the monitored object in the monitoring area, and acquiring the behavior information of the monitored object;
determining that the loitering duration exceeds a preset duration and/or the behavior information of the monitored object is dangerous behavior, and sending alarm information;
and if the monitored object is a trusted object and the monitored object and the peer personnel are confirmed to belong to a non-safety association relation, sending out alarm information.
2. The method according to claim 1, wherein the obtaining behavior information of the monitored object comprises:
extracting video information of the monitored object from the video data;
and performing behavior analysis on the video information to determine behavior information of the monitored object.
3. The method of claim 2, wherein the performing behavior analysis on the video information to determine behavior information of the monitored object comprises:
extracting each frame of image and the acquisition time of each frame of image from the video information;
identifying limb actions of the monitored object in each frame of image;
and determining the behavior information of the monitored object according to the limb action of the monitored object in each frame of image and the acquisition time sequence of each frame of image.
4. The method of claim 1, wherein the determining that the behavior information of the monitored object is dangerous behavior comprises:
judging whether the behavior information of the monitored object is matched with at least one pre-stored dangerous behavior;
if the behavior information of the monitored object is matched with at least one pre-stored dangerous behavior, determining that the behavior information is the dangerous behavior;
and if the behavior information of the monitored object matches none of the pre-stored dangerous behaviors, determining that the behavior information is not a dangerous behavior.
5. The method according to any one of claims 1-4, wherein after determining that the behavior information of the monitored object is dangerous behavior, the method further comprises:
determining a danger level corresponding to the dangerous behavior matched with the behavior information according to a pre-stored dangerous behavior level table, wherein the dangerous behavior level table contains danger levels corresponding to different dangerous behaviors;
wherein, the sending out the alarm information comprises:
and sending out alarm information corresponding to the danger level according to the danger level corresponding to the dangerous behavior matched with the behavior information.
6. The method of any one of claims 1-4, wherein when it is determined that the loitering duration exceeds a preset duration and the behavior information of the monitored object is not dangerous behavior, the method further comprises:
identifying personal information of the monitored object, wherein the personal information at least comprises age, gender, clothing and facial expression;
carrying out danger index analysis on the monitored object according to the personal information to determine the danger index of the monitored object;
and determining that the danger index of the monitored object exceeds a preset threshold value, and sending out alarm information.
7. The method of claim 6, further comprising:
when the danger index of the monitored object is determined not to exceed the preset threshold value, acquiring historical monitoring data of the monitored object;
weighting the current risk index of the monitored object and the risk index in the historical monitoring data to obtain a weighted risk index;
and judging whether the weighted danger index exceeds the preset threshold value, and if so, sending alarm information.
8. A monitoring device based on face recognition is characterized by comprising:
the first acquisition unit is used for acquiring video data in a monitoring area;
the identification unit is used for extracting a face image from the video data, identifying the face image and identifying whether a monitored object corresponding to the face image is a trusted object or not;
a second acquiring unit, configured to acquire a loitering duration of the monitored object in the monitoring area and acquire behavior information of the monitored object when the identifying unit identifies that the monitored object is not a trusted object;
the alarm unit is used for determining that the loitering duration exceeds a preset duration and/or the behavior information of the monitored object is dangerous behavior, and sending alarm information;
and the alarm unit is also used for, when the identification unit identifies that the monitored object is a trusted object and it is confirmed that the monitored object and the peer personnel belong to a non-safety association relationship, sending out alarm information.
9. A monitoring device based on face recognition is characterized by comprising: the monitoring device comprises a memory, a processor and a communication interface, wherein the memory is used for storing executable program codes and data, the communication interface is used for the monitoring device to carry out communication interaction with an external device, and the processor is used for calling the executable program codes stored in the memory and executing the steps in the monitoring method based on the face recognition according to any one of claims 1 to 7.
10. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, carries out the steps of the method for monitoring based on face recognition according to any one of claims 1 to 7.
CN201810865117.4A 2018-08-01 2018-08-01 Monitoring method, device and equipment based on face recognition Pending CN110795963A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810865117.4A CN110795963A (en) 2018-08-01 2018-08-01 Monitoring method, device and equipment based on face recognition

Publications (1)

Publication Number Publication Date
CN110795963A 2020-02-14

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200214