CN111263114A - Abnormal event alarm method and device - Google Patents


Info

Publication number
CN111263114A
CN111263114A (application CN202010093135.2A)
Authority
CN
China
Prior art keywords
image
target
abnormal event
occurred
person
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010093135.2A
Other languages
Chinese (zh)
Other versions
CN111263114B (en)
Inventor
冯博豪
张小帅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010093135.2A priority Critical patent/CN111263114B/en
Publication of CN111263114A publication Critical patent/CN111263114A/en
Application granted granted Critical
Publication of CN111263114B publication Critical patent/CN111263114B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757 Matching configurations of points or features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B31/00 Predictive alarm systems characterised by extrapolation or other computation using updated historic data

Abstract

An embodiment of the application discloses an abnormal event alarm method and device. One embodiment of the method comprises: acquiring a first image obtained by a first camera shooting a target person in a target site; performing image processing on the first image to determine whether an abnormal event has occurred in the target site; and, if an abnormal event has occurred, sending an alarm instruction based on the event. By processing images of target persons to determine whether an abnormal event has occurred, this embodiment allows conditions in the target site to be monitored in real time and abnormal events to be discovered promptly.

Description

Abnormal event alarm method and device
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to an abnormal event alarm method and device.
Background
With the development of social, economic and cultural undertakings, large-scale events such as academic conferences, exhibitions and sports meetings are becoming more frequent. Such events are generally long in duration, attended by many people, held in large venues and strongly time-bound, so their security is particularly important. Abnormal events have recently occurred frequently at large-scale events, making security issues even more prominent.
Typically, large event venues have video surveillance. During the event, monitoring personnel must watch the surveillance screens continuously and spot abnormal events with the naked eye.
Disclosure of Invention
The embodiment of the application provides an abnormal event alarm method and device.
In a first aspect, an embodiment of the present application provides an abnormal event alarm method, including: acquiring a first image obtained by a first camera shooting a target person in a target site; performing image processing on the first image to determine whether an abnormal event has occurred in the target site; and, if an abnormal event has occurred, sending an alarm instruction based on the event.
In some embodiments, performing image processing on the first image to determine whether an abnormal event has occurred in the target site includes: performing target detection on the first image with a target detection model to determine the category and confidence of any item-related abnormal event present in the first image; and/or extracting spatio-temporal features from the first image with a behavior recognition model and, based on those features, recognizing the category and confidence of any behavioral abnormal event present in the first image; and/or performing expression recognition on the first image with an expression recognition model to determine the target person's expression and, based on that expression, determining the category, confidence and position of any abnormal event occurring in the target site.
In some embodiments, performing image processing on the first image to determine whether an abnormal event has occurred in the target site further includes: if the confidence of the detected abnormal event is below a preset confidence threshold, sending a re-shooting instruction to the first camera, the re-shooting instruction being either a close-range shooting instruction or a multi-angle shooting instruction; acquiring a third image obtained by the first camera shooting the target person again; and performing image processing on the third image to determine whether an abnormal event has occurred in the target site.
In some embodiments, after acquiring the third image obtained by the first camera shooting the target person again, the method further includes: and if the re-shooting instruction is a multi-angle shooting instruction, carrying out image fusion on the multi-angle third image.
In some embodiments, sending an alarm instruction based on the occurred abnormal event includes: performing text classification on a description of the abnormal event with a text classification model to determine the event's level, and sending an alarm instruction based on that level.
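The patent names neither the text classification model nor a level-to-action mapping. The sketch below assumes a hypothetical three-level scheme and stands a toy rule in for the trained classifier, purely to illustrate how a classified level could drive the alarm instruction.

```python
# Assumed level -> action mapping; the patent only says the alarm
# instruction depends on the level output by a text classification model.
ACTIONS = {
    "low":    ["notify_staff"],
    "medium": ["notify_staff", "sound_local_alarm"],
    "high":   ["notify_staff", "sound_local_alarm", "call_security"],
}

def dispatch_alarm(event_text, classify):
    """Classify the event description into a level, then return the
    alarm actions for that level. `classify` stands in for the model."""
    level = classify(event_text)
    return level, ACTIONS[level]

# A toy rule-based classifier in place of the trained model:
toy = lambda t: "high" if "fire" in t else "low"
level, actions = dispatch_alarm("fire near the east exit", toy)
```

In a real deployment `classify` would be the trained text classification model; the dispatch logic stays the same.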
In some embodiments, before acquiring the first image obtained by the first camera shooting the target person in the target site, the method further includes: acquiring a second image obtained by a second camera shooting a person entering the target site; extracting the person's features from the second image; matching those features against a pre-stored participant information set, where each entry comprises a participant's features and information; if the matching succeeds, determining the target person from the matched participant; and if the matching fails, sending an alarm instruction.
In some embodiments, determining the target person from the successfully matched participants comprises: and if the information of the successfully matched participants comprises the historical abnormal event information, determining the successfully matched participants as target personnel.
In some embodiments, before performing image processing on the first image and determining whether an abnormal event occurs in the target site, the method further includes: and performing image preprocessing on the first image, wherein the image preprocessing comprises at least one of image enhancement and image fusion.
In some embodiments, the method further comprises: and tracking the target personnel corresponding to the occurred abnormal event by using the target tracking model, and determining the motion trail of the target personnel corresponding to the occurred abnormal event.
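The target tracking model is not named in the patent. As a minimal stand-in, the sketch below accumulates a motion trail by greedy nearest-centroid association of per-frame detections; the `max_dist` gate is an assumption.

```python
import math

def update_trails(trails, detections, max_dist=50.0):
    """Associate each detected centroid with the nearest existing trail
    (greedy nearest-neighbour); start a new trail if none is close enough.
    `trails` maps a track id to the list of (x, y) points seen so far."""
    next_id = max(trails, default=-1) + 1
    for (x, y) in detections:
        best_id, best_d = None, max_dist
        for tid, pts in trails.items():
            px, py = pts[-1]
            d = math.hypot(x - px, y - py)
            if d < best_d:
                best_id, best_d = tid, d
        if best_id is None:
            trails[next_id] = [(x, y)]
            next_id += 1
        else:
            trails[best_id].append((x, y))
    return trails

# A person moving right across three frames yields one three-point trail.
trails = {}
for frame in [[(10, 10)], [(14, 11)], [(19, 12)]]:
    update_trails(trails, frame)
```

A production tracker would add appearance features and track-termination logic; this only shows how a trail is built up frame by frame.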
In a second aspect, an embodiment of the present application provides an abnormal event warning device, including: the first acquisition unit is configured to acquire a first image obtained by shooting a target person in a target site by a first camera; a processing unit configured to perform image processing on the first image and determine whether an abnormal event occurs in the target site; the first alarm unit is configured to send an alarm instruction based on the abnormal event if the abnormal event occurs.
In some embodiments, the processing unit comprises: a target detection subunit configured to perform target detection on the first image with a target detection model and determine the category and confidence of any item-related abnormal event in the first image; and/or a behavior recognition subunit configured to extract spatio-temporal features from the first image with a behavior recognition model and, based on those features, recognize the category and confidence of any behavioral abnormal event in the first image; and/or an expression recognition subunit configured to perform expression recognition on the first image with an expression recognition model, determine the target person's expression and, based on it, determine the category, confidence and position of any abnormal event occurring in the target site.
In some embodiments, the processing unit further comprises: a sending subunit configured to send a re-shooting instruction (a close-range or multi-angle shooting instruction) to the first camera if the confidence of the detected abnormal event is below a preset confidence threshold; an acquisition subunit configured to acquire a third image obtained by the first camera shooting the target person again; and a processing subunit configured to perform image processing on the third image and determine whether an abnormal event has occurred in the target site.
In some embodiments, the processing unit further comprises: and the fusion subunit is configured to perform image fusion on the third image of multiple angles if the re-shooting instruction is a multiple-angle shooting instruction.
In some embodiments, the alarm unit comprises: the classification subunit is configured to perform text classification on the occurred abnormal events by using a text classification model, and determine the grade of the occurred abnormal events; an alarm subunit configured to send an alarm instruction based on the level of the occurred abnormal event.
In some embodiments, the apparatus further comprises: the second acquisition unit is configured to acquire a second image obtained by shooting the person entering the target place by the second camera; an extraction unit configured to extract features of the person from the second image; the matching unit is configured to match the characteristics of the persons in a prestored participant information set, wherein the participant information in the participant information set comprises the characteristics and information of the participants; the determining unit is configured to determine a target person from the successfully matched participants if the matching is successful; and the second alarm unit is configured to send an alarm instruction if the matching fails.
In some embodiments, the determining unit is further configured to: and if the information of the successfully matched participants comprises the historical abnormal event information, determining the successfully matched participants as target personnel.
In some embodiments, the apparatus further comprises: and the preprocessing unit is configured to perform image preprocessing on the first image, wherein the image preprocessing comprises at least one of image enhancement and image fusion.
In some embodiments, the apparatus further comprises: and the tracking unit is configured to track the target person corresponding to the occurred abnormal event by using the target tracking model and determine the motion track of the target person corresponding to the occurred abnormal event.
In a third aspect, an embodiment of the present application provides an electronic device, including: one or more processors; a storage device having one or more programs stored thereon; when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method as described in any implementation of the first aspect.
In a fourth aspect, the present application provides a computer-readable medium, on which a computer program is stored, which, when executed by a processor, implements the method as described in any implementation manner of the first aspect.
According to the abnormal event alarm method and device provided by the embodiments of the present application, a first image obtained by a first camera shooting a target person in a target site is acquired; the first image is then processed to determine whether an abnormal event has occurred in the target site; and finally, if an abnormal event has occurred, an alarm instruction is sent based on the event. Determining whether an abnormal event has occurred by processing images of target persons allows the site to be monitored in real time and abnormal events to be discovered promptly. Moreover, the whole process requires no manual participation, greatly reducing labor costs.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture to which the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of an abnormal event alert method according to the present application;
FIG. 3 is a flow chart of yet another embodiment of an abnormal event alert method according to the present application;
FIG. 4 is a schematic diagram of an abnormal event alert system;
FIG. 5 is a schematic block diagram of one embodiment of an abnormal event alert device according to the present application;
FIG. 6 is a schematic block diagram of a computer system suitable for use in implementing an electronic device according to embodiments of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 illustrates an exemplary system architecture 100 to which embodiments of the abnormal event alert method or abnormal event alert apparatus of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include cameras 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium to provide communication links between the cameras 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The cameras 101, 102, 103 may be distributed at a plurality of corners of the target site, and are used to take an omnidirectional image of the target site.
The server 105 may provide various services. For example, the server 105 may perform processing such as analysis on data such as the first images acquired from the cameras 101, 102, 103, and determine whether to transmit an alarm instruction based on the processing result.
The server 105 may be hardware or software. When the server 105 is hardware, it may be implemented as a distributed server cluster composed of a plurality of servers, or may be implemented as a single server. When the server 105 is software, it may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services), or as a single piece of software or software module. And is not particularly limited herein.
It should be noted that the abnormal event warning method provided in the embodiment of the present application is generally executed by the server 105, and accordingly, the abnormal event warning apparatus is generally disposed in the server 105.
It should be understood that the number of cameras, networks, and servers in fig. 1 is merely illustrative. There may be any number of cameras, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of an abnormal event alert method according to the present application is shown. The abnormal event alarming method comprises the following steps:
step 201, acquiring a first image obtained by shooting a target person in a target place by a first camera.
In this embodiment, an execution subject of the abnormal event warning method (for example, the server 105 shown in fig. 1) may acquire a first image obtained by shooting a target person in a target location by a first camera (for example, the cameras 101, 102, and 103 shown in fig. 1).
In general, the target site may be the venue of a large-scale event, indoor and/or outdoor. The target person may be anyone who enters the target site, such as a participant in the event. The first cameras may be distributed at multiple corners of the site to shoot it from all directions, monitoring target persons' behavior and the site's safety. The first image may be an image, or a video frame from a video, captured by a first camera after the target person enters the site.
Step 202, image processing is performed on the first image, and whether an abnormal event occurs in the target place is determined.
In this embodiment, the executing entity may perform image processing on the first image to determine whether an abnormal event has occurred in the target site. Specifically, it may run detection or recognition on the first image to determine whether an abnormal event is present in it. Abnormal events may include, but are not limited to, a target person carrying contraband, a target person's belongings being stolen, a quarrel between target persons, a target person suffering a sudden illness, a fire at the target site, and the like.
In step 203, if an abnormal event occurs, an alarm command is sent based on the occurred abnormal event.
In this embodiment, if an abnormal event has occurred, the executing entity may send an alarm instruction based on it. For example, it may send alarm information to the terminal device of the responsible person, prompting them to handle the event promptly. As another example, it may send an alarm instruction to an alarm device in the target site, so that the device reminds or warns target persons by sound, light, air pressure or other means to take corresponding action.
In the abnormal event alarm method provided by this embodiment, a first image obtained by a first camera shooting a target person in a target site is first acquired; the first image is then processed to determine whether an abnormal event has occurred in the target site; and finally, if one has, an alarm instruction is sent based on the event. Determining whether an abnormal event has occurred by processing images of target persons allows the site to be monitored in real time and abnormal events to be discovered promptly, without manual participation, greatly reducing labor costs.
With further reference to FIG. 3, a flow 300 of yet another embodiment of an abnormal event alert method according to the present application is shown. The abnormal event alarming method comprises the following steps:
Step 301, acquiring a second image obtained by the second camera shooting the person entering the target site.
In this embodiment, an execution subject of the abnormal event warning method (for example, the server 105 shown in fig. 1) may acquire a second image obtained by shooting a person entering the target location by the second camera.
Typically, the second cameras may be distributed at the entrance of the target site for photographing the person entering the target site. The second image may be an image or a video frame in a video captured by the second camera when the person enters the target site.
Step 302, extracting features of the person from the second image.
In this embodiment, the execution subject may extract features of the person from the second image. Typically, the second image may include a facial region of a person entering the target site. Thus, the execution subject may extract facial features of the person from the second image.
Step 303, determining whether the characteristics of the person are successfully matched in the pre-stored participant information set.
In this embodiment, the executing entity may match the person's features against a pre-stored participant information set to obtain a matching result. Each entry in the set may include a participant's features and information; the participants may be those registered for the large-scale event, and their information may include, but is not limited to, identity information, historical event participation information, and the like. The executing entity may compare the person's features with the features in each entry one by one: if the matching succeeds, the person entering the target site is a registered participant, and step 304 is executed; if it fails, the person is not a registered participant, and step 305 is executed.
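As an illustration of this matching step, the sketch below compares an extracted facial feature vector against each stored participant entry by cosine similarity. The similarity `threshold`, the record fields and the feature values are assumptions for demonstration, not the patent's actual scheme.

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def match_person(feature, participants, threshold=0.8):
    """Return the best-matching participant record, or None (match failed)."""
    best, best_sim = None, threshold
    for rec in participants:
        sim = cosine_sim(feature, rec["feature"])
        if sim >= best_sim:
            best, best_sim = rec, sim
    return best

# Toy participant set; "history" mimics historical abnormal event info.
participants = [
    {"name": "A", "feature": [0.9, 0.1, 0.4], "history": ["2019 disturbance"]},
    {"name": "B", "feature": [0.1, 0.8, 0.2], "history": []},
]
hit = match_person([0.88, 0.12, 0.41], participants)   # succeeds -> step 304
miss = match_person([-0.5, -0.5, -0.5], participants)  # fails -> step 305
```

A `None` result corresponds to the failed-match branch that triggers the alarm instruction of step 305.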
And step 304, determining target persons from the successfully matched participants.
In this embodiment, if the matching succeeds, the executing entity may determine the target person from the matched participants. For example, it may simply designate every successfully matched participant as a target person. Alternatively, it may designate a matched participant as a target person only if that participant's information includes historical abnormal event information.
Additionally, the second image may also include the body region of the person entering the target site. The executing entity can therefore also identify the objects the target person carries, such as a mobile phone, a computer or a backpack, from the second image and bind them to that person.
Step 305, an alarm instruction is sent.
In this embodiment, if the matching fails, the executing entity may send an alarm instruction. A failed match indicates that a non-participant has entered the target site, so the executing entity may send an alarm as a prompt, for example alarm information to the terminal device of the relevant security staff, or an alarm instruction to an alarm device at the site entrance. Security staff can then manually verify the person's identity and decide whether to admit them.
Step 306, acquiring a first image obtained by the first camera shooting the target person in the target site.
In this embodiment, the specific operation of step 306 is described in detail in step 201 in the embodiment shown in fig. 2, and is not described herein again.
Step 307, image preprocessing is performed on the first image.
In this embodiment, the executing entity may perform image preprocessing on the first image. Preprocessing is needed because the first image captured by a first camera may not be very sharp, and because first images are shot from multiple angles by multiple cameras. It may include, but is not limited to, at least one of image enhancement and image fusion.
Image enhancement mainly increases the contrast of the image. Depending on the lighting, the first image captured by a first camera may have low color saturation and low contrast; the executing entity can then improve the contrast using an OpenCV image enhancement algorithm. Image enhancement may be applied to every frame captured by the first cameras.
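OpenCV offers several enhancement routines (histogram equalization, CLAHE and others). To keep the sketch self-contained, the following implements a plain min-max contrast stretch in pure Python rather than calling OpenCV, as a simple stand-in for the enhancement step.

```python
def stretch_contrast(img, lo=0, hi=255):
    """Linearly stretch pixel values so the darkest pixel maps to `lo`
    and the brightest to `hi`, widening a narrow intensity band."""
    flat = [p for row in img for p in row]
    mn, mx = min(flat), max(flat)
    if mx == mn:  # flat image: nothing to stretch
        return [row[:] for row in img]
    scale = (hi - lo) / (mx - mn)
    return [[round(lo + (p - mn) * scale) for p in row] for row in img]

# A low-contrast 2x2 grayscale patch: values crowded into [100, 130].
low_contrast = [[100, 110], [120, 130]]
enhanced = stretch_contrast(low_contrast)
```

With NumPy arrays the same idea is a one-liner, and OpenCV's `cv2.equalizeHist` gives a histogram-based alternative.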
Image fusion mainly merges multi-angle images into a single image. Because a single camera is easily occluded, several first cameras are needed to shoot from multiple angles. The executing entity may fuse the multi-angle first images with the multi-view fusion algorithm PVNet, which learns features from a point cloud and multi-view images and fuses them via an embedded attention mechanism; its point cloud branch network is DGCNN and its multi-view image branch network is MVCNN. Note that image fusion is generally applied only once an abnormal event has been found, so that the event can be presented from all directions.
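PVNet's attention-based fusion is far more involved than can be shown here; the sketch below only illustrates the final idea of combining multi-angle views with per-view weights (fixed here, learned in PVNet), assuming same-sized grayscale views.

```python
def fuse_views(views, weights):
    """Fuse same-sized grayscale views pixel-wise with normalised weights.
    PVNet would learn these weights via attention; fixing them keeps the
    sketch self-contained."""
    total = sum(weights)
    norm = [w / total for w in weights]
    rows, cols = len(views[0]), len(views[0][0])
    return [[round(sum(w * v[r][c] for w, v in zip(norm, views)))
             for c in range(cols)] for r in range(rows)]

# One view has an occluded (dark) pixel; a second angle fills it in.
view_a = [[200, 0], [200, 200]]    # 0 = occluded pixel
view_b = [[200, 180], [200, 200]]
fused = fuse_views([view_a, view_b], weights=[1, 1])
```

An attention mechanism would effectively raise the weight of the unoccluded view at the occluded pixel rather than averaging uniformly.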
Step 308, performing target detection on the first image with a target detection model and determining the category and confidence of any item-related abnormal event in the first image.
In this embodiment, the executing entity may perform target detection on the first image with a target detection model and determine the category and confidence of any item-related abnormal event present in it. Item-related abnormal events may include, but are not limited to, a target person carrying a dangerous item, a target person's carried item being stolen, and the like.
Typically, the target detection model may be Yolo. Yolo's advantage is speed, so it can lock onto regions of interest very quickly. Yolo divides the whole image into a grid of cells and performs detection on each cell; it predicts in a single pass whether each cell contains an item-related abnormal event, along with the event's category and confidence, which makes detection very fast.
Yolo can quickly locate target persons and the items they carry within the target site, and can therefore detect whether a target person is carrying a prohibited item (such as a knife or a lighter) or whether a carried item has gone missing.
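The grid scheme described above can be made concrete by computing which cell of an s x s grid is responsible for a detection, as Yolo assigns each box to the cell containing its centre (the 7x7 grid and 448x448 input below are Yolo v1 defaults, used here as an illustrative assumption).

```python
def grid_cell(cx, cy, img_w, img_h, s=7):
    """Return the (row, col) of the s*s grid cell responsible for a
    detection whose bounding-box centre is (cx, cy)."""
    col = min(int(cx / img_w * s), s - 1)  # clamp boxes on the right edge
    row = min(int(cy / img_h * s), s - 1)  # clamp boxes on the bottom edge
    return row, col

# e.g. a knife detected with its box centre at (320, 100) in a 448x448 frame:
cell = grid_cell(320, 100, 448, 448)
```

Each cell then predicts box coordinates, an objectness score and class probabilities; this function only shows the responsibility assignment.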
Step 309, extracting a spatio-temporal feature from the first image by using the behavior recognition model, and recognizing the category and the confidence of the behavior abnormal event existing in the first image based on the spatio-temporal feature.
In this embodiment, the execution subject may extract a spatiotemporal feature from the first image by using a behavior recognition model, and recognize a category and a confidence level of the presence of the behavior abnormal event in the first image based on the spatiotemporal feature. The behavioral anomaly event may include, but is not limited to, a quarrel event of the target person, a seizure event of the target person, and the like.
In general, the behavior recognition model may be C3D, which extracts spatio-temporal features from video data for action recognition. C3D may include multiple 3D feature extractors that capture motion information across both the spatial and temporal dimensions. It generates multiple information channels from adjacent video frames, performs convolution and sub-sampling separately in each channel, and finally combines the information from all channels into a feature representation used to identify the target person's behavior. Behavior recognition thus allows abnormal behavior of target persons to be monitored in real time.
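One concrete way to see the added temporal dimension is the output-shape arithmetic of a single 3D convolution layer. C3D's standard configuration uses 3x3x3 kernels with padding 1, which preserves a clip's frame count and spatial size (the 16-frame 112x112 clip below follows the common C3D input setting).

```python
def conv3d_out_shape(shape, kernel, stride=1, pad=0):
    """Output (frames, height, width) of one 3D convolution layer,
    using the standard (n + 2p - k) // s + 1 formula per dimension."""
    return tuple((n + 2 * pad - k) // stride + 1
                 for n, k in zip(shape, kernel))

# A 3x3x3 kernel over a 16-frame 112x112 clip with padding 1:
shape = conv3d_out_shape((16, 112, 112), (3, 3, 3), stride=1, pad=1)
```

Unlike a 2D convolution applied frame by frame, the kernel's first dimension spans adjacent frames, which is what lets the layer encode motion.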
Step 310, performing expression recognition on the first image by using an expression recognition model, determining the expression of the target person, and determining the category, confidence, and position of the abnormal event in the target site based on the expression of the target person.
In this embodiment, the executing agent may perform expression recognition on the first image by using an expression recognition model, determine an expression of the target person, and determine a category, a confidence level, and a position of an abnormal event occurring in the target location based on the expression of the target person.
In general, the expression recognition model may be VGG or ResNet. Both models can identify the expression of the target person well. When an abnormal event occurs in a target site, the target persons in the site are usually the first to notice it. Therefore, abnormal events and their positions can be discovered in time by analyzing the expressions and gaze directions of the target persons.
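The idea of localizing an event from where alarmed persons are looking can be sketched as a least-squares intersection of gaze rays on a 2D floor plan. This is an illustrative reconstruction rather than the patent's method; head positions and gaze directions are assumed to come from an upstream model.

```python
import numpy as np

def locate_event(positions, gazes):
    """Least-squares point closest to all gaze rays (2D floor plan).

    positions, gazes: (N, 2) arrays of head positions and gaze directions.
    """
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, g in zip(np.asarray(positions, float), np.asarray(gazes, float)):
        d = g / np.linalg.norm(g)
        M = np.eye(2) - np.outer(d, d)  # projector orthogonal to the ray
        A += M
        b += M @ p
    return np.linalg.solve(A, b)

# Two observers: one at the origin looking along +x, one at (5, 5)
# looking straight down; their gaze rays cross at (5, 0).
estimate = locate_event([[0, 0], [5, 5]], [[1, 0], [0, -1]])
```

With two or more non-parallel gaze rays the normal equations are well conditioned; degenerate (all-parallel) configurations would need a fallback.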
It should be noted that the execution subject may execute at least some of steps 308-310. For example, it may execute all of steps 308-310 in parallel, or execute only one or two of them.
Step 311, if the confidence of the occurred abnormal event is lower than a preset confidence threshold, sending a re-shooting instruction to the first camera.
In this embodiment, if the confidence of the occurred abnormal event is lower than the preset confidence threshold, the execution subject may send a re-shooting instruction to the first camera. The re-shooting instruction may be a close-range shooting instruction or a multi-angle shooting instruction. If it is a close-range shooting instruction, the first camera can zoom in and change its shooting direction as required to obtain a magnified third image. If it is a multi-angle shooting instruction, the first camera can shoot from multiple angles as required to obtain multi-angle third images.
Step 312, acquiring a third image obtained by the first camera re-shooting the target person.
In this embodiment, the executing body may obtain a third image obtained by re-shooting the target person by the first camera.
In some optional implementations of this embodiment, if the re-shooting instruction is a multi-angle shooting instruction, the execution subject further needs to perform image fusion on the multi-angle third images.
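The multi-angle fusion step might, in its simplest form, be a pixel-wise average of the views once they have been registered to a common frame. The sketch below assumes the alignment has already been done (a real system would first register the views, e.g. by homography).

```python
import numpy as np

def fuse(images):
    """Pixel-wise average of a list of aligned, same-size images."""
    stack = np.stack([np.asarray(img, dtype=float) for img in images])
    return stack.mean(axis=0)

# Two toy 2x2 "views" of the same scene fused into one image.
fused = fuse([np.zeros((2, 2)), np.full((2, 2), 200.0)])
```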
Step 313, performing image processing on the third image to determine whether an abnormal event occurs in the target site.
In this embodiment, the execution subject may perform image processing on the third image to determine whether an abnormal event occurs in the target site. That is, the execution subject may perform target detection, behavior recognition, or expression recognition on the third image.
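Steps 311-313 together form a confirm-by-re-shooting loop, which might be sketched as follows. Here `detect`, `camera`, the threshold value, and the retry count are illustrative stand-ins for the detection model and the controllable first camera, not interfaces defined by the patent.

```python
# Assumed confidence threshold below which a re-shoot is requested.
CONF_THRESHOLD = 0.8

def confirm_event(detect, camera, first_image, max_retries=2):
    """Re-detect on re-shot images until confidence clears the threshold."""
    category, confidence = detect(first_image)
    retries = 0
    while confidence < CONF_THRESHOLD and retries < max_retries:
        # Close-range (or multi-angle) re-shooting instruction to the camera.
        image = camera.reshoot(mode="close_range")
        category, confidence = detect(image)
        retries += 1
    return category, confidence, confidence >= CONF_THRESHOLD

# Toy stand-ins: the first detection is uncertain, the re-shot one is not.
class FakeCamera:
    def reshoot(self, mode):
        return "zoomed_image"

results = iter([("fire", 0.5), ("fire", 0.9)])
outcome = confirm_event(lambda img: next(results), FakeCamera(), "first_image")
```

In this toy run, the low-confidence first detection triggers one close-range re-shoot, after which the event is confirmed.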
Step 314, if an abnormal event occurs, performing text classification on the abnormal event by using a text classification model, and determining the level of the abnormal event.
In this embodiment, if an abnormal event occurs, the execution body may perform text classification on the abnormal event by using a text classification model, and determine the level of the abnormal event.
Typically, the text classification model may be ERNIE. ERNIE is currently one of the best-performing models in natural language processing. ERNIE may determine the level of an abnormal event based on the category of the event that occurred. For example, the urgency of abnormal events may be classified into five levels, A, B, C, D, and E, from high to low. An item-loss event of the target person corresponds to level E; a quarrel event involving the target person corresponds to level D; the target person carrying a prohibited item corresponds to level C; a seizure suffered by the target person corresponds to level B; and a fire at the target site corresponds to level A.
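The stated correspondence between event category and urgency level can be captured in a simple lookup. In the patent an ERNIE-based classifier produces the level, so the table below only illustrates the example mapping; the category keys are invented English names for the events described in the text.

```python
# Illustrative category-to-level mapping (A is the most urgent).
LEVELS = {
    "item_lost": "E",        # target person's carried item is lost
    "quarrel": "D",          # target persons quarrel
    "prohibited_item": "C",  # target person carries a prohibited item
    "seizure": "B",          # target person suffers a seizure
    "site_fire": "A",        # fire at the target site
}

def event_level(category):
    """Map an abnormal-event category to its urgency level."""
    return LEVELS[category]
```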
Step 315, sending an alarm instruction based on the level of the occurred abnormal event.
In this embodiment, the execution subject may send an alarm instruction based on the level of the occurred abnormal event.

For level E, corresponding to an item-loss event of the target person, the execution subject may send an item-loss prompt to the terminal device of the target person. If the target person has simply lent the item, this prompt can be ignored. If the target person confirms that the item is lost, a tracking request can be sent to the execution subject through the terminal device, and the execution subject can then send item-loss handling information to the terminal devices of the relevant security personnel, prompting them to track and handle the event in time.

For level D, corresponding to a quarrel event involving the target person, the execution subject may send a voice prompt instruction to the alarm, which prompts the target persons by voice to observe the order of the venue. In addition, the execution subject may send quarrel handling information to the terminal devices of the relevant security personnel so that they can intervene in time.

For level C, corresponding to the target person carrying a prohibited item, the execution subject may send prohibited-item handling information to the terminal devices of the relevant security personnel, prompting them to track and handle the event in time. In addition, the execution subject may send a voice alarm instruction to the alarm, which broadcasts a danger warning signal.

For level B, corresponding to a seizure suffered by the target person, the execution subject may send a voice prompt instruction to the alarm, which announces the seizure by voice. In addition, the execution subject may send seizure handling information to the terminal devices of the relevant medical staff so that they can rescue the person in time.

For level A, corresponding to a fire at the target site, the execution subject may send a voice alarm instruction to the alarm, which broadcasts a danger warning signal. It may also send a text alarm instruction to a display screen, which shows safe-evacuation prompts, and send fire handling information to the terminal devices of the relevant firefighters so that they can extinguish the fire in time.
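The per-level responses described above suggest a dispatch table from level to actions. The action identifiers below are invented labels for the recipients named in the text, not an API of any real alarm system.

```python
# Illustrative level-to-actions dispatch table (step 315).
ACTIONS = {
    "E": ["notify_owner"],
    "D": ["voice_prompt", "notify_security"],
    "C": ["notify_security", "voice_alarm"],
    "B": ["voice_prompt", "notify_medical_staff"],
    "A": ["voice_alarm", "screen_evacuation_text", "notify_firefighters"],
}

def dispatch(level):
    """Return the ordered alarm actions for a given urgency level."""
    return ACTIONS[level]
```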
Step 316, tracking the target person corresponding to the occurred abnormal event by using the target tracking model, and determining the motion trajectory of that target person.
In this embodiment, the executing entity may track the target person corresponding to the occurred abnormal event by using the target tracking model, and determine the motion trajectory of the target person corresponding to the occurred abnormal event.
Generally, the target person corresponding to an abnormal event mixes into the crowd very quickly after the event occurs. The target tracking model can lock onto the relevant target person and track them continuously, assisting the relevant responsible personnel until the target person is caught. For single-target tracking, the target tracking model may be SiamRPN++; for multi-target tracking, it may be YOLOv3 + DeepSORT. SiamRPN++ is currently one of the most effective target tracking models: its detection box is very stable during tracking, does not jitter, and can lock onto a single target person well. YOLOv3 + DeepSORT can distinguish different target persons, as well as the same target person at different times, so it can perform multi-target tracking. Moreover, by automatically analyzing and extracting trajectory features, multiple target persons can be tracked more accurately.
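The core association step in multi-target tracking can be sketched as matching each new detection to the existing track whose last box overlaps it most (greedy IoU matching). DeepSORT additionally uses appearance features and a Kalman motion model; this minimal version, with invented data structures, is for illustration only.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def associate(tracks, detections, min_iou=0.3):
    """Greedily match detections to track ids; unmatched start new tracks."""
    assignments = {}
    used = set()
    next_id = max(tracks, default=-1) + 1
    for det in detections:
        best_id, best_score = None, min_iou
        for tid, box in tracks.items():
            score = iou(box, det)
            if tid not in used and score > best_score:
                best_id, best_score = tid, score
        if best_id is None:
            best_id, next_id = next_id, next_id + 1
        else:
            used.add(best_id)
        assignments[det] = best_id
    return assignments

# One existing track is re-identified; a far-away detection opens a new track.
tracks = {0: (0, 0, 10, 10), 1: (50, 50, 60, 60)}
detections = [(1, 1, 11, 11), (100, 100, 110, 110)]
assignments = associate(tracks, detections)
```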
In some optional implementations of this embodiment, if an abnormal event occurs, the execution subject may store the first image of the target person corresponding to the abnormal event in the participant information set, to facilitate subsequent verification at any time.
With further reference to fig. 4, a schematic diagram of the abnormal event alarm system is shown. As shown in fig. 4, the abnormal event alarm system may include five parts: image collection and preprocessing, image processing, target tracking, hierarchical alarm, and an information base. Image collection and preprocessing may include three parts: approach image collection, on-site image collection, and image preprocessing. After the approach images are collected, they may be matched against the information base to determine whether a non-participant has entered the target site. Image processing may include three parts, target detection, behavior recognition, and expression recognition, and is used to determine whether an abnormal event occurs in the target site. When an abnormal event occurs in the target site, a hierarchical alarm is raised. Target tracking may include three parts, target locking, single-target tracking, and multi-target tracking, and is used to track the target person corresponding to the abnormal event.
As can be seen from fig. 3, compared with the embodiment corresponding to fig. 2, the flow 300 of the abnormal event alarm method in this embodiment highlights the image processing and hierarchical alarm steps. The scheme described in this embodiment therefore combines artificial intelligence techniques such as target detection, behavior recognition, and expression recognition to improve the accuracy of discovering abnormal events. In addition, because the alarm instruction is sent based on the level of the abnormal event, manpower can be arranged reasonably to handle the corresponding abnormal event.
With further reference to fig. 5, as an implementation of the methods shown in the above-mentioned figures, the present application provides an embodiment of an abnormal event warning apparatus, which corresponds to the embodiment of the method shown in fig. 2, and which can be applied to various electronic devices.
As shown in fig. 5, the abnormal event alarm device 500 of the present embodiment may include: a first acquiring unit 501, a processing unit 502, and a first alarm unit 503. The first acquiring unit 501 is configured to acquire a first image obtained by a first camera shooting a target person in a target site; the processing unit 502 is configured to perform image processing on the first image and determine whether an abnormal event occurs in the target site; and the first alarm unit 503 is configured to send an alarm instruction based on the occurred abnormal event if an abnormal event occurs.
In the present embodiment, in the abnormal event alarm device 500, the specific processing of the first acquiring unit 501, the processing unit 502, and the first alarm unit 503, and the technical effects thereof, can refer to the related descriptions of steps 201-203 in the embodiment corresponding to fig. 2, which are not described herein again.
In some optional implementations of this embodiment, the processing unit 502 includes: a target detection subunit (not shown in the figure) configured to perform target detection on the first image by using a target detection model, and determine a category and a confidence level of an article abnormal event existing in the first image; and/or a behavior recognition subunit (not shown in the figure) configured to extract spatio-temporal features from the first image by using a behavior recognition model, and recognize the category and the confidence coefficient of the behavior abnormal event existing in the first image based on the spatio-temporal features; and/or an expression recognition subunit (not shown in the figure) configured to perform expression recognition on the first image by using an expression recognition model, determine the expression of the target person, and determine the category, confidence and position of the abnormal event occurring in the target place based on the expression of the target person.
In some optional implementations of this embodiment, the processing unit 502 further includes: a sending subunit (not shown in the figure), configured to send a re-shooting instruction to the first camera if the confidence of the occurred abnormal event is lower than a preset confidence threshold, wherein the re-shooting instruction is a close-range shooting instruction or a multi-angle shooting instruction; an acquisition subunit (not shown in the figure) configured to acquire a third image obtained by the first camera shooting the target person again; and a processing subunit (not shown in the figure) configured to perform image processing on the third image and determine whether an abnormal event occurs in the target site.
In some optional implementations of this embodiment, the processing unit 502 further includes: and a fusion subunit (not shown in the figure) configured to perform image fusion on the third image of the multi-angle if the re-shooting instruction is a multi-angle shooting instruction.
In some optional implementations of this embodiment, the first alarm unit 503 includes: a classification subunit (not shown in the figure) configured to perform text classification on the occurred abnormal event by using a text classification model, and determine the level of the occurred abnormal event; an alarm subunit (not shown in the figure) configured to send an alarm instruction based on the level of the occurred abnormal event.
In some optional implementations of the present embodiment, the abnormal event warning apparatus 500 further includes: a second acquisition unit (not shown in the figure) configured to acquire a second image obtained by shooting the person entering the target site by the second camera; an extraction unit (not shown in the figure) configured to extract features of the person from the second image; a matching unit (not shown in the figure) configured to match the characteristics of the persons in a pre-stored participant information set, wherein the participant information in the participant information set comprises the characteristics and information of the participants; a determination unit (not shown in the figure) configured to determine a target person from the successfully matched participants if the matching is successful; and a second alarm unit (not shown) configured to send an alarm command if the matching fails.
In some optional implementations of this embodiment, the determining unit is further configured to: and if the information of the successfully matched participants comprises the historical abnormal event information, determining the successfully matched participants as target personnel.
In some optional implementations of the present embodiment, the abnormal event warning apparatus 500 further includes: a pre-processing unit (not shown in the figure) configured to perform image pre-processing on the first image, wherein the image pre-processing includes at least one of image enhancement and image fusion.
In some optional implementations of the present embodiment, the abnormal event warning apparatus 500 further includes: and a tracking unit (not shown in the figure) configured to track the target person corresponding to the occurred abnormal event by using the target tracking model, and determine a motion track of the target person corresponding to the occurred abnormal event.
Referring now to FIG. 6, a block diagram of a computer system 600 suitable for use in implementing an electronic device (e.g., server 105 of FIG. 1) of an embodiment of the present application is shown. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 6, the computer system 600 includes a Central Processing Unit (CPU)601 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the system 600 are also stored. The CPU 601, ROM 602, and RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
The following components are connected to the I/O interface 605: an input section 606 including a keyboard, a mouse, and the like; an output section 607 including a display such as a cathode ray tube (CRT) or a liquid crystal display (LCD), and a speaker; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as necessary, so that a computer program read therefrom is installed into the storage section 608 as necessary.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611. The computer program performs the above-described functions defined in the method of the present application when executed by a Central Processing Unit (CPU) 601.
It should be noted that the computer readable medium described herein can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or electronic device. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes a first acquisition unit, a processing unit, and a first alarm unit. The names of these units do not constitute a limitation to the unit itself in this case, and for example, the first acquisition unit may also be described as "a unit that acquires a first image obtained by the first camera taking a target person in the target site".
As another aspect, the present application also provides a computer-readable medium, which may be contained in the electronic device described in the above embodiments; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring a first image obtained by shooting a target person in a target place by a first camera; processing the first image to determine whether an abnormal event occurs in the target site; and if the abnormal event occurs, sending an alarm instruction based on the occurred abnormal event.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (20)

1. An abnormal event warning method, comprising:
acquiring a first image obtained by shooting a target person in a target place by a first camera;
performing image processing on the first image, and determining whether an abnormal event occurs in the target site;
and if the abnormal event occurs, sending an alarm instruction based on the occurred abnormal event.
2. The method of claim 1, wherein said image processing said first image to determine if an anomalous event occurred within said target site comprises:
performing target detection on the first image by using a target detection model, and determining the category and the confidence coefficient of the article abnormal event existing in the first image; and/or
Extracting spatiotemporal features from the first image by using a behavior recognition model, and recognizing the category and the confidence coefficient of the behavior abnormal event existing in the first image based on the spatiotemporal features; and/or
Performing expression recognition on the first image by using an expression recognition model, determining the expression of the target person, and determining the category, confidence and position of the abnormal event in the target place based on the expression of the target person.
3. The method of claim 2, wherein said image processing said first image to determine if an anomalous event occurred within said target site further comprises:
if the confidence coefficient of the abnormal event is lower than a preset confidence coefficient threshold value, sending a re-shooting instruction to the first camera, wherein the re-shooting instruction is a close-range shooting instruction or a multi-angle shooting instruction;
acquiring a third image obtained by shooting the target person again by the first camera;
and carrying out image processing on the third image to determine whether an abnormal event occurs in the target site.
4. The method of claim 3, wherein after said obtaining a third image obtained by said first camera retaking said target person, further comprising:
and if the re-shooting instruction is the multi-angle shooting instruction, carrying out image fusion on the third image at multiple angles.
5. The method of claim 1, wherein said sending an alarm instruction based on said occurred exception event comprises:
performing text classification on the abnormal events by using a text classification model, and determining the grade of the abnormal events;
an alarm instruction is sent based on the level of the abnormal event that occurred.
6. The method of any one of claims 1-5, wherein prior to said acquiring the first image obtained by the first camera shooting the target person within the target site, further comprising:
acquiring a second image obtained by shooting the person entering the target place by a second camera;
extracting features of the person from the second image;
matching the characteristics of the personnel in a prestored participant information set, wherein the participant information in the participant information set comprises the characteristics and information of the participants;
if the matching is successful, determining target personnel from the successfully matched participants;
and if the matching fails, sending an alarm instruction.
7. The method of claim 6, wherein the determining the target person from the matching successful participants comprises:
and if the information of the successfully matched participants comprises historical abnormal event information, determining the successfully matched participants as target personnel.
8. The method according to one of claims 1-5, wherein prior to said image processing said first image to determine whether an anomalous event occurred within said target site, further comprising:
and performing image preprocessing on the first image, wherein the image preprocessing comprises at least one of image enhancement and image fusion.
9. The method according to one of claims 1-5, wherein the method further comprises:
and tracking the target personnel corresponding to the occurred abnormal event by using the target tracking model, and determining the motion trail of the target personnel corresponding to the occurred abnormal event.
10. An abnormal event warning device comprising:
the first acquisition unit is configured to acquire a first image obtained by shooting a target person in a target site by a first camera;
a processing unit configured to perform image processing on the first image and determine whether an abnormal event occurs in the target site;
the first alarm unit is configured to send an alarm instruction based on the abnormal event if the abnormal event occurs.
11. The apparatus of claim 10, wherein the processing unit comprises:
the target detection subunit is configured to perform target detection on the first image by using a target detection model, and determine the category and the confidence coefficient of the article abnormal event existing in the first image; and/or
A behavior recognition subunit configured to extract spatiotemporal features from the first image using a behavior recognition model, and recognize a category and a confidence level of a behavior abnormal event existing in the first image based on the spatiotemporal features; and/or
The expression recognition subunit is configured to perform expression recognition on the first image by using an expression recognition model, determine the expression of the target person, and determine the category, confidence and position of the abnormal event occurring in the target place based on the expression of the target person.
12. The apparatus of claim 11, wherein the processing unit further comprises:
the sending subunit is configured to send a re-shooting instruction to the first camera if the confidence of the occurred abnormal event is lower than a preset confidence threshold, wherein the re-shooting instruction is a close-range shooting instruction or a multi-angle shooting instruction;
the acquisition subunit is configured to acquire a third image obtained by shooting the target person again by the first camera;
and the processing subunit is configured to perform image processing on the third image and determine whether an abnormal event occurs in the target site.
13. The apparatus of claim 12, wherein the processing unit further comprises:
and the fusion subunit is configured to perform image fusion on the third images in multiple angles if the re-shooting instruction is the multiple-angle shooting instruction.
14. The apparatus of claim 10, wherein the alarm unit comprises:
the classification subunit is configured to perform text classification on the occurred abnormal events by using a text classification model, and determine the grade of the occurred abnormal events;
an alarm subunit configured to send an alarm instruction based on the level of the occurred abnormal event.
15. The apparatus according to one of claims 10-14, wherein the apparatus further comprises:
a second acquisition unit configured to acquire a second image obtained by a second camera shooting a person entering the target site;
an extraction unit configured to extract features of the person from the second image;
a matching unit configured to match the features of the person against a pre-stored participant information set, wherein each item of participant information in the set comprises a participant's features and information;
a determining unit configured to determine a target person from the successfully matched participant if the matching succeeds; and
a second alarm unit configured to send an alarm instruction if the matching fails.
16. The apparatus of claim 15, wherein the determining unit is further configured to:
if the information of the successfully matched participant includes historical abnormal event information, determine the successfully matched participant as the target person.
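Claims 15 and 16 together describe: match an entrant's features against the participant set, alarm on no match, and mark matched participants with historical abnormal events as target persons. A cosine-similarity sketch of that branch logic, with the record layout (`embedding`, `history`) and threshold invented for illustration:

```python
import numpy as np

def cosine(a, b) -> float:
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_entrant(embedding, participants, threshold=0.8):
    """participants: list of dicts with 'embedding' (feature vector) and
    'history' (True if the record notes prior abnormal events).
    Returns ('target', p) for a match with history, ('ok', p) for a
    clean match, or ('alarm', None) when nobody matches well enough."""
    best, best_sim = None, -1.0
    for p in participants:
        sim = cosine(embedding, p["embedding"])
        if sim > best_sim:
            best, best_sim = p, sim
    if best is None or best_sim < threshold:
        return ("alarm", None)
    return ("target", best) if best["history"] else ("ok", best)
```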
17. The apparatus according to one of claims 10-14, wherein the apparatus further comprises:
a preprocessing unit configured to perform image preprocessing on the first image, wherein the image preprocessing includes at least one of image enhancement and image fusion.
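Claim 17 names image enhancement as one preprocessing option but does not fix a method; histogram equalization is one common choice, sketched here for an 8-bit grayscale image (an assumption, not the patent's specified technique):

```python
import numpy as np

def equalize_histogram(gray: np.ndarray) -> np.ndarray:
    """Histogram equalization for an 8-bit grayscale image: remap
    intensity levels so the output CDF is approximately linear."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[gray]
```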
18. The apparatus according to one of claims 10-14, wherein the apparatus further comprises:
a tracking unit configured to track, using a target tracking model, the target person corresponding to the abnormal event that has occurred, and to determine the motion trajectory of that target person.
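The tracking unit of claim 18 recovers a motion trajectory across frames. A greedy nearest-neighbour association sketch stands in for the target tracking model here; detection format, the gating distance, and all names are illustrative assumptions:

```python
import math

def track_trajectory(detections_per_frame, start, max_jump=50.0):
    """Greedy nearest-neighbour tracker. detections_per_frame is a list
    (one entry per frame) of lists of (x, y) detections; start is the
    target's initial position. Detections farther than max_jump from the
    current position are treated as other people and ignored."""
    trajectory = [start]
    current = start
    for detections in detections_per_frame:
        if not detections:
            continue  # target occluded or missed in this frame
        nearest = min(detections, key=lambda d: math.dist(d, current))
        if math.dist(nearest, current) <= max_jump:
            current = nearest
            trajectory.append(current)
    return trajectory
```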
19. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-9.
20. A computer-readable medium on which a computer program is stored, wherein the computer program, when executed by a processor, carries out the method of any one of claims 1-9.
CN202010093135.2A 2020-02-14 2020-02-14 Abnormal event alarm method and device Active CN111263114B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010093135.2A CN111263114B (en) 2020-02-14 2020-02-14 Abnormal event alarm method and device

Publications (2)

Publication Number Publication Date
CN111263114A true CN111263114A (en) 2020-06-09
CN111263114B CN111263114B (en) 2022-06-17

Family

ID=70951117

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010093135.2A Active CN111263114B (en) 2020-02-14 2020-02-14 Abnormal event alarm method and device

Country Status (1)

Country Link
CN (1) CN111263114B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102938058A (en) * 2012-11-14 2013-02-20 南京航空航天大学 Method and system for video driving intelligent perception and facing safe city
US20170124821A1 (en) * 2015-10-28 2017-05-04 Xiaomi Inc. Alarm method and device
CN107818312A (en) * 2017-11-20 2018-03-20 湖南远钧科技有限公司 A kind of embedded system based on abnormal behaviour identification
CN108805071A (en) * 2018-06-06 2018-11-13 北京京东金融科技控股有限公司 Identity verification method and device, electronic equipment, storage medium
CN109918989A (en) * 2019-01-08 2019-06-21 平安科技(深圳)有限公司 The recognition methods of personage's behavior type, device, medium and equipment in monitored picture
CN110428522A (en) * 2019-07-24 2019-11-08 青岛联合创智科技有限公司 A kind of intelligent safety and defence system of wisdom new city

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SHUIWANG JI et al.: "3D Convolutional Neural Networks for Human Action Recognition", IEEE Transactions on Pattern Analysis and Machine Intelligence *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111757069A (en) * 2020-07-10 2020-10-09 广州博冠智能科技有限公司 Monitoring anti-theft method and device based on intelligent doorbell
CN111757069B (en) * 2020-07-10 2022-03-15 广州博冠智能科技有限公司 Monitoring anti-theft method and device based on intelligent doorbell
CN111965726A (en) * 2020-08-12 2020-11-20 浙江科技学院 System and method for inspecting field entrance and exit objects for nuclear power safety
CN111965726B (en) * 2020-08-12 2023-09-08 浙江科技学院 Inspection system and method for field access device for nuclear power safety
WO2022062396A1 (en) * 2020-09-28 2022-03-31 深圳市商汤科技有限公司 Image processing method and apparatus, and electronic device and storage medium
US20220189266A1 (en) * 2020-12-11 2022-06-16 Patriot One Technologies Inc. System and method for real-time multi-person threat tracking and re-identification
CN112633133A (en) * 2020-12-18 2021-04-09 江苏省苏力环境科技有限责任公司 AI-based intelligent water station operation and maintenance method, system, terminal and storage medium
TWI799821B (en) * 2021-03-30 2023-04-21 許維綸 Hazard Prediction and Prevention System
CN113610816A (en) * 2021-08-11 2021-11-05 湖北中烟工业有限责任公司 Automatic detection and early warning method and device for transverse filter tip rod and electronic equipment
CN114220165A (en) * 2021-11-25 2022-03-22 慧之安信息技术股份有限公司 Automatic alarm method and system based on motion recognition
CN115146878A (en) * 2022-09-05 2022-10-04 深圳市海邻科信息技术有限公司 Commanding and scheduling method, system, vehicle-mounted equipment and computer readable storage medium
CN115361504A (en) * 2022-10-24 2022-11-18 中国铁塔股份有限公司 Monitoring video processing method and device and electronic equipment

Also Published As

Publication number Publication date
CN111263114B (en) 2022-06-17

Similar Documents

Publication Publication Date Title
CN111263114B (en) Abnormal event alarm method and device
US10812761B2 (en) Complex hardware-based system for video surveillance tracking
KR101215948B1 (en) Image information masking method of monitoring system based on face recognition and body information
CN106780250B (en) Intelligent community security event processing method and system based on Internet of things technology
US20160019427A1 (en) Video surveillence system for detecting firearms
KR102149832B1 (en) Automated Violence Detecting System based on Deep Learning
US20160351031A1 (en) Warning System and Method Using Spatio-Temporal Situation Data
US20190087464A1 (en) Regional population management system and method
CN108319926A (en) A kind of the safety cap wearing detecting system and detection method of building-site
CN102306304A (en) Face occluder identification method and device
CN111241883B (en) Method and device for preventing cheating of remote tested personnel
CN111091098A (en) Training method and detection method of detection model and related device
CN112084963B (en) Monitoring early warning method, system and storage medium
CN111259682B (en) Method and device for monitoring safety of construction site
KR20210030791A (en) Server, method and computer program for detecting abnormal state of monitoring target video
CN110602453B (en) Internet of things big data intelligent video security monitoring system
CN111126411B (en) Abnormal behavior identification method and device
KR102653485B1 (en) Electronic apparatus for building fire detecting model and method thereof
JP5088463B2 (en) Monitoring system
CN111860187A (en) High-precision worn mask identification method and system
CN113628172A (en) Intelligent detection algorithm for personnel handheld weapons and smart city security system
CN114067396A (en) Vision learning-based digital management system and method for live-in project field test
Ng et al. Surveillance system with motion and face detection using histograms of oriented gradients
CN115797125B (en) Rural digital intelligent service platform
CN109327681B (en) Specific personnel identification alarm system and method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant