CN113822216A - Event detection method, device, system, electronic equipment and storage medium - Google Patents


Publication number
CN113822216A
Authority
CN
China
Prior art keywords
event
abnormal event
image information
snapshot image
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111154083.6A
Other languages
Chinese (zh)
Inventor
刘华凯
吴佳飞
张广程
吴晓明
徐慧敏
张义保
叶建云
冷冰
Current Assignee
Shanghai Sensetime Intelligent Technology Co Ltd
Original Assignee
Shanghai Sensetime Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Sensetime Intelligent Technology Co Ltd filed Critical Shanghai Sensetime Intelligent Technology Co Ltd
Priority to CN202111154083.6A
Publication of CN113822216A
Legal status: Pending

Landscapes

  • Alarm Systems (AREA)

Abstract

The present disclosure relates to an event detection method, an event detection apparatus, an event detection system, an electronic device, and a storage medium, where the method is applied to a server, and the method includes: receiving event information sent by a panoramic view field detection terminal, wherein the event information comprises a target area where an abnormal event occurs and first snapshot image information corresponding to the abnormal event; acquiring second snapshot image information corresponding to the abnormal event from a local view field detection terminal corresponding to the target area; determining a target face image corresponding to the abnormal event by associating the first snapshot image information with the second snapshot image information; and determining the identity information of the agent corresponding to the abnormal event according to the target face image. The embodiment of the disclosure can effectively realize identity positioning of the agent of the abnormal event, and improve the detection and processing efficiency of the abnormal event.

Description

Event detection method, device, system, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an event detection method, an event detection device, an event detection system, an electronic device, and a storage medium.
Background
With the rapid development of computer vision and video security hardware, it has become practical, based on intelligent video analysis, to automatically identify abnormal events such as illegal or uncivil behavior and to store the related images as evidence. In the related art, event monitoring is implemented with an intelligent image acquisition terminal that has a large field of view. However, the resolution of such a terminal is low, and it is difficult to capture a clear face, so it is hard to determine the specific identity of the agent corresponding to an abnormal event.
Disclosure of Invention
The disclosure provides an event detection method, an event detection device, an event detection system, electronic equipment and a storage medium.
According to an aspect of the present disclosure, there is provided an event detection method, which is applied to a server, the method including: receiving event information sent by a panoramic view field detection terminal, wherein the event information comprises a target area where an abnormal event occurs and first snapshot image information corresponding to the abnormal event; acquiring second snapshot image information corresponding to the abnormal event from a local view field detection terminal corresponding to the target area; determining a target face image corresponding to the abnormal event by associating the first snapshot image information with the second snapshot image information; and determining the identity information of the agent corresponding to the abnormal event according to the target face image.
In a possible implementation manner, the first snapshot image information includes a panoramic human body snapshot, and the second snapshot image information includes a close-up human body snapshot; the determining the target face image corresponding to the abnormal event by associating the first snapshot image information with the second snapshot image information includes: performing feature extraction on the panoramic human body snapshot to obtain panoramic human body features, and performing feature extraction on the close-up human body snapshot to obtain close-up human body features; performing feature matching on the panoramic human body features and the close-up human body features; under the condition that the panoramic human body features and the close-up human body features are successfully matched, determining whether a face snapshot image which is associated with the close-up human body snapshot image exists in the second snapshot image information or not; and under the condition that the second snapshot image information comprises the face snapshot image which is associated with the close-up human body snapshot image, determining the face snapshot image which is associated with the close-up human body snapshot image as the target face image.
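The association logic of this implementation can be sketched in Python as follows. This is a minimal illustration assuming feature vectors have already been extracted; `cosine_similarity`, the 0.8 threshold, and the tuple layout of the close-up items are illustrative choices, not details fixed by the disclosure:

```python
def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

def find_target_face(panoramic_feature, close_up_items, threshold=0.8):
    """Return the face snapshot associated with the close-up body snapshot
    whose feature matches the panoramic body feature, or None.

    close_up_items: list of (close_up_body_feature, face_snapshot_or_None);
    the layout and threshold are assumptions for illustration.
    """
    for body_feature, face_snapshot in close_up_items:
        if cosine_similarity(panoramic_feature, body_feature) >= threshold:
            # Body-feature matching succeeded; check whether an associated
            # face snapshot exists in the second snapshot image information.
            if face_snapshot is not None:
                return face_snapshot
    return None
```

When the matched close-up body snapshot has no associated face snapshot, the sketch returns `None`, which corresponds to the fallback implementations described below (checking whether the abnormal event has ended and updating the snapshots).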
In a possible implementation manner, the determining, according to the target face image, identity information of an agent corresponding to the abnormal event includes: extracting the features of the target face image to obtain the target face features; performing feature matching on the target face features and a plurality of reference face features in a target library, wherein each reference face feature in the target library corresponds to identity information; and under the condition that the target library comprises the reference face features successfully matched with the target face features, determining the identity information corresponding to the successfully matched reference face features as the identity information of the agent.
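The target-library lookup described above can be sketched as a best-match search; the dictionary layout of the library and the 0.75 threshold are assumptions made for illustration, not values specified by the disclosure:

```python
def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

def identify_agent(target_face_feature, target_library, threshold=0.75):
    """Match the target face feature against the reference face features in
    the target library; return the identity information of the best match
    above the threshold, or None when no reference feature matches.

    target_library: mapping from identity information to a reference face
    feature (an illustrative structure).
    """
    best_identity, best_score = None, threshold
    for identity, reference_feature in target_library.items():
        score = cosine_similarity(target_face_feature, reference_feature)
        if score >= best_score:
            best_identity, best_score = identity, score
    return best_identity
```

Returning `None` here corresponds to the case, described in a later implementation, where the target library contains no successfully matched reference face feature.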
In one possible implementation, the method further includes: and generating first event warning information according to the event information, the second snapshot image information and the identity information of the agent.
In one possible implementation, the method further includes: determining whether the abnormal event is ended or not under the condition that the target library does not contain the reference human face features successfully matched with the target human face features; and under the condition that the abnormal event is not finished, controlling the local view field detection terminal to update the second snapshot image information.
In one possible implementation, the method further includes: in the event that the panoramic and close-up body features fail to match, determining whether the abnormal event is over; and under the condition that the abnormal event is not finished, controlling the panoramic view field detection terminal to update the first snapshot image information, and/or controlling the local view field detection terminal to update the second snapshot image information.
In one possible implementation, the method further includes: determining whether the abnormal event is ended or not in the case that a face snapshot having an association relationship with the close-up human body snapshot is not included in the second snapshot image information; and under the condition that the abnormal event is not finished, controlling the local view field detection terminal to update the second snapshot image information.
In one possible implementation, the method further includes: and under the condition that the abnormal event is ended, generating second event alarm information according to the event information and the second snapshot image information.
In one possible implementation, the method further includes: sending an acquisition instruction to the local view field detection terminal, wherein the acquisition instruction is used for indicating that the abnormal event occurs in the target area; and/or sending a stop instruction to the local field-of-view detection terminal, wherein the stop instruction is used for indicating that the abnormal event has ended.
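One possible shape for the acquisition and stop instructions, assuming a JSON wire format (the disclosure does not fix any wire format, and every field name below is hypothetical):

```python
import json

def make_instruction(kind, target_area, event_id):
    """Build an acquisition or stop instruction for a local field-of-view
    detection terminal.

    kind: "acquire" indicates the abnormal event is occurring in the
    target area; "stop" indicates the abnormal event has ended.
    """
    if kind not in ("acquire", "stop"):
        raise ValueError(f"unknown instruction kind: {kind!r}")
    return json.dumps({
        "type": kind,
        "target_area": target_area,
        "event_id": event_id,
    })
```

A terminal receiving the `"acquire"` message would start image acquisition for the named area, and stop on the matching `"stop"` message.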
According to an aspect of the present disclosure, there is provided an event detection apparatus, which is applied to a server, the apparatus including: the panoramic view field detection terminal comprises a receiving module, a capturing module and a display module, wherein the receiving module is used for receiving event information sent by the panoramic view field detection terminal, and the event information comprises a target area where an abnormal event occurs and first snapshot image information corresponding to the abnormal event; the acquisition module is used for acquiring second snapshot image information corresponding to the abnormal event from a local view field detection terminal corresponding to the target area; the association module is used for determining a target face image corresponding to the abnormal event by associating the first snapshot image information with the second snapshot image information; and the identity information determining module is used for determining the identity information of the agent corresponding to the abnormal event according to the target face image.
According to an aspect of the present disclosure, there is provided an event detection system, the system including: the system comprises a panoramic view field detection terminal, a local view field detection terminal and a server; the panoramic view field detection terminal sends event information to the server, wherein the event information comprises a target area where an abnormal event occurs and first snapshot image information corresponding to the abnormal event; the server receives the event information and acquires second snapshot image information corresponding to the abnormal event from the local view field detection terminal corresponding to the target area; the server determines a target face image corresponding to the abnormal event by associating the first snapshot image information with the second snapshot image information; and the server determines the identity information of the agent corresponding to the abnormal event according to the target face image.
According to an aspect of the present disclosure, there is provided an electronic device including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
In the embodiment of the disclosure, a server receives event information including a target area where an abnormal event occurs and first snapshot image information corresponding to the abnormal event, and acquires second snapshot image information corresponding to the abnormal event from a local view field detection terminal corresponding to the target area, and associates the first snapshot image information obtained in the panoramic view field and the second snapshot image information obtained in the local close-up view field, so as to determine a target face image corresponding to the abnormal event, and further, according to the target face image, identity information of an agent corresponding to the abnormal event can be quickly determined, thereby effectively implementing identity positioning of the agent of the abnormal event, and improving detection and processing efficiency of the abnormal event.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
FIG. 1 shows a schematic diagram of an event detection system according to an embodiment of the present disclosure;
FIG. 2 illustrates a flow diagram of a method of event detection in accordance with an embodiment of the present disclosure;
FIG. 3 illustrates a workflow diagram of a server according to an embodiment of the present disclosure;
FIG. 4 shows a flow diagram of a method of event detection according to an embodiment of the present disclosure;
FIG. 5 shows a workflow diagram of a panoramic view field detection terminal according to an embodiment of the disclosure;
FIG. 6 shows a flow diagram of a method of event detection according to an embodiment of the present disclosure;
FIG. 7 illustrates a workflow diagram of a local field of view detection terminal according to an embodiment of the disclosure;
FIG. 8 shows a block diagram of an event detection device according to an embodiment of the present disclosure;
FIG. 9 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure;
FIG. 10 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
In public scenes such as communities, shopping malls, office buildings, and public transportation, automatically identifying abnormal events such as illegal or uncivil behavior is of great significance for maintaining public safety. In the related art, an intelligent image acquisition terminal with a large field of view is deployed in the public scene, so that event detection is performed within the field of view of the terminal to determine whether an abnormal event has occurred.
However, because the field of view of the intelligent image acquisition terminal is large, its image resolution is correspondingly low. As a result, the terminal can only detect that an abnormal event has occurred: the captured image of the agent corresponding to the abnormal event has too low a resolution to yield a clear face image, so the specific identity of the agent cannot be determined. In other words, the identity of the agent corresponding to the abnormal event cannot be located, and subsequent processing, such as reminding the agent or stopping the abnormal event, cannot be performed.
The embodiment of the disclosure provides an event detection method, which can be applied to public scenes such as communities, shopping malls, office buildings, public transportation and the like. Fig. 1 shows a schematic diagram of an event detection system of an embodiment of the present disclosure. The event detection system shown in fig. 1 may perform the event detection method of the embodiments of the present disclosure. As shown in fig. 1, the event detection system includes: a panoramic view detection terminal 10, a local view detection terminal 20, and a server 30.
The panoramic view field detection terminal 10 sends event information to the server 30, wherein the event information comprises a target area where an abnormal event occurs and first snapshot image information corresponding to the abnormal event; the server 30 receives the event information and acquires second snapshot image information corresponding to the abnormal event from the local view field detection terminal 20 corresponding to the target area; the server 30 determines a target face image corresponding to the abnormal event by associating the first snap shot image information with the second snap shot image information; the server 30 determines the identity information of the agent corresponding to the abnormal event according to the target face image.
The panoramic view field detection terminal 10 may be an image capturing device with computing capability, and the corresponding view field range is large and the focal length is small. The local view field detection terminal 20 may also be an image acquisition device with computing power, and the corresponding view field range is smaller and the focal length is larger.
In the initialization stage, the system times of the panoramic view detection terminal 10, the local view detection terminal 20, and the server 30 inside the system are synchronized.
The panoramic view field detection terminal 10 detects an abnormal event occurring in the panoramic view field range thereof, and generates event information when the abnormal event is detected, wherein the event information includes a target area where the abnormal event occurs and first snapshot image information corresponding to the abnormal event. Wherein the target area is a local close-up field of view within the panoramic field of view of the panoramic field of view detection terminal 10.
Since the field of view of the panoramic field of view detection terminal 10 is large and the focal length is small, the first snapshot image information only includes the panoramic human snapshot.
A plurality of local view field detection terminals 20 are arranged within the panoramic view field range of the panoramic view field detection terminal 10, and different local view field detection terminals correspond to different local close-up view fields within that panoramic view field range.
The panoramic view field detection terminal 10 detects an abnormal event and generates event information corresponding to the abnormal event, and then transmits the event information to the server 30. The server 30 starts the local view field detection terminal 20 with the local close-up view field as the target area according to the target area included in the event information, and the local view field detection terminal 20 performs image acquisition and image processing on the target area to determine second snapshot image information.
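A sketch of how the server might dispatch to the local view field detection terminal whose close-up field of view is the target area; the `terminals` mapping and the `start_capture` method are hypothetical names introduced only for illustration:

```python
def start_terminal_for_area(target_area, terminals):
    """Select the local field-of-view detection terminal whose local
    close-up field of view is the target area and signal it to start
    image acquisition.

    terminals: mapping from close-up area identifier to a terminal object
    exposing start_capture() (an assumed interface).
    """
    terminal = terminals.get(target_area)
    if terminal is None:
        raise KeyError(f"no local terminal covers area {target_area!r}")
    terminal.start_capture()
    return terminal
```

The started terminal then performs image acquisition and processing on the target area to produce the second snapshot image information.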
Since the local field of view detection terminal 20 has a small field of view range and a large focal length, the second snapshot image information may include a close-up human snapshot and a human face snapshot.
The server 30 acquires the second snapshot image information from the local view field detection terminal 20, associates the first snapshot image information obtained in the panoramic view field and the second snapshot image information obtained in the local close-up view field, and can determine the target face image corresponding to the abnormal event, and further, according to the target face image, can quickly determine the identity information of the agent corresponding to the abnormal event, thereby effectively realizing identity positioning of the agent of the abnormal event, and improving the detection and processing efficiency of the abnormal event.
The following describes in detail specific procedures of the panoramic view detection terminal 10, the local view detection terminal 20, and the server 30 to execute the event detection method according to the embodiment of the present disclosure, respectively.
Fig. 2 shows a flow diagram of a method of event detection according to an embodiment of the present disclosure. The event detection method is applied to the server 30 shown in fig. 1. As shown in fig. 2, the event detection method may include:
in step S21, event information sent by the panoramic view field detection terminal is received, where the event information includes a target area where an abnormal event occurs and first captured image information corresponding to the abnormal event.
In step S22, second captured image information corresponding to the abnormal event is acquired from the local visual field detection terminal corresponding to the target area.
In step S23, a target face image corresponding to the abnormal event is determined by associating the first captured image information and the second captured image information.
In step S24, the identity information of the agent corresponding to the abnormal event is determined from the target face image.
In the embodiment of the disclosure, a server receives event information including a target area where an abnormal event occurs and first snapshot image information corresponding to the abnormal event, and acquires second snapshot image information corresponding to the abnormal event from a local view field detection terminal corresponding to the target area, and associates the first snapshot image information obtained in the panoramic view field and the second snapshot image information obtained in the local close-up view field, so as to determine a target face image corresponding to the abnormal event, and further, according to the target face image, identity information of an agent corresponding to the abnormal event can be quickly determined, thereby effectively implementing identity positioning of the agent of the abnormal event, and improving detection and processing efficiency of the abnormal event.
The abnormal event may be an illegal violation event (for example, running a red light, or taking an electric vehicle into a corridor or an elevator), an uncivil event (for example, fighting, or spitting in public), or another type of abnormal event; the present disclosure does not specifically limit this.
The server 30 is provided with a preset association algorithm, and has strong computing power. The server 30 has a data link with the panoramic view detection terminal 10 and the local view detection terminal 20, respectively, for data transmission.
Still taking the above example 1 as an example, as shown in fig. 1, there is a data link A between the server 30 and the panoramic view field detection terminal 10, and a data link B between the server 30 and the local view field detection terminal 20. Data link A may be used to transmit the event information, and data link B may be used to transmit the second snapshot image information.
In an example, the data link may transmit data over a network, based on a transmission protocol, or in another transmission manner; the present disclosure does not specifically limit this.
The transmission protocol includes, but is not limited to, hypertext transfer protocol secure (HTTPS), Google remote procedure call (gRPC), message queuing telemetry transport (MQTT), and the like, which the present disclosure does not limit.
Based on the above data links, the server 30 may receive the event information sent by the panoramic view field detection terminal 10 and acquire the second snapshot image information from the local view field detection terminal 20.
In one possible implementation manner, the event detection method further includes: and sending an acquisition instruction to the local view field detection terminal, wherein the acquisition instruction is used for indicating that an abnormal event occurs in the target area.
Since direct communication between the panoramic view field detection terminal 10 and the local view field detection terminal 20 is complicated, the server 30, after receiving the event information sent by the panoramic view field detection terminal 10, determines that an abnormal event has occurred in the target area.
At this time, the server 30 generates an acquisition instruction according to the target area, and sends the acquisition instruction to the local view field detection terminal 20 with the local close-up view field as the target area, so that the local view field detection terminal 20 starts an image acquisition and image processing function in response to the acquisition instruction, determines second snapshot image information corresponding to the abnormal event, and sends the second snapshot image information to the server 30.
In an example, the local visual field detection terminal 20 with the local close-up visual field as the target area is started, and in a manner of determining the second snapshot image information corresponding to the abnormal event, a relay device may be further added between the panoramic visual field detection terminal 10 and the local visual field detection terminal 20.
After the panoramic view field detection terminal 10 detects that an abnormal event occurs in the target area, an event occurrence instruction is generated according to the target area and sent to the relay device, and then the relay device forwards the event occurrence instruction to the local view field detection terminal 20 with the local close-up view field as the target area, so as to start the local view field detection terminal 20 to determine second snapshot image information corresponding to the abnormal event.
The specific form of the transfer device may be determined according to actual conditions, and the disclosure does not specifically limit this.
After receiving the event information sent by the panoramic view field detection terminal 10 and acquiring the second snapshot image information from the local view field detection terminal 20, the server 30 performs association processing on the first snapshot image information included in the event information and the second snapshot image information, based on a preset association algorithm.
The first snapshot image information may include a panoramic human body snapshot corresponding to the abnormal event collected by the panoramic view field detection terminal 10. Hereinafter, the process by which the panoramic view field detection terminal 10 determines the first snapshot image information will be described in detail with reference to possible implementation manners of the present disclosure, and details are not repeated here.
The second snapshot image information may include a close-up human body snapshot, a human face snapshot, and an association relationship between the two, which are acquired by the local field-of-view detection terminal 20 and appear in the target area. The incidence relation is used for indicating whether the close-up human body snapshot image and the human face snapshot image correspond to the same person or not.
For example, the second snapshot image information includes a close-up human body snapshot image a and a face snapshot image a, and an association relationship exists between the close-up human body snapshot image a and the face snapshot image a, so that it can be determined that the close-up human body snapshot image a and the face snapshot image a are obtained after the same person is subjected to image acquisition.
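The association relationship can be represented, for example, as a simple mapping from close-up human body snapshots to face snapshots of the same person; the structure below is an illustrative assumption, not a format defined by the disclosure:

```python
# Second snapshot image information as captured by a local terminal:
# close-up body snapshots, face snapshots, and the association between them.
second_snapshot_info = {
    "body_snapshots": ["body_a.jpg"],
    "face_snapshots": ["face_a.jpg"],
    # Association: close-up body snapshot -> face snapshot of the same person.
    "associations": {"body_a.jpg": "face_a.jpg"},
}

def associated_face(info, body_snapshot):
    """Return the face snapshot that has an association relationship with
    the given close-up body snapshot, or None when no such association
    exists in the second snapshot image information."""
    return info["associations"].get(body_snapshot)
```

A `None` result corresponds to the case, handled in a later implementation, where the second snapshot image information contains no face snapshot associated with the close-up human body snapshot.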
Hereinafter, a process of determining the second snapshot image information by the local view detection terminal 20 will be described in detail with reference to possible implementation manners of the present disclosure, and details thereof are not described herein.
In a possible implementation manner, the first snapshot image information includes a panoramic human body snapshot, and the second snapshot image information includes a close-up human body snapshot; determining a target face image corresponding to the abnormal event by associating the first snapshot image information with the second snapshot image information, wherein the method comprises the following steps: carrying out feature extraction on the panoramic human body snapshot to obtain panoramic human body features, and carrying out feature extraction on the close-up human body snapshot to obtain close-up human body features; carrying out feature matching on the panoramic human body features and the close-up human body features; under the condition that the panoramic human body features and the close-up human body features are successfully matched, determining whether the second snapshot image information comprises a human face snapshot image which is associated with the close-up human body snapshot image; and under the condition that the second snapshot image information comprises the face snapshot image which is associated with the close-up human body snapshot image, determining the face snapshot image which is associated with the close-up human body snapshot image as the target face image.
The first snapshot image information comprises a panoramic human body snapshot image acquired by panoramic view field detection equipment, and the second snapshot image information comprises a close-up human body snapshot image acquired by local view field detection equipment.
The server 30 performs feature matching on the panoramic human body features of the panoramic human body snap shot and the close-up human body features of the close-up human body snap shot.
In one possible implementation manner, the event detection method further includes: determining whether the abnormal event is ended or not under the condition that the matching of the panoramic human body features and the close-up human body features fails; and under the condition that the abnormal event is not finished, controlling the panoramic view field detection terminal to update the first snapshot image information, and/or controlling the local view field detection terminal to update the second snapshot image information.
After the server 30 performs feature matching on the panoramic human body features of the panoramic human body snapshot and the close-up human body features of the close-up human body snapshot, if the matching fails, it may be determined that the close-up human body snapshot of the agent corresponding to the abnormal event has not yet been acquired by the local view field detection terminal 20, or that the quality of the panoramic human body snapshot acquired by the panoramic view field detection terminal 10 is poor.
At this time, the server 30 determines whether the abnormal event has ended. If not, the server 30 controls the panoramic view field detection terminal 10 to update the first snapshot image information and/or controls the local view field detection terminal 20 to update the second snapshot image information, so as to update the panoramic human body snapshot and/or the close-up human body snapshot. The human body feature matching step described above is then performed again with the updated snapshots, until the matching succeeds or the abnormal event ends.
Under the condition that the panoramic human body features and the close-up human body features are successfully matched, the server 30 can determine that the close-up human body snapshot of the agent corresponding to the abnormal event has been acquired by the local view field detection terminal 20.
At this time, the server 30 further determines whether the second snapshot image information acquired from the local view field detection terminal 20 includes a face snapshot having an association relationship with the close-up human body snapshot.
In one possible implementation manner, the event detection method further includes: determining whether the abnormal event is ended or not under the condition that the second snapshot image information does not include a face snapshot image which has an association relation with the close-up human body snapshot image; and under the condition that the abnormal event is not finished, controlling the local view field detection terminal to update the second snapshot image information.
In the case where the second snapshot image information does not include a face snapshot having an association relationship with the close-up human body snapshot, it may be determined that the face snapshot of the agent corresponding to the abnormal event has not yet been acquired by the local view field detection terminal 20.
At this time, the server 30 determines whether the abnormal event has ended. If not, the server 30 controls the local view field detection terminal 20 to update the second snapshot image information so as to update the close-up human body snapshot, and performs the above human body feature matching and association-determination steps again with the updated close-up human body snapshot, until a face snapshot having an association relationship is determined, or the abnormal event ends.
When the server 30 determines that the second snapshot image information includes a face snapshot having an association relationship with the close-up human body snapshot, it may be determined that the target face image of the agent corresponding to the abnormal event has been acquired by the local view field detection terminal 20.
For example, if the close-up human body feature successfully matched with the panoramic human body feature corresponds to close-up human body snapshot B included in the second snapshot image information, and the second snapshot image information includes a face snapshot associated with close-up human body snapshot B, it may be determined that that face snapshot is the target face image of the agent corresponding to the abnormal event.
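The association lookup in this example amounts to following the body-to-face link recorded by the local view field detection terminal. A minimal hypothetical sketch (the data-structure choice of dictionaries is an assumption):

```python
def find_target_face(matched_body_id, associations, face_snapshots):
    """Look up the face snapshot associated with a matched close-up body snapshot.

    associations: dict mapping close-up body snapshot IDs to face snapshot IDs,
                  as established by the local terminal for the same person.
    face_snapshots: dict mapping face snapshot IDs to face images.
    Returns the associated face snapshot, or None if no association exists yet.
    """
    face_id = associations.get(matched_body_id)
    if face_id is None:
        return None
    return face_snapshots.get(face_id)
```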
At this time, the server 30 may determine the identity information of the agent corresponding to the abnormal event according to the determined target face image.
In one possible implementation manner, determining, according to a target face image, identity information of an agent corresponding to an abnormal event includes: extracting the features of the target face image to obtain the target face features; performing feature matching on the target face features and a plurality of reference face features in a target library, wherein each reference face feature in the target library corresponds to identity information; and under the condition that the target library comprises the reference face features successfully matched with the target face features, determining the identity information corresponding to the successfully matched reference face features as the identity information of the agent.
In one example, the target library may be constructed from the resident persons in a preset target geographic area and their identity information. For example, face images of the resident persons in the preset target geographic area are collected, feature extraction is performed on the face images to obtain reference face features, and the target library is then constructed from the reference face features and the identity information of the resident person corresponding to each reference face feature.
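The library construction described above can be sketched as follows. This is an illustrative assumption only: the record layout and the injected `extract_feature` function are hypothetical, standing in for whatever face embedding model the deployment uses:

```python
def build_target_library(residents, extract_feature):
    """Build a target library of (reference face feature, identity) entries.

    residents: iterable of dicts with a "face_image" and an "identity" field.
    extract_feature: callable producing a reference face feature from an image.
    """
    library = []
    for person in residents:
        feature = extract_feature(person["face_image"])
        library.append({"feature": feature, "identity": person["identity"]})
    return library
```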
The preset target geographic area may be a geographic area where the panoramic view field detection terminal 10 is located, or may be another preset geographic area, which is not specifically limited by the present disclosure.
After the server 30 determines the target face image corresponding to the abnormal event, it may determine, by using a face recognition algorithm, whether the target library includes a reference face feature that successfully matches the target face feature corresponding to the target face image.
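Matching the target face feature against the library can be sketched as a best-score search over the reference entries. The similarity function and threshold below are hypothetical placeholders for the face recognition algorithm actually deployed:

```python
def identify_agent(target_face_feature, target_library, similarity, threshold=0.75):
    """Return the identity whose reference feature best matches, or None.

    target_library: entries of the form {"feature": ..., "identity": ...}.
    similarity: callable scoring two face features; higher means more similar.
    threshold: minimum score for a match to count as "successful" (assumed).
    """
    best_identity, best_score = None, threshold
    for entry in target_library:
        score = similarity(target_face_feature, entry["feature"])
        if score >= best_score:
            best_identity, best_score = entry["identity"], score
    return best_identity
```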
In one possible implementation manner, the event detection method further includes: determining whether the abnormal event is ended or not under the condition that the target library does not include the reference face features successfully matched with the target face features; and under the condition that the abnormal event is not finished, controlling the panoramic view field detection terminal to update the first snapshot image information, and/or controlling the local view field detection terminal to update the second snapshot image information.
If the target library does not include the reference face features successfully matched with the target face features, the quality of the target face image acquired by the local view field detection terminal 20 may be poor.
At this time, the server 30 determines whether the abnormal event has ended. If not, the server 30 controls the local view field detection terminal 20 to update the second snapshot image information so as to update the close-up human body snapshot and the face snapshot, and performs the above steps of human body feature matching, target face image determination, and face feature matching again with the updated images, until a reference face feature that successfully matches the target face feature is determined, or the abnormal event ends.
If the target library comprises the reference face features successfully matched with the target face features, the identity information corresponding to the successfully matched reference face features can be determined as the identity information of the agent corresponding to the abnormal event, and therefore identity positioning of the agent corresponding to the abnormal event is achieved.
In one possible implementation manner, the event detection method further includes: and generating first event warning information according to the event information, the second snapshot image information and the identity information of the agent.
After determining the identity information of the agent corresponding to the abnormal event, the server 30 generates first event warning information according to the event information, the second snapshot image information, and the identity information of the agent.
The server 30 may also send the first event warning information to the relevant management department, so that the department can quickly locate the agent corresponding to the abnormal event according to the agent's identity information and then remind, stop, or correspondingly penalize the abnormal event in the target area, thereby effectively improving event detection efficiency and law enforcement efficiency.
In addition, the server 30 may further store the first event warning information as an evidence chain corresponding to the abnormal event, so as to facilitate subsequent query.
In one example, the server 30 determines whether the exception event is over, including: determining whether an event ending instruction sent by the panoramic view field detection terminal 10 is received; in the case where an event end instruction is received, it is determined that the abnormal event has ended.
As shown in the above embodiment, after the panoramic view field detection terminal 10 detects that an abnormal event occurs in the target area, an event occurrence instruction is generated according to the target area, and the event occurrence instruction is sent to the server 30 to prompt the server 30 to receive event information to be sent subsequently by the panoramic view field detection terminal 10.
The panoramic view field detection terminal 10 continuously performs event detection; while the abnormal event has not ended, it updates the event information according to a first preset period and sends it to the server 30. The first preset period may be, for example, 10 seconds, in which case the event information is updated every 10 seconds. The specific value of the first preset period can be flexibly set according to the actual situation, which is not specifically limited in the present disclosure.
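The periodic-update behavior can be sketched as a timed loop. This is a hypothetical illustration with assumed callback names; a real terminal would run this alongside continuous event detection:

```python
import time

def periodic_update(period_seconds, event_ended, collect_event_info, send_to_server):
    """Send refreshed event information every period until the event ends.

    period_seconds: the preset update period (e.g. 10 s for event information).
    event_ended: callable returning True once the abnormal event has ended.
    """
    while not event_ended():
        send_to_server(collect_event_info())  # push the latest event information
        time.sleep(period_seconds)            # wait out the preset period
```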
In the case where the panoramic field detection terminal 10 detects that the abnormal event has ended, an event end instruction is generated and sent to the server 30 to prompt the server 30 that the abnormal event has ended.
In one possible implementation manner, the event detection method further includes: and sending a stop instruction to the local visual field detection terminal, wherein the stop instruction is used for indicating that the abnormal event is ended.
As shown in the above embodiment, the server 30 sends an acquisition instruction to the local view field detection terminal 20 to control the local view field detection terminal 20 to start the image acquisition and image processing functions in response to the acquisition instruction, and to determine the second snapshot image information corresponding to the abnormal event.
After determining, according to the event ending instruction, that the abnormal event has ended, the server 30 may generate a stop instruction and send it to the local view field detection terminal 20 to notify the local view field detection terminal 20 that the abnormal event has ended, so that it stops the image capturing and image processing functions.
Before receiving the stop instruction, the local view field detection terminal 20 updates the second snapshot image information according to a second preset period and sends it to the server 30. The second preset period may be, for example, 5 seconds, in which case the second snapshot image information is updated every 5 seconds. The specific value of the second preset period can be flexibly set according to the actual situation, which is not specifically limited in the present disclosure.
In one possible implementation manner, the event detection method further includes: and under the condition that the abnormal event is ended, generating second event alarm information according to the event information and the second snapshot image information.
The server 30 generates second event warning information according to the event information and the second snapshot image information when the identity information of the agent corresponding to the abnormal event is not determined yet and the abnormal event is determined to have ended.
The server 30 may also send the second event warning information to the relevant management department, so that the department can remind, stop, or correspondingly penalize the abnormal event in the target area, thereby effectively improving event detection efficiency and law enforcement efficiency.
In addition, the server 30 may further store the second event warning information as an evidence chain corresponding to the abnormal event, so as to facilitate subsequent query.
Fig. 3 illustrates a workflow diagram of a server according to an embodiment of the present disclosure. As shown in fig. 3, the workflow of the server may include:
in step S31, event information including a target area where an abnormal event occurs and first captured image information corresponding to the abnormal event, which is sent by the panoramic field-of-view detection terminal, is received, and second captured image information corresponding to the abnormal event is acquired from the local field-of-view detection terminal corresponding to the target area.
In step S32, feature extraction is performed on the panoramic human body snapshot included in the first snapshot image information to obtain panoramic human body features, and feature extraction is performed on the close-up human body snapshot included in the second snapshot image information to obtain close-up human body features.
In step S33, it is determined whether the panoramic body feature and the close-up body feature match successfully. If the matching is successful, jumping to execute step S34; if the matching fails, the process goes to step S38.
In step S34, it is determined whether or not the second captured image information includes a target face image having an association relationship with the close-up human snap. If yes, jumping to execute step S35; if not, the process goes to step S38.
In step S35, feature extraction is performed on the target face image to obtain target face features.
In step S36, it is determined whether a reference facial feature that matches the target facial feature successfully is included in the target library. If yes, jumping to execute step S37; if not, the process goes to step S38.
In step S37, the identity information corresponding to the successfully matched reference face features is determined as the identity information of the agent corresponding to the abnormal event, and first event warning information is generated according to the event information, the second snapshot image information, and the identity information of the agent.
In step S38, it is determined whether the abnormal event is ended. If the abnormal event is not ended, jumping to execute step S31; if the abnormal event has ended, the process goes to step S39.
In step S39, second event warning information is generated based on the event information and the second snap image information.
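The server workflow of steps S31 through S39 can be sketched as a single control loop. This is a simplified hypothetical rendering; all parameter names are assumptions, and the real server would interleave the snapshot-update requests described earlier:

```python
def server_workflow(receive_info, match_bodies, find_face, extract_face_feature,
                    match_library, event_ended):
    """Simplified pass over the Fig. 3 workflow (steps S31-S39)."""
    while True:
        event_info, second_info = receive_info()                 # S31
        body_match = match_bodies(event_info, second_info)       # S32-S33
        if body_match is not None:
            face = find_face(body_match, second_info)            # S34
            if face is not None:
                feature = extract_face_feature(face)             # S35
                identity = match_library(feature)                # S36
                if identity is not None:
                    return ("first_alert", identity)             # S37
        if event_ended():                                        # S38
            return ("second_alert", None)                        # S39
```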
FIG. 4 shows a flow diagram of a method of event detection according to an embodiment of the present disclosure. The event detection method is applied to the panoramic field detection terminal 10 shown in fig. 1. As shown in fig. 4, the event detection method may include:
in step S41, an event detection is performed on the video stream, and an event detection result indicating whether an abnormal event occurs is determined.
In step S42, in a case that the event detection result is used to indicate that an abnormal event occurs, event information is determined, where the event information includes a target area where the abnormal event occurs and first snapshot image information corresponding to the abnormal event, and the first snapshot image information includes a panoramic human snapshot.
In step S43, the event information is transmitted to the server.
The panoramic view field detection terminal 10 is an image capture device with computing capability, with a built-in image detection module and a built-in event detection algorithm.
The panoramic view field detection terminal 10 collects video streams within a view field range, and further, based on an event detection algorithm, can detect events of the video streams to determine whether abnormal events occur.
In the case where it is determined that an abnormal event occurs, the target area where the abnormal event occurs and the first snapshot image information corresponding to the abnormal event may be determined based on the image detection module. Event information is then generated based on the target area where the abnormal event occurs and the first snapshot image information corresponding to the abnormal event.
The panoramic view detection terminal 10 sends the event information to the server 30, so that the server 30 performs subsequent identity positioning.
In one possible implementation manner, the event detection method further includes: determining whether the abnormal event is ended; under the condition that the abnormal event is not finished, updating the first snapshot image information; and sending the updated first snapshot image information to a server.
The manner in which the panoramic view field detection terminal 10 determines whether the abnormal event is ended or not and the manner in which the first snapshot image information is updated may refer to the detailed description of the above embodiments, and are not described herein again.
Fig. 5 shows a workflow diagram of a panoramic view field detection terminal according to an embodiment of the present disclosure. As shown in fig. 5, the workflow of the panoramic view field detection terminal may include:
in step S51, event detection is performed on the video stream.
In step S52, it is determined whether an abnormal event has occurred. If the abnormal event is determined to occur, jumping to execute step S53; if it is determined that the abnormal event does not occur, the process skips to perform step S51.
In step S53, a target area where an abnormal event occurs and first captured image information corresponding to the abnormal event are determined.
In step S54, event information is determined based on the target area where the abnormal event occurs and the first snapshot image information corresponding to the abnormal event, and the event information is sent to the server.
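One detection cycle of the panoramic terminal (steps S51 through S54) can be sketched as follows. The function parameters are hypothetical stand-ins for the terminal's built-in event detection algorithm and image detection module:

```python
def panoramic_terminal_step(frame, detect_event, locate_event, capture_snapshot, send):
    """One detection cycle of the panoramic terminal (Fig. 5, steps S51-S54)."""
    if not detect_event(frame):          # S51-S52: no abnormal event, keep watching
        return None
    target_area = locate_event(frame)    # S53: locate the area where it occurred
    first_info = capture_snapshot(frame, target_area)
    event_info = {"target_area": target_area, "first_snapshot": first_info}
    send(event_info)                     # S54: report the event to the server
    return event_info
```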
FIG. 6 shows a flow diagram of a method of event detection according to an embodiment of the present disclosure. The event detection method is applied to the local view field detection terminal 20 shown in fig. 1, and the local view field detection terminal 20 corresponds to a target area. As shown in fig. 6, the event detection method may include:
in step S61, when an acquisition instruction sent by the server is received, the video stream is subjected to human body detection to obtain a close-up human body snapshot, and the video stream is subjected to human face detection to obtain a human face snapshot, where the acquisition instruction is used to indicate that an abnormal event occurs in the target area.
In step S62, an association is established between the close-up human-body snapshot and the human-face snapshot corresponding to the same person.
In step S63, second captured image information is determined from the close-up human body captured image, the face captured image, and the association relationship.
In step S64, the second captured image information is transmitted to the server.
The specific process by which the local view field detection terminal 20 receives the acquisition instruction sent by the server 30 may refer to the detailed description of the above embodiments, and is not described herein again.
The local view field detection terminal 20 is an image acquisition device with computing capability, with a built-in image detection module and a built-in human face and human body association module.
The local view field detection terminal 20 collects video streams within the view field range according to the acquisition instruction, and further, based on the image detection module, performs human body detection on the video streams to obtain close-up human body snapshot images, and performs human face detection on the video streams to obtain human face snapshot images.
Based on the human face and human body association module, an association relationship is established between the close-up human body snapshot and the face snapshot corresponding to the same person. Further, the second snapshot image information is determined from the close-up human body snapshot, the face snapshot, and the association relationship, and is sent to the server 30.
In one possible implementation manner, the event detection method further includes: under the condition that a stop instruction sent by the server is not received, updating the second snapshot image information, wherein the stop instruction is used for indicating that the abnormal event is ended; and sending the updated second snapshot image information to the server 30.
The manner in which the local view field detection terminal 20 receives the stop instruction sent by the server 30 and the manner in which the second snapshot image information is updated may refer to the detailed description of the above embodiments, and are not described herein again.
Fig. 7 shows a work flow diagram of a local view field detection terminal according to an embodiment of the present disclosure. As shown in fig. 7, the workflow of the local visual field detection terminal may include:
in step S71, when the acquisition instruction transmitted by the server is received, it is determined that an abnormal event has occurred in the target area.
In step S72, a video stream is acquired by image capturing the target area.
In step S73, human body detection is performed on the video stream to obtain a close-up human body snapshot, and human face detection is performed on the video stream to obtain a human face snapshot.
In step S74, an association is established between the close-up human-body snapshot and the human-face snapshot corresponding to the same person.
In step S75, second captured image information is determined from the close-up human body captured image, the human face captured image, and the association relationship, and the second captured image information is transmitted to the server.
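One cycle of the local terminal (steps S71 through S75) can be sketched as below. The detection and association callables are hypothetical placeholders for the terminal's built-in image detection module and human face and human body association module:

```python
def local_terminal_step(frames, detect_bodies, detect_faces, associate):
    """One cycle of the local terminal (Fig. 7, steps S71-S75), simplified."""
    bodies = detect_bodies(frames)   # S73: close-up human body snapshots
    faces = detect_faces(frames)     # S73: face snapshots
    links = associate(bodies, faces) # S74: same-person association relationships
    # S75: second snapshot image information combines all three
    return {"bodies": bodies, "faces": faces, "associations": links}
```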
It is understood that the above method embodiments of the present disclosure can be combined with each other to form combined embodiments without departing from the principles and logic; for brevity, details are not described again in the present disclosure. Those skilled in the art will appreciate that, in the above methods of the specific embodiments, the specific order of execution of the steps should be determined by their functions and possible inherent logic.
In addition, the present disclosure also provides an event detection apparatus, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any event detection method provided by the present disclosure; for the corresponding technical solutions, refer to the corresponding descriptions in the method section, which are not repeated here.
Fig. 8 shows a block diagram of an event detection device according to an embodiment of the present disclosure. The apparatus is applied to a server, and as shown in fig. 8, the apparatus 80 includes:
the receiving module 81 is configured to receive event information sent by the panoramic view field detection terminal, where the event information includes a target area where an abnormal event occurs and first snapshot image information corresponding to the abnormal event;
an obtaining module 82, configured to obtain second snapshot image information corresponding to the abnormal event from a local view field detection terminal corresponding to the target area;
the association module 83 is configured to determine a target face image corresponding to the abnormal event by associating the first snapshot image information with the second snapshot image information;
and the identity information determining module 84 is configured to determine identity information of an agent corresponding to the abnormal event according to the target face image.
In a possible implementation manner, the first snapshot image information includes a panoramic human body snapshot, and the second snapshot image information includes a close-up human body snapshot;
the association module 83 is specifically configured to:
carrying out feature extraction on the panoramic human body snapshot to obtain panoramic human body features, and carrying out feature extraction on the close-up human body snapshot to obtain close-up human body features;
carrying out feature matching on the panoramic human body features and the close-up human body features;
under the condition that the panoramic human body features and the close-up human body features are successfully matched, determining whether the second snapshot image information comprises a human face snapshot image which is associated with the close-up human body snapshot image;
and under the condition that the second snapshot image information comprises the face snapshot image which is associated with the close-up human body snapshot image, determining the face snapshot image which is associated with the close-up human body snapshot image as the target face image.
In one possible implementation, the identity information determining module 84 is specifically configured to:
extracting the features of the target face image to obtain the target face features;
performing feature matching on the target face features and a plurality of reference face features in a target library, wherein each reference face feature in the target library corresponds to identity information;
and under the condition that the target library comprises the reference face features successfully matched with the target face features, determining the identity information corresponding to the successfully matched reference face features as the identity information of the agent.
In one possible implementation, the apparatus 80 further includes:
and the first generation module is used for generating first event alarm information according to the event information, the second snapshot image information and the identity information of the agent.
In one possible implementation, the apparatus 80 further includes:
the determining module is used for determining whether the abnormal event is ended or not under the condition that the target library does not include the reference face feature successfully matched with the target face feature;
and the control module is used for controlling the local view field detection terminal to update the second snapshot image information under the condition that the abnormal event is not finished.
In a possible implementation manner, the determining module is further configured to determine whether the abnormal event is ended in a case that matching of the panoramic human body features and the close-up human body features fails;
and the control module is also used for controlling the panoramic view field detection terminal to update the first snapshot image information and/or controlling the local view field detection terminal to update the second snapshot image information under the condition that the abnormal event is not finished.
In a possible implementation manner, the determining module is further configured to determine whether the abnormal event is ended in a case that the second snapshot image information does not include a face snapshot having an association relationship with the close-up human body snapshot;
and the control module is also used for controlling the local view field detection terminal to update the second snapshot image information under the condition that the abnormal event is not finished.
In one possible implementation, the apparatus 80 further includes:
and the second generation module is used for generating second event alarm information according to the event information and the second snapshot image information under the condition that the abnormal event is ended.
In one possible implementation, the apparatus 80 further includes:
the sending module is used for sending an acquisition instruction to the local view field detection terminal, wherein the acquisition instruction is used for indicating that the abnormal event occurs in the target area; and/or,
and the sending module is further used for sending a stop instruction to the local view field detection terminal, wherein the stop instruction is used for indicating that the abnormal event is ended.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a volatile or non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
The disclosed embodiments also provide a computer program product comprising computer readable code or a non-transitory computer readable storage medium carrying computer readable code, which when run in a processor of an electronic device, the processor in the electronic device performs the above method.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 9 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure. As shown in fig. 9, the electronic device 900 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, or another similar terminal.
Referring to fig. 9, electronic device 900 may include one or more of the following components: processing component 902, memory 904, power component 906, multimedia component 908, audio component 910, input/output (I/O) interface 912, sensor component 914, and communication component 916.
The processing component 902 generally controls overall operation of the electronic device 900, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. Processing component 902 may include one or more processors 920 to execute instructions to perform all or a portion of the steps of the methods described above. Further, processing component 902 can include one or more modules that facilitate interaction between processing component 902 and other components. For example, the processing component 902 can include a multimedia module to facilitate interaction between the multimedia component 908 and the processing component 902.
The memory 904 is configured to store various types of data to support operation at the electronic device 900. Examples of such data include instructions for any application or method operating on the electronic device 900, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 904 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power component 906 provides power to the various components of the electronic device 900. The power component 906 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 900.
The multimedia component 908 includes a screen that provides an output interface between the electronic device 900 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touch, slide, and gesture actions on the touch panel. The touch sensors may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 908 includes a front-facing camera and/or a rear-facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 900 is in an operating mode, such as a shooting mode or a video mode. Each of the front camera and the rear camera may be a fixed optical lens system or have focus and optical zoom capability.
The audio component 910 is configured to output and/or input audio signals. For example, the audio component 910 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 900 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 904 or transmitted via the communication component 916. In some embodiments, audio component 910 also includes a speaker for outputting audio signals.
I/O interface 912 provides an interface between processing component 902 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 914 includes one or more sensors for providing status assessments of various aspects of the electronic device 900. For example, the sensor component 914 may detect an open/closed state of the electronic device 900 and the relative positioning of components, such as the display and keypad of the electronic device 900; it may also detect a change in the position of the electronic device 900 or of one of its components, the presence or absence of user contact with the electronic device 900, the orientation or acceleration/deceleration of the electronic device 900, and a change in the temperature of the electronic device 900. The sensor component 914 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor component 914 may also include a light sensor, such as a Complementary Metal Oxide Semiconductor (CMOS) or Charge Coupled Device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor component 914 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 916 is configured to facilitate wired or wireless communication between the electronic device 900 and other devices. The electronic device 900 may access a wireless network based on a communication standard, such as wireless fidelity (Wi-Fi), second-generation (2G), third-generation (3G), fourth-generation (4G), or fifth-generation (5G) mobile communication technology, Long Term Evolution (LTE), or a combination thereof. In an exemplary embodiment, the communication component 916 receives a broadcast signal or broadcast-associated information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 916 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 900 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 904, is also provided, including computer program instructions executable by the processor 920 of the electronic device 900 to perform the above-described methods.
The present disclosure relates to the field of augmented reality. By acquiring image information of a target object in a real environment, relevant features, states, and attributes of the target object are detected or identified by means of various vision-related algorithms, so as to obtain an AR effect that combines the virtual and the real and matches a specific application. For example, the target object may involve a face, limb, gesture, or action associated with a human body, or a marker or sign associated with an object, or a sand table, display area, or display item associated with a venue or place. The vision-related algorithms may involve visual localization, SLAM, three-dimensional reconstruction, image registration, background segmentation, key-point extraction and tracking of objects, pose or depth detection of objects, and the like. The specific application may involve not only interactive scenarios related to real scenes or articles, such as navigation, explanation, reconstruction, and virtual-effect overlay display, but also special-effect processing related to people, such as interactive scenarios including makeup beautification, body beautification, special-effect display, and virtual model display. The detection or identification of the relevant features, states, and attributes of the target object can be realized through a convolutional neural network, which is a network model obtained by model training based on a deep learning framework.
Fig. 10 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure. As shown in fig. 10, the electronic device 1900 may be provided as a server. Referring to fig. 10, the electronic device 1900 includes a processing component 1922, which further includes one or more processors, and memory resources, represented by memory 1932, for storing instructions (e.g., application programs) executable by the processing component 1922. The application programs stored in the memory 1932 may include one or more modules, each of which corresponds to a set of instructions. Further, the processing component 1922 is configured to execute the instructions to perform the above-described method.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the remote-computer case, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), can be personalized by utilizing state information of the computer-readable program instructions, and this electronic circuitry can execute the computer-readable program instructions to implement aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be embodied in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, the computer program product is embodied in a software product, such as a Software Development Kit (SDK).
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (13)

1. An event detection method, applied to a server, includes:
receiving event information sent by a panoramic view field detection terminal, wherein the event information comprises a target area where an abnormal event occurs and first snapshot image information corresponding to the abnormal event;
acquiring second snapshot image information corresponding to the abnormal event from a local view field detection terminal corresponding to the target area;
determining a target face image corresponding to the abnormal event by associating the first snapshot image information with the second snapshot image information;
and determining the identity information of the agent corresponding to the abnormal event according to the target face image.
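As a reading aid (and not part of the claims), the four steps of claim 1 can be sketched as a server-side handler. Everything below is a hypothetical illustration: the callables, dictionary keys, and return values are stand-ins for the terminals and the face library, not names from the disclosure.

```python
def handle_event(event_info, get_local_snapshot, associate_snapshots, lookup_identity):
    """Sketch of the claim-1 flow; every callable is a hypothetical stand-in."""
    # Step 1: event info from the panoramic view field detection terminal
    # carries the target area and the first (panoramic) snapshot image info.
    target_area = event_info["target_area"]
    first_snapshot = event_info["first_snapshot"]
    # Step 2: acquire second snapshot info from the local view field
    # detection terminal that covers the target area.
    second_snapshot = get_local_snapshot(target_area)
    # Step 3: associate the two snapshot image infos to pick out the
    # target face image (None when no association can be made).
    face_image = associate_snapshots(first_snapshot, second_snapshot)
    if face_image is None:
        return None
    # Step 4: resolve the agent's identity from the target face image.
    return lookup_identity(face_image)
```

In use, a caller would wire in the real terminal clients and face-library lookup in place of the stand-in callables.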
2. The method according to claim 1, wherein the first snapshot image information includes a panoramic human body snapshot image, and the second snapshot image information includes a close-up human body snapshot image;
the determining the target face image corresponding to the abnormal event by associating the first snapshot image information with the second snapshot image information includes:
performing feature extraction on the panoramic human body snapshot image to obtain panoramic human body features, and performing feature extraction on the close-up human body snapshot image to obtain close-up human body features;
performing feature matching on the panoramic human body features and the close-up human body features;
under the condition that the panoramic human body features and the close-up human body features are successfully matched, determining whether a face snapshot image which is associated with the close-up human body snapshot image exists in the second snapshot image information or not;
and under the condition that the second snapshot image information comprises the face snapshot image which is associated with the close-up human body snapshot image, determining the face snapshot image which is associated with the close-up human body snapshot image as the target face image.
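The association logic of claim 2 amounts to body-feature matching followed by a face-link lookup. The following is a minimal sketch, not the claimed implementation: cosine similarity, the 0.8 threshold, and the `face_links` mapping are all assumptions introduced for illustration.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def find_target_face(panoramic_feature, closeup_feature, face_links,
                     closeup_id, threshold=0.8):
    """Claim-2 association sketch: match the panoramic body feature against
    the close-up body feature; on success, return the face snapshot linked
    to the close-up body snapshot, or None when no linked face exists."""
    if cosine_similarity(panoramic_feature, closeup_feature) < threshold:
        return None  # body features failed to match
    return face_links.get(closeup_id)
```

The `face_links` dictionary stands in for whatever association relationship the local view field detection terminal records between body and face snapshots.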
3. The method according to claim 1 or 2, wherein the determining, according to the target face image, the identity information of the agent corresponding to the abnormal event comprises:
extracting the features of the target face image to obtain the target face features;
performing feature matching on the target face features and a plurality of reference face features in a target library, wherein each reference face feature in the target library corresponds to identity information;
and under the condition that the target library comprises the reference face features successfully matched with the target face features, determining the identity information corresponding to the successfully matched reference face features as the identity information of the agent.
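Claim 3's identity lookup reduces to nearest-neighbour matching of the target face feature over the target library. This sketch again assumes cosine similarity and an illustrative 0.75 threshold; the disclosure does not specify either.

```python
import math

def identify_agent(target_feature, target_library, threshold=0.75):
    """Claim-3 sketch: compare the target face feature against every
    reference feature in the target library (identity -> feature) and
    return the best-matching identity at or above threshold, else None."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) *
                      math.sqrt(sum(y * y for y in b)))
    best_identity, best_score = None, threshold
    for identity, reference in target_library.items():
        score = cos(target_feature, reference)
        if score >= best_score:
            best_identity, best_score = identity, score
    return best_identity
```

Returning None when no reference clears the threshold corresponds to the "not successfully matched" branch handled by claim 5.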
4. The method according to any one of claims 1 to 3, further comprising:
and generating first event warning information according to the event information, the second snapshot image information and the identity information of the agent.
5. The method of claim 3, further comprising:
determining whether the abnormal event is ended or not under the condition that the target library does not contain the reference human face features successfully matched with the target human face features;
and under the condition that the abnormal event is not finished, controlling the local view field detection terminal to update the second snapshot image information.
6. The method of claim 2, further comprising:
in the event that the panoramic and close-up body features fail to match, determining whether the abnormal event is over;
and under the condition that the abnormal event is not finished, controlling the panoramic view field detection terminal to update the first snapshot image information, and/or controlling the local view field detection terminal to update the second snapshot image information.
7. The method of claim 2, further comprising:
determining whether the abnormal event is ended or not in the case that a face snapshot having an association relationship with the close-up human body snapshot is not included in the second snapshot image information;
and under the condition that the abnormal event is not finished, controlling the local view field detection terminal to update the second snapshot image information.
8. The method according to any one of claims 5 to 7, further comprising:
and under the condition that the abnormal event is ended, generating second event alarm information according to the event information and the second snapshot image information.
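Claims 5 through 8 share one control-flow pattern: while the abnormal event is ongoing and matching fails, refresh the snapshot information and retry; once the event ends without a match, fall back to the second event alarm. A sketch with hypothetical callables (the `max_rounds` bound is a safety addition, not part of the claims):

```python
def associate_with_retry(try_associate, is_event_over, update_snapshots,
                         max_rounds=100):
    """Sketch of the claims-5-to-8 control flow; all three callables are
    hypothetical stand-ins for the server/terminal interactions."""
    for _ in range(max_rounds):
        result = try_associate()
        if result is not None:
            return ("first_alarm", result)   # claim 4: identity determined
        if is_event_over():
            return ("second_alarm", None)    # claim 8: event ended unmatched
        update_snapshots()                   # claims 5-7: refresh and retry
    return ("second_alarm", None)
```

The two return tags mirror the first event warning information (claim 4) and second event alarm information (claim 8).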
9. The method according to any one of claims 1 to 8, further comprising:
sending an acquisition instruction to the local view field detection terminal, wherein the acquisition instruction is used for indicating that the abnormal event occurs in the target area; and/or
and sending a stop instruction to the local view field detection terminal, wherein the stop instruction is used for indicating that the abnormal event is ended.
10. An event detection device, wherein the device is applied to a server, and the device comprises:
a receiving module, configured to receive event information sent by a panoramic view field detection terminal, wherein the event information comprises a target area where an abnormal event occurs and first snapshot image information corresponding to the abnormal event;
the acquisition module is used for acquiring second snapshot image information corresponding to the abnormal event from a local view field detection terminal corresponding to the target area;
the association module is used for determining a target face image corresponding to the abnormal event by associating the first snapshot image information with the second snapshot image information;
and the identity information determining module is used for determining the identity information of the agent corresponding to the abnormal event according to the target face image.
11. An event detection system, the system comprising: the system comprises a panoramic view field detection terminal, a local view field detection terminal and a server;
the panoramic view field detection terminal sends event information to the server, wherein the event information comprises a target area where an abnormal event occurs and first snapshot image information corresponding to the abnormal event;
the server receives the event information and acquires second snapshot image information corresponding to the abnormal event from the local view field detection terminal corresponding to the target area;
the server determines a target face image corresponding to the abnormal event by associating the first snapshot image information with the second snapshot image information;
and the server determines the identity information of the agent corresponding to the abnormal event according to the target face image.
12. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the memory-stored instructions to perform the method of any of claims 1 to 9.
13. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 9.
CN202111154083.6A 2021-09-29 2021-09-29 Event detection method, device, system, electronic equipment and storage medium Pending CN113822216A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111154083.6A CN113822216A (en) 2021-09-29 2021-09-29 Event detection method, device, system, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111154083.6A CN113822216A (en) 2021-09-29 2021-09-29 Event detection method, device, system, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113822216A (en) 2021-12-21

Family

ID=78921757

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111154083.6A Pending CN113822216A (en) 2021-09-29 2021-09-29 Event detection method, device, system, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113822216A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115474005A (en) * 2022-10-28 2022-12-13 通号通信信息集团有限公司 Data processing method, data processing device, electronic apparatus, and storage medium

Similar Documents

Publication Publication Date Title
CN113011290A (en) Event detection method and device, electronic equipment and storage medium
CN109948494B (en) Image processing method and device, electronic equipment and storage medium
CN110569777B (en) Image processing method and device, electronic device and storage medium
CN111815675A (en) Target object tracking method and device, electronic equipment and storage medium
CN110532957B (en) Face recognition method and device, electronic equipment and storage medium
CN111222404A (en) Method, device and system for detecting co-pedestrian, electronic equipment and storage medium
CN113093578A (en) Control method and device, electronic equipment and storage medium
CN112945207B (en) Target positioning method and device, electronic equipment and storage medium
CN112270288A (en) Living body identification method, access control device control method, living body identification device, access control device and electronic device
CN112991553A (en) Information display method and device, electronic equipment and storage medium
CN110909203A (en) Video analysis method and device, electronic equipment and storage medium
CN111523346A (en) Image recognition method and device, electronic equipment and storage medium
CN111435422B (en) Action recognition method, control method and device, electronic equipment and storage medium
CN114581525A (en) Attitude determination method and apparatus, electronic device, and storage medium
CN113822216A (en) Event detection method, device, system, electronic equipment and storage medium
CN106896917B (en) Method and device for assisting user in experiencing virtual reality and electronic equipment
CN113011291A (en) Event detection method and device, electronic equipment and storage medium
CN110826045B (en) Authentication method and device, electronic equipment and storage medium
CN107948876B (en) Method, device and medium for controlling sound box equipment
CN109598183B (en) Face authentication method, device and system
CN110910281A (en) Hotel room-returning handling method and device based on robot
CN110544335B (en) Object recognition system and method, electronic device, and storage medium
CN112883791B (en) Object recognition method, object recognition device, and storage medium
CN105608469A (en) Image resolution determination method and device
CN113920169A (en) Target tracking method, event detection method, target tracking device, event detection device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination