CN113112722B - Event detection method, device, system and equipment - Google Patents

Event detection method, device, system and equipment

Info

Publication number
CN113112722B
CN113112722B
Authority
CN
China
Prior art keywords
camera
audio playing
monitoring
image
specified event
Prior art date
Legal status
Active
Application number
CN202110237180.5A
Other languages
Chinese (zh)
Other versions
CN113112722A (en)
Inventor
冀建成
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202110237180.5A priority Critical patent/CN113112722B/en
Publication of CN113112722A publication Critical patent/CN113112722A/en
Application granted granted Critical
Publication of CN113112722B publication Critical patent/CN113112722B/en
Legal status: Active (current)

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 13/00 Burglar, theft or intruder alarms
    • G08B 13/18 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B 13/189 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B 13/194 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B 13/196 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 3/00 Audible signalling systems; Audible personal calling systems
    • G08B 3/10 Audible signalling systems; Audible personal calling systems using electric transmission; using electromagnetic transmission
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/698 Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Electromagnetism (AREA)
  • Alarm Systems (AREA)

Abstract

The application provides an event detection method, device, system and equipment, comprising: acquiring a recognition result for recognizing at least one frame of monitoring image collected by a camera; if the recognition result indicates that the specified event is recognized from the at least one frame of monitoring image, determining a target monitoring area corresponding to the monitoring image, and sending an audio playing instruction aiming at the recognized specified event to audio playing equipment deployed in the target monitoring area so as to enable the audio playing equipment to play the voice information corresponding to the recognized specified event. By using the method provided by the application, the voice information related to the event can be automatically played after the specified event is detected so as to warn the specified event initiator, thereby facilitating the management of the monitoring area.

Description

Event detection method, device, system and equipment
Technical Field
The present application relates to the field of monitoring, and in particular, to a method, an apparatus, a system, and a device for detecting an event.
Background
In the conventional event detection mechanism, when the electronic device identifies a specified event (for example, an abnormal vehicle intruding into a certain area) from a monitoring image collected by a camera, the electronic device notifies an administrator, and the administrator goes to the scene to urge the abnormal vehicle to leave. This approach causes inconvenience to the administrator managing the monitored area.
Disclosure of Invention
In view of this, the present application provides an event detection method, apparatus, system and device, which are used to implement that after a specified event is detected, an audio playing device automatically plays voice information related to the event to warn a specified event initiator, so as to facilitate management of a monitoring area.
According to a first aspect of the present application, there is provided an event detection method, the method comprising:
acquiring a recognition result for recognizing at least one frame of monitoring image collected by a camera;
if the recognition result indicates that the specified event is recognized from the at least one frame of monitoring image, determining a target monitoring area corresponding to the monitoring image, and sending an audio playing instruction aiming at the recognized specified event to audio playing equipment deployed in the target monitoring area so as to enable the audio playing equipment to play the voice information corresponding to the recognized specified event.
Optionally, the determining the target monitoring area corresponding to the monitoring image includes:
searching a target monitoring area corresponding to the acquisition information of the camera for acquiring the at least one frame of monitoring image in the corresponding relation between the preset acquisition information and the monitoring area; the acquisition information in the corresponding relation is related information of a camera for acquiring a monitoring image;
or,
carrying out image recognition on the background of the monitoring image to obtain the target monitoring area;
or,
searching a monitoring area corresponding to the specified event identified from the at least one frame of monitoring image in a corresponding relation between the preset specified event and the monitoring area, and taking the monitoring area as a target monitoring area;
or,
and determining the image position of the identified specified event in the monitoring image, and searching the monitoring area corresponding to the determined image position as a target monitoring area in the corresponding relation between the preset image position and the monitoring area.
Optionally,
under the condition that the camera is a panoramic camera, the panoramic camera comprises a plurality of cameras, monitoring areas corresponding to the cameras are different, and the at least one frame of monitoring image is acquired by any camera in the panoramic camera; the acquisition information is an identifier of a camera for acquiring the at least one frame of monitoring image;
and under the condition that the camera is a rotatable camera, different rotation angles of the rotatable camera correspond to different monitoring areas, and the acquisition information is the rotation angle of the camera when the at least one frame of monitoring image is acquired.
Optionally, the method is applied to a central server; the central server is connected with at least one camera; the central server is connected with the audio playing equipment of each monitoring area;
the acquisition information is a camera identifier of a camera which acquires the at least one frame of monitoring image.
Optionally, the target monitoring area is deployed with a plurality of audio playing devices, and each audio playing device stores voice information of a specified event corresponding to the audio playing device;
the sending of the audio playing instruction for the identified specified event to the audio playing device deployed in the target monitoring area includes:
and selecting audio playing equipment corresponding to the identified specified event from the plurality of audio playing equipment deployed in the target monitoring area, and sending an audio playing instruction to the selected audio playing equipment so that the selected audio playing equipment plays the stored voice information.
Optionally, an audio playing device is deployed in the target monitoring area;
the sending of the audio playing instruction for the identified specified event to the audio playing device deployed in the target monitoring area includes:
sending an audio playing instruction to audio playing equipment in the target monitoring area, wherein the audio playing instruction carries the voice information of the identified specified event, so that the audio playing equipment plays the carried voice information;
or,
sending an audio playing instruction to audio playing equipment in the target monitoring area; and the audio playing instruction carries the identification of the identified specified event, so that the audio playing equipment acquires the voice information of the specified event and plays the voice information based on the identification of the specified event.
According to a second aspect of the present application, there is provided an event detection system, the system comprising: a camera and an audio playback device;
the camera is used for identifying at least one frame of monitoring image acquired by the camera; if a specified event is identified from the at least one frame of monitoring image, determining a target monitoring area corresponding to the monitoring image, and sending an audio playing instruction aiming at the identified specified event to audio playing equipment deployed in the target monitoring area,
and the audio playing device is used for responding to the audio playing instruction and playing the voice information corresponding to the identified specified event.
According to a third aspect of the present application, there is provided an event detection system, the system comprising: the system comprises a camera, a central server and an audio playing device;
the camera is used for collecting at least one frame of monitoring image;
the central server is used for acquiring an identification result for identifying at least one frame of monitoring image acquired by the camera;
if the identification result indicates that the specified event is identified from the at least one frame of monitoring image, determining a target monitoring area corresponding to the monitoring image, and sending an audio playing instruction aiming at the identified specified event to audio playing equipment deployed in the target monitoring area;
and the audio playing device is used for responding to the audio playing instruction and playing the voice information corresponding to the identified specified event.
According to a fourth aspect of the present application, there is provided an event detection apparatus, the apparatus comprising:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring an identification result for identifying at least one frame of monitoring image acquired by a camera;
and a sending unit, configured to determine a target monitoring area corresponding to the monitoring image if the identification result indicates that a specified event is identified from the at least one monitoring image, and send an audio playing instruction for the identified specified event to an audio playing device deployed in the target monitoring area, so that the audio playing device plays voice information corresponding to the identified specified event.
Optionally, the sending unit is configured to, when determining the target monitoring area corresponding to the monitoring image, search, in a preset correspondence between acquisition information and monitoring areas, for the target monitoring area corresponding to the acquisition information of the camera that acquires the at least one frame of monitoring image; the acquisition information in the corresponding relation is related information of a camera for acquiring a monitoring image; or, carrying out image recognition on the background of the monitoring image to obtain the target monitoring area; or, in the preset corresponding relation between the specified event and the monitored area, searching the monitored area corresponding to the specified event identified from the at least one frame of monitored image as a target monitored area; or determining the image position of the identified specified event in the monitored image, and searching the monitored area corresponding to the determined image position as the target monitored area in the corresponding relation between the preset image position and the monitored area.
Optionally, in a case that the camera is a panoramic camera, the panoramic camera includes a plurality of cameras, monitoring areas corresponding to the cameras are different, and the at least one frame of monitoring image is acquired by any camera in the panoramic camera; the acquisition information is an identifier of a camera for acquiring the at least one frame of monitoring image; or, in the case that the camera is a rotatable camera, different rotation angles of the rotatable camera correspond to different monitoring areas, and the acquisition information is a rotation angle of the camera when the at least one frame of monitoring image is acquired.
Optionally, the apparatus is applied to a central server, and the central server is connected to at least one of the cameras; the central server is connected with the audio playing equipment of each monitoring area; the acquisition information is a camera identifier of a camera which acquires the at least one frame of monitoring image.
Optionally, the target monitoring area is deployed with a plurality of audio playing devices, and each audio playing device stores voice information of a specified event corresponding to the audio playing device; the sending unit is configured to, when sending an audio playing instruction for the identified specified event to the audio playing devices deployed in the target monitoring area, select an audio playing device corresponding to the identified specified event from the multiple audio playing devices deployed in the target monitoring area, and send the audio playing instruction to the selected audio playing device, so that the selected audio playing device plays the stored voice information.
Optionally, an audio playing device is deployed in the target monitoring area; the sending unit is configured to send an audio playing instruction to the audio playing device in the target monitoring area when sending an audio playing instruction for the identified specified event to the audio playing device deployed in the target monitoring area, where the audio playing instruction carries the voice information of the identified specified event, so that the audio playing device plays the carried voice information; or sending an audio playing instruction to audio playing equipment in the target monitoring area; and the audio playing instruction carries the identification of the identified specified event, so that the audio playing equipment acquires the voice information of the specified event and plays the voice information based on the identification of the specified event.
According to a fifth aspect of the present application, there is provided an electronic device comprising a readable storage medium and a processor;
wherein the readable storage medium is configured to store machine executable instructions;
the processor is configured to read the machine executable instructions on the readable storage medium and execute the instructions to implement the event detection method.
According to a sixth aspect of the present application, there is provided a computer-readable storage medium having stored therein a computer program which, when executed by a processor, implements the above-described event detection method.
According to a seventh aspect of the present application, there is provided a computer program, which is stored on a computer-readable storage medium and which, when executed by a processor, causes the processor to carry out the above-mentioned event detection method.
According to the above description, the electronic device and the audio playing devices deployed in each monitoring area are linked, so that when the electronic device detects that a specified event occurs in a certain monitoring area, the electronic device can be linked with the audio playing devices to play the voice information corresponding to the specified event, the voice information can be automatically played by the audio playing devices after the specified event occurs in the monitoring area, the initiating object of the specified event can be automatically warned, and the object warning efficiency can be improved.
Drawings
FIG. 1 is a schematic diagram of an event detection networking architecture, one exemplary embodiment of which is shown;
FIG. 2 is a schematic diagram of another event detection networking architecture, shown in an exemplary embodiment;
FIG. 3 is a flow chart illustrating a method of event detection in accordance with an exemplary embodiment of the present application;
FIG. 4 is a diagram illustrating a hardware configuration of an electronic device according to an exemplary embodiment of the present application;
fig. 5 is a block diagram of an event detection device according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if," as used herein, may be interpreted as "when" or "upon" or "in response to determining," depending on the context.
In the conventional event detection mechanism, when the electronic device identifies a specified event (for example, an abnormal vehicle intruding into a certain area) from a monitoring image collected by a camera, the electronic device notifies an administrator, and the administrator goes to the scene to urge the abnormal vehicle to leave. This approach causes inconvenience to the administrator managing the monitored area.
In view of this, the present application provides an event detection method, where audio playing devices are respectively disposed in each monitoring area, an electronic device may be connected to the audio playing devices in each monitoring area, and when it is determined that a specified event occurs in a monitoring area corresponding to at least one frame of monitoring image acquired by the electronic device through at least one frame of monitoring image acquired by a camera, an audio playing instruction is sent to the audio playing device in the monitoring area, so that the audio playing device plays voice information corresponding to the specified event.
According to the method and the device, the electronic equipment and the audio playing equipment deployed in each monitoring area are in linkage processing, so that when the electronic equipment detects that a specified event occurs in a certain monitoring area, the audio playing equipment can be linked to play the voice information corresponding to the specified event, the voice information can be automatically played after the specified event occurs in the monitoring area, the initiating object of the specified event can be automatically warned, and the efficiency of object warning is improved.
The following first introduces a networking architecture required for implementing the event detection method of the present application.
The event detection method of the present application can be implemented by various networking architectures, and various networking architectures for implementing the present application are respectively described below.
1. Networking architecture 1: camera-audio playing device networking
Referring to fig. 1, fig. 1 is a schematic diagram illustrating an event detection networking architecture according to an exemplary embodiment.
The networking architecture shown in fig. 1 includes a camera and an audio playback device.
In the camera-audio playing device networking shown in fig. 1, the camera needs to have advanced image processing functions such as image recognition, and the camera can monitor at least one monitoring area.
In an alternative implementation, the camera may be a panoramic camera. The panoramic camera includes a plurality of cameras, each of which may monitor at least one monitoring area. The panoramic camera is an intelligent camera that can perform advanced image processing functions such as image recognition; for example, it can perform specified event recognition on an image collected by any of its cameras.
In another alternative, the camera may be a rotatable camera, such as a dome camera. The rotatable camera can monitor a plurality of monitoring areas by rotating; for example, different rotation angles of the rotatable camera may correspond to different monitoring areas. Likewise, the rotatable camera is also an intelligent camera and can perform specified event recognition on the images it collects.
The camera is only exemplified here and is not particularly limited.
In addition, in the present application, at least one audio playing device is also deployed in each of the plurality of monitoring areas. The camera can identify at least one frame of image collected by the camera, and if a specified event is identified from the at least one frame of image, an audio playing instruction is sent to audio playing equipment deployed in a target monitoring area corresponding to the at least one frame of image, so that the audio playing equipment plays voice information corresponding to the specified event.
Briefly, in the camera-audio playing device networking, the camera may implement the event detection logic of the present application; in other words, the camera may serve as the electronic device described above.
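As a concrete illustration of this linkage, the sketch below (Python, with hypothetical recognizer and speaker objects; nothing here is prescribed by the patent text) shows the camera-side control flow: recognize the frame, determine the target monitoring area, and send the audio playing instruction to the audio playing device deployed there.

```python
# Hypothetical sketch of the camera-side flow in networking architecture 1:
# recognize the monitoring image, determine the target monitoring area, and
# instruct the audio playing device deployed there. All names are placeholders.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class RecognitionResult:
    event: Optional[str]   # e.g. "area_intrusion"; None if no specified event found
    area: Optional[str]    # target monitoring area determined for this image

def detect_and_warn(frame,
                    recognize: Callable[[object], RecognitionResult],
                    speakers: dict) -> None:
    """Run one detection cycle on a frame of monitoring image."""
    result = recognize(frame)              # image recognition on the frame
    if result.event is None:               # no specified event recognized
        return
    speaker = speakers.get(result.area)    # audio playing device in the target area
    if speaker is not None:
        speaker.play(result.event)         # send the audio playing instruction
```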
2. Networking architecture 2: camera-center server-audio playing device
Referring to fig. 2, fig. 2 is a schematic diagram of another event detection networking architecture, shown in an exemplary embodiment.
The networking architecture shown in fig. 2 includes a camera, a central server, and an audio playback device.
In the networking architecture shown in fig. 2, a plurality of cameras may be deployed, each of the plurality of cameras monitoring at least one monitored area, at least one audio playback device being deployed in each monitored area monitored by the plurality of cameras. Each camera can send the image collected by the camera to a central server, and the central server can identify the image. If the specified event is identified from the image, the central server can send an audio playing instruction for the specified event to the audio playing device in the monitoring area corresponding to the image, so that the audio playing device plays the voice information for the specified event. Of course, the camera may also recognize the image and send the recognition result to the central server, and if the image recognition result indicates that a specified event occurs in the image, the central server may send an audio playing instruction for the specified event to the audio playing device in the monitoring area corresponding to the image, so that the audio playing device plays the voice information for the specified event.
Briefly, in the camera-central server-audio playing device networking, the central server may implement the event detection logic of the present application; in other words, the central server may serve as the electronic device described above.
In addition, under this networking architecture, since advanced image processing functions such as image recognition can be deployed in the central server, the cameras in each monitoring area can be ordinary cameras (cameras that only acquire images and perform no image processing), which saves deployment cost for the networking architecture.
Of course, each monitoring area may also be deployed with a more expensive, more intelligent panoramic camera, or a rotatable camera (such as a dome camera); the camera type is merely illustrative and is not specifically limited here.
In addition, it should be noted that the audio playing device described herein refers to a device having an audio playing function. For example, the audio playing device may be an IP speaker, i.e., a network device with a loudspeaker, which can play voice information stored on the device itself or voice information sent from other devices. Of course, the audio playing device may also be another type of device; the audio playing device is only exemplarily illustrated here and is not specifically limited.
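Purely as an illustration of how an audio playing instruction might reach such a network speaker, the sketch below pushes a play command over HTTP; the endpoint path and JSON fields are assumptions for illustration and do not describe any particular IP speaker's actual interface.

```python
# Illustrative only: sending an audio playing instruction to a network speaker
# over HTTP. The "/play" endpoint and JSON fields are hypothetical assumptions.
import json
import urllib.request
from typing import Optional

def send_play_instruction(speaker_ip: str, event_id: str,
                          voice_info: Optional[str] = None) -> None:
    payload = {"event_id": event_id}
    if voice_info is not None:          # variant that carries the voice information itself
        payload["voice_info"] = voice_info
    req = urllib.request.Request(
        url=f"http://{speaker_ip}/play",                 # hypothetical endpoint
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req, timeout=2)               # fire the instruction
```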
Referring to fig. 3, fig. 3 is a flowchart illustrating an event detection method according to an exemplary embodiment of the present application, which can be applied to an electronic device. The method shown in fig. 3 can be used in the networking shown in fig. 1 or fig. 2, and when the method shown in fig. 3 is applied in the networking shown in fig. 1, the electronic device is a camera. When the method shown in fig. 3 is applied in the networking shown in fig. 2, the electronic device is a central server.
The event detection method can comprise the following steps:
step 301: the electronic equipment acquires a recognition result for recognizing at least one frame of monitoring image collected by the camera.
1) In an alternative implementation, when the event detection method is implemented by using "camera-audio playing device networking" shown in fig. 1, the electronic device may be a camera. The camera can directly acquire at least one monitoring image acquired by the camera.
For example, when the camera is a panoramic camera, the panoramic camera includes a plurality of cameras, each camera corresponding to at least one monitoring area. The panoramic camera can acquire at least one frame of monitoring image collected by any one of its cameras.
Alternatively, the at least one monitoring image may be at least one panoramic image. The panoramic image is an image synthesized from images captured by at least one camera of the panoramic camera.
Specifically, the camera may acquire an image acquired by at least one camera at each acquisition time, and then synthesize images acquired by different cameras at the same acquisition time into a panoramic image corresponding to the acquisition time, thereby acquiring at least one frame of monitoring image.
For example, assuming that 2 frames of panoramic images are acquired, assuming that a first frame of panoramic image is acquired at 0.1s and a second frame of panoramic image is acquired at 0.2s, the first frame of panoramic image is an image synthesized by images acquired by at least one camera of the panoramic camera at 0.1s, and the second frame of panoramic image is an image synthesized by images acquired by at least one camera of the panoramic camera at 0.2 s.
That is, when the camera is a panoramic camera, the monitored image collected by the camera is an image collected by any camera of the panoramic camera, or the monitored image is a panoramic image synthesized by images collected by at least one camera of the panoramic camera.
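A sketch of this synthesis step, under the assumption that frames can be grouped by acquisition time and stitched with a generic stitcher (OpenCV's Stitcher is used only as a stand-in; the patent does not name a synthesis method):

```python
# Sketch: group the frames from the panoramic camera's individual cameras by
# acquisition time and synthesize one panoramic monitoring image per time.
# OpenCV's generic Stitcher stands in for the unspecified synthesis step.
from collections import defaultdict
import cv2

def synthesize_panoramas(captures):
    """captures: iterable of (acquisition_time, camera_id, image) tuples."""
    by_time = defaultdict(list)
    for t, cam_id, img in captures:
        by_time[t].append((cam_id, img))

    panoramas = {}
    stitcher = cv2.Stitcher_create()
    for t in sorted(by_time):
        # Keep a consistent camera order before stitching.
        images = [img for _, img in sorted(by_time[t], key=lambda x: x[0])]
        status, pano = stitcher.stitch(images)
        if status == cv2.Stitcher_OK:
            panoramas[t] = pano          # panoramic monitoring image for this time
    return panoramas
```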
After the camera collects at least one frame of monitoring image, the camera can perform image recognition on the monitoring image to obtain a recognition result. For example, the recognition result may include: whether a specific event is recognized from the monitored image or not, although the recognition result may include other information, it is not specifically limited herein.
For another example, when the camera is a rotatable camera, the rotatable camera has a plurality of rotation angles, each rotation angle corresponding to at least one monitoring area. The camera may acquire at least one frame of monitoring image collected by the rotatable camera at any one of the rotation angles.
After the camera collects at least one frame of monitoring image, the camera can perform image recognition on the monitoring image to obtain a recognition result.
2) When the event detection method is implemented by using the "camera-center server-audio playing device networking" shown in fig. 2, the electronic device may be a center server.
In an optional implementation manner, any camera may send at least one frame of monitoring image acquired by itself to the central server, and the central server may receive the at least one frame of monitoring image acquired by the camera. Then, the central server can identify at least one frame of image reported by the camera to obtain an image identification result.
Of course, in practical applications, the camera may identify at least one frame of the monitoring image collected by the camera and send the identification result to the central server, and the central server may receive the identification result sent by the camera.
It should be noted that, since the specified event may be a static process or a dynamic process, the at least one monitoring image may be a plurality of continuous monitoring images, and the monitoring image is only exemplarily illustrated and is not specifically limited.
The above-mentioned specified event may be a preset event.
For example, the specified event may be an area intrusion event, i.e. whether an abnormal object appears in the monitored area. Such as the presence of a vehicle or person in the monitored area.
For another example, the specified event may be an abnormal out-of-bounds event, i.e., detecting whether an abnormal object crosses a preset boundary line. For example, a person or a vehicle crosses a preset boundary line.
The specific events are only exemplified and not specifically limited herein.
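To make the two example events concrete, the sketch below shows one plausible way to test them geometrically on detected object positions; the point-in-polygon and line-side tests are illustrative assumptions, not requirements of the method.

```python
# Illustrative geometric checks for the two example specified events:
# an area intrusion (object centre inside a monitored polygon) and an abnormal
# boundary crossing (object centre moves to the other side of a preset line).

def point_in_polygon(pt, polygon):
    """Ray-casting test: is pt = (x, y) inside polygon = [(x1, y1), ...]?"""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def side_of_line(pt, a, b):
    """Sign of the cross product: which side of line a->b the point lies on."""
    return (b[0] - a[0]) * (pt[1] - a[1]) - (b[1] - a[1]) * (pt[0] - a[0])

def is_area_intrusion(obj_center, monitored_polygon):
    return point_in_polygon(obj_center, monitored_polygon)

def is_boundary_crossing(prev_center, cur_center, line_a, line_b):
    return side_of_line(prev_center, line_a, line_b) * side_of_line(cur_center, line_a, line_b) < 0
```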
Step 302: if the recognition result indicates that a specified event is recognized from the at least one frame of monitoring image, the electronic device determines a target monitoring area corresponding to the monitoring image and sends an audio playing instruction for the specified event to audio playing devices deployed in the target monitoring area, so that the audio playing devices play the voice information corresponding to the specified event.
Step 302 is described in detail below through step 3021 to step 3022.
Step 3021: and if the electronic equipment identifies the specified event from the at least one frame of monitoring image, determining a target monitoring area corresponding to the monitoring image.
The following description will be made of various implementations of "determining a target monitored area corresponding to the monitored image".
Implementation mode one: the electronic device determines the image position of the identified specified event in the monitoring image, and searches, in the preset correspondence between image positions and monitoring areas, for the monitoring area corresponding to the determined image position as the target monitoring area.
In an alternative implementation, the monitoring image may be a panoramic image synthesized by images captured by at least one camera of the panoramic camera.
Specifically, at least one monitoring area is configured in advance in a panoramic image captured by a panoramic camera, a specified event to be detected is configured in each monitoring area, and the occurrence position of the specified event is determined. Based on this, the electronic device is configured in advance with a correspondence relationship of an image position of the specified event in the panoramic image and the monitored area.
In step 3021, the electronic device may identify an image position of the specified event in the panoramic image based on an image identification technology, and search, from a preset correspondence between the image position and the monitored area, a monitored area corresponding to the image position in the panoramic image where the identified specified event is located as a target monitored area.
For example, assume that the correspondence between image positions and monitoring areas configured on the electronic device is as shown in Table 1:
Image position of the specified event in the panoramic image | Monitoring area identifier
Position of gate A | Monitoring area 1
Position of gate B | Monitoring area 2
Table 1
When the electronic equipment determines that the specified event occurs at the gate position A, the target monitoring area is determined to be the monitoring area 1 according to the corresponding relation of the table 1.
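A sketch of this lookup, assuming (purely for illustration) that each named position such as gate A is configured as a rectangle in panoramic-image coordinates and mapped to a monitoring area identifier:

```python
# Sketch of implementation mode one: map the image position where the specified
# event was recognized to a monitoring area. The rectangles and identifiers are
# illustrative stand-ins for positions such as "gate A" and "gate B".

# (x_min, y_min, x_max, y_max) in panoramic-image coordinates -> monitoring area
POSITION_TO_AREA = {
    (0, 0, 800, 600): "monitoring area 1",      # e.g. position of gate A
    (800, 0, 1600, 600): "monitoring area 2",   # e.g. position of gate B
}

def find_target_area(event_xy):
    x, y = event_xy
    for (x_min, y_min, x_max, y_max), area in POSITION_TO_AREA.items():
        if x_min <= x < x_max and y_min <= y < y_max:
            return area          # target monitoring area
    return None                  # position not covered by any configured area

print(find_target_area((300, 200)))   # -> "monitoring area 1"
```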
In another alternative implementation, the monitoring image may be an image captured by any camera in the panoramic camera.
In particular, the field of view of each camera may correspond to at least one monitored area. For example, the camera is a wide-angle camera, which has a large visual field range and can correspond to a plurality of monitoring areas.
In order to determine the occurrence area of the specified event after the specified event is identified, the electronic device is pre-configured with the corresponding relation between the image position of the specified event in the image acquired by any camera and the monitoring area.
When step 3021 is implemented, the electronic device may identify, based on an image identification technology, an image position of the specified event in the image acquired by the arbitrary camera, and search, from a preset correspondence between the image position and the monitored area, a monitored area corresponding to the image position in the image acquired by the arbitrary camera where the identified specified event is located as a target monitored area.
For example, assume that the correspondence relationship between image positions configured on the electronic device and the monitored area is shown in table 2:
Table 2 (correspondence between the image position of the specified event in the image acquired by the camera and the monitoring area identifier; for example, the position of gate A corresponds to monitoring area 1)
When the electronic device determines that the specified event occurs at the gate position a, the target monitoring area can be determined to be the monitoring area 1 according to the corresponding relation of the table 2.
Of course, in practical applications, when the camera is not a panoramic camera, such as a normal camera, a rotatable camera, or the like, the determination of the target monitoring area may still be performed in this manner, and is not specifically limited herein.
It should be noted that, whether the electronic device is a camera or a central server, step 3021 may be implemented by using the first implementation manner. And is not particularly limited herein.
Implementation mode two: the electronic device searches, in the preset correspondence between acquisition information and monitoring areas, for the target monitoring area corresponding to the acquisition information of the camera that acquires the at least one frame of monitoring image.
The acquisition information is related information of a camera for acquiring a monitoring image.
1) In an alternative implementation, when the electronic device is a camera, that is, the method is applied in the "camera-audio playing device" networking shown in fig. 1, it is assumed that the camera is a panoramic camera. Because the panoramic camera has a plurality of cameras, each camera corresponds to at least one monitoring area, and each camera can collect monitoring images in its corresponding monitoring area. The acquisition information may be the identifier of the camera that collects the monitoring image.
Specifically, the panoramic camera is configured with a corresponding relationship between a camera identifier and a monitoring area identifier, and the panoramic camera can acquire at least one frame of image acquired by any camera. If the specified event is identified from the at least one frame of image, the panoramic camera can search the monitoring area corresponding to the camera acquiring the at least one frame of image in the corresponding relation to be used as the monitoring area corresponding to the at least one frame of monitoring image.
For example, assume that the panoramic camera includes 3 cameras, and the correspondence between the camera identifier on the panoramic camera and the monitored area identifier is shown in table 3.
Table 3 (correspondence between the identifiers of the 3 cameras of the panoramic camera and monitoring area identifiers; for example, camera 1 corresponds to monitoring area 1)
Assuming that the at least one frame of monitoring image is collected by the camera 1, the panoramic camera may search the monitoring area 1 corresponding to the camera 1 in the corresponding relationship shown in table 3, and use the monitoring area 1 as the monitoring area corresponding to the at least one frame of monitoring image.
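This camera-identifier lookup is essentially a dictionary lookup; the sketch below mirrors the correspondence around Table 3 with illustrative identifiers.

```python
# Sketch of the camera-identifier lookup (acquisition information = identifier
# of the camera that collected the monitoring image). Identifiers are illustrative.
CAMERA_TO_AREA = {
    "camera 1": "monitoring area 1",
    "camera 2": "monitoring area 2",
    "camera 3": "monitoring area 3",
}

def area_for_camera(camera_id):
    # Return the monitoring area bound to this camera, or None if unconfigured.
    return CAMERA_TO_AREA.get(camera_id)
```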
2) In another alternative implementation, the method may also be applied to a "camera-center server-audio playing device" networking shown in fig. 2, where the electronic device is a center server, and the cameras in the networking include panoramic cameras.
In this case, the acquisition information may be the identifier of the camera in the panoramic camera that acquires the at least one frame of monitoring image. The panoramic camera may report to the central server the identifier of the camera that acquired the at least one frame of monitoring image, and the central server determines, in the preset correspondence between camera identifiers and monitoring areas, the monitoring area corresponding to that camera identifier.
3) In another alternative implementation, when the electronic device is a camera, i.e. the method is applied in the "camera-audio playing device" networking shown in fig. 1, it is assumed that the camera is a rotatable camera. Since different rotation angles of the rotatable camera correspond to different monitoring areas, the acquisition information may be the rotation angle of the camera when the rotatable camera collects the monitoring image.
Specifically, the rotatable camera is provided with a correspondence between rotation angles and monitoring areas. When the rotatable camera identifies the specified event from the at least one frame of monitoring image, it can search, in this correspondence, for the monitoring area corresponding to the rotation angle at which the at least one frame of monitoring image was collected, and use it as the monitoring area corresponding to the at least one frame of monitoring image.
For example, assuming that the total rotation angle range of the rotatable camera is -50 degrees to +50 degrees, the correspondence between the rotation angle of the rotatable camera and the monitoring area is as shown in Table 4.
Table 4 (correspondence between rotation-angle ranges of the rotatable camera and monitoring area identifiers; for example, the range -50° to 0° corresponds to monitoring area 1)
Assuming that the at least one frame of monitoring image is collected by the rotatable camera at a rotation angle between -50° and 0°, the rotatable camera may search, in the correspondence shown in Table 4, for monitoring area 1 corresponding to that rotation angle, and use monitoring area 1 as the monitoring area corresponding to the at least one frame of monitoring image.
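The rotation-angle lookup can be sketched as a range search; the second angle range below is an assumption, since the text only states the total range of -50 to +50 degrees and that -50 to 0 degrees corresponds to monitoring area 1.

```python
# Sketch of the rotation-angle lookup (acquisition information = rotation angle
# of the rotatable camera when the monitoring image was collected).
ANGLE_RANGES = [
    ((-50.0, 0.0), "monitoring area 1"),
    ((0.0, 50.0), "monitoring area 2"),   # assumed second range, for illustration
]

def area_for_angle(angle_deg):
    for (low, high), area in ANGLE_RANGES:
        if low <= angle_deg <= high:
            return area
    return None
```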
4) In another alternative implementation, the method may also be applied to a "camera-center server-audio playing device" network shown in fig. 2, where the electronic device is a center server, and the cameras in the network include rotatable cameras.
In this case, the acquisition information may be the rotation angle of the camera when the at least one frame of monitoring image is acquired. The rotatable camera can report this rotation angle to the central server, and the central server determines, in the preset correspondence between camera rotation angles and monitoring areas, the monitoring area corresponding to the rotation angle at which the at least one frame of monitoring image was acquired.
5) In another alternative implementation manner, it is assumed that the event detection method provided by the present application is applied to the "camera-central server-audio playing device" networking shown in fig. 2, where the electronic device is the central server and the cameras in the networking are ordinary cameras. Each camera corresponds to a monitoring area, so the acquisition information may be the identifier of the camera that collects the monitoring image.
Specifically, the central server stores a correspondence between cameras and monitoring areas. When the central server identifies a specified event from the at least one frame of monitoring image, it may search, in this correspondence, for the monitoring area corresponding to the camera that acquired the at least one frame of monitoring image, and use it as the monitoring area corresponding to the at least one frame of monitoring image.
For example, the central server is connected with 3 cameras, and the correspondence between cameras and monitoring areas deployed on the central server is shown in Table 5.
Camera identifier | Monitoring area identifier
Camera 1 | Monitoring area 1
Camera 2 | Monitoring area 2
Camera 3 | Monitoring area 3
Table 5
It is assumed that the at least one monitoring image is collected by the camera 1, and the central server may search the monitoring area 1 corresponding to the camera 1 in the corresponding relationship shown in table 5, and use the monitoring area 1 as the monitoring area corresponding to the at least one monitoring image.
Here, the "determining the target monitoring area corresponding to the monitoring image" is only exemplarily described, and is not particularly limited.
Implementation mode three: the electronic device performs image recognition on the background of the monitoring image to obtain the target monitoring area.
For example, the electronic device may use image processing and recognition techniques to extract the background from the monitoring image, recognize the background, and thereby determine the monitoring area corresponding to the monitoring image.
For example, it is assumed that the event detection method is applied in the networking of "camera-center server-audio playing device" shown in fig. 2, the electronic device is a center server, and the camera connected to the center server is assumed to be a panoramic camera.
The panoramic camera reports the panoramic image to the central server. If the specified event is identified in at least one frame of the panoramic image, the central server can segment the panoramic image and recognize the background of the segmented sub-image containing the information related to the specified event, to obtain the monitoring area.
Of course, when the center server is connected to the general camera, the center server may also identify the background of the monitoring image of the general camera to obtain the monitoring area. Or, when the event detection method is applied to the networking of the camera-audio playing device shown in fig. 1, and the electronic device is a camera, the camera may also identify the monitoring image acquired by the camera to obtain the monitoring area. This is merely an example and is not particularly limited.
For another example, it is assumed that the event detection method is applied in the "camera-audio playing device" networking shown in fig. 1, and the electronic device is a camera. The camera can use image processing and recognition techniques to extract the background from the monitoring image it collects, recognize the background, and determine the monitoring area corresponding to the monitoring image.
The following describes a manner of "the electronic device identifies the background of the monitored image to obtain the target monitored area":
In an alternative implementation, the electronic device may extract background features from the monitoring image. A database of monitoring area features is prepared in advance, and the electronic device can match the extracted background features against the features of each monitoring area in the database. The electronic device may determine the monitoring area corresponding to the matched monitoring area features as the target monitoring area.
In another optional implementation manner, the electronic device may input the monitoring image into a trained neural network, and the neural network may perform background feature extraction and classification on the monitoring image, so as to output the candidate monitoring areas corresponding to the monitoring image and their probabilities. The electronic device may select the monitoring area with the highest probability as the target monitoring area.
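A sketch of the selection step for the classification variant, assuming a trained model already produced one probability per candidate monitoring area (the model itself is not shown):

```python
# Sketch: pick the monitoring area with the highest probability from the output
# of a (pre-trained, not shown) background-classification network.
import numpy as np

AREA_LABELS = ["monitoring area 1", "monitoring area 2", "monitoring area 3"]

def pick_target_area(area_probabilities: np.ndarray) -> str:
    # area_probabilities: one probability per candidate monitoring area.
    return AREA_LABELS[int(np.argmax(area_probabilities))]

print(pick_target_area(np.array([0.1, 0.7, 0.2])))   # -> "monitoring area 2"
```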
Of course, in practical applications, other manners may also be adopted to implement "recognizing the background of the monitored image to obtain the target monitored area", which is only exemplary and not specifically limited herein.
Implementation mode four: the electronic device searches, in the preset correspondence between specified events and monitoring areas, for the monitoring area corresponding to the specified event identified from the at least one frame of monitoring image as the target monitoring area.
In practice, different types of specific events are usually bound to different types of monitoring areas, that is, one monitoring area represents one specific event. For example, the monitored area 1 corresponds to an area intrusion event, and the monitored area 2 corresponds to an abnormal boundary crossing event.
With this configuration, the monitoring area can be determined from the specified event.
Specifically, the electronic device is deployed with a corresponding relationship between a specified event and a monitoring area, and when the electronic device identifies the specified event from at least one frame of monitoring image, the electronic device may search the monitoring area corresponding to the identified specified event in the corresponding relationship between the specified event and the monitoring area, and use the monitoring area as the monitoring area corresponding to the at least one frame of monitoring image.
For example, assume that the correspondence between specified events and monitoring areas deployed on the electronic device is as shown in Table 6.
Specified event | Monitoring area identifier
Area intrusion event | Monitoring area 1
Abnormal boundary-crossing event | Monitoring area 2
Table 6
Assuming that the designated event identified from the at least one frame of monitoring image is an area intrusion event, the electronic device may search the monitoring area 1 corresponding to the area intrusion event in the corresponding relationship shown in table 6, and use the monitoring area 1 as the monitoring area corresponding to the at least one frame of monitoring image.
Whether the electronic device is a camera or a central server, step 3021 may be implemented in implementation mode four, provided the deployment is such that one monitoring area corresponds to one specified event. This is merely illustrative and is not specifically limited here.
Step 3022: and the electronic equipment sends an audio playing instruction aiming at the identified specified event to audio playing equipment deployed in the target monitoring area so as to enable the audio playing equipment to play the voice information corresponding to the identified specified event.
Implementation mode one: each monitoring area is provided with a plurality of audio playing devices, and each audio playing device stores the voice information of the specified event corresponding to that audio playing device.
In implementation, the electronic device may select, from the multiple audio playback devices deployed in the target monitoring area, an audio playback device corresponding to the specified event (e.g., an audio playback device storing voice information of the specified event). Then, the electronic device can send an audio playing instruction to the audio playing device, so that the audio playing device plays the stored voice information corresponding to the specified event.
For example, suppose that the monitoring area 1 corresponds to 2 audio playing devices, namely the audio playing device 1 and the audio playing device 2.
The audio playing device 1 stores the voice information of the area intrusion event, and the audio playing device 2 stores the voice information of the abnormal boundary-crossing event.
Assume that the specified event identified by the electronic device from the at least one frame of monitoring image is an area intrusion event and that the determined target monitoring area is monitoring area 1. The electronic device may select the audio playing device 1 corresponding to the area intrusion event from the 2 audio playing devices corresponding to monitoring area 1. Then, the electronic device may send an audio playing instruction to the audio playing device 1, so that the audio playing device 1 plays the stored voice information corresponding to the area intrusion event.
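A sketch of this selection logic, with illustrative event identifiers and device names; the mapping structure is an assumption about how the per-event binding could be stored.

```python
# Sketch of step 3022, implementation mode one: per monitoring area, each audio
# playing device is bound to one specified event; pick the device bound to the
# recognized event. The mapping layout and names are illustrative assumptions.
AREA_DEVICES = {
    "monitoring area 1": {
        "area_intrusion": "audio playing device 1",
        "boundary_crossing": "audio playing device 2",
    },
}

def choose_device(target_area: str, event: str):
    # Select the audio playing device that stores the voice information
    # of this specified event; None if no device is bound to it.
    return AREA_DEVICES.get(target_area, {}).get(event)

assert choose_device("monitoring area 1", "area_intrusion") == "audio playing device 1"
```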
The implementation mode two is as follows: the target monitoring area is provided with 1 audio playing device. The audio playback device can play back voice information for at least one specified event.
In this way, the deployment cost of the audio playback device can be reduced.
In an alternative implementation, the audio playback device is pre-associated with at least one specified event.
In an optional implementation manner, the audio playing device stores the voice information of each specified event, and the electronic device may send an audio playing instruction to the audio playing device deployed in the target monitoring area, where the audio playing instruction carries an identifier of the specified event identified by the electronic device. The audio playing device can acquire the voice information corresponding to the specified event and play the voice information based on the identifier of the specified event.
For example, it is assumed that the audio playback device in the target monitoring area stores voice information of an area intrusion event and voice information of an abnormal boundary crossing event.
The electronic device can send an audio playing instruction to the audio playing device, and the audio playing instruction carries the event identifier of the area intrusion event. After receiving the audio playing instruction, the audio playing device can search the locally stored voice information for the voice information corresponding to the area intrusion event and play it, for example, play "please leave this area as soon as possible".
In another alternative implementation, the audio playing device in the target monitoring area does not store any voice information. The electronic device can send an audio playing instruction to the audio playing device, wherein the audio playing instruction carries the identified voice information of the specified event. And after receiving the audio playing instruction, the audio playing equipment plays the voice information carried by the audio playing instruction.
For example, the specified event identified by the electronic device is an area intrusion event, and the voice information corresponding to the area intrusion event is "please leave this area as soon as possible".
The electronic device can send an audio playing instruction to the audio playing device, and the audio playing instruction carries the voice information "please leave this area as soon as possible". After receiving the audio playing instruction, the audio playing device plays the voice information "please leave this area as soon as possible" carried by the audio playing instruction.
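A sketch of how the audio playing device might handle these two instruction variants; the instruction fields and the playback call are illustrative assumptions.

```python
# Sketch of the device-side handling of the two instruction variants: either
# the instruction carries the specified-event identifier (look up locally
# stored voice information) or it carries the voice information itself.
# The playback call is a placeholder for the device's audio output.
LOCAL_VOICE_INFO = {
    "area_intrusion": "Please leave this area as soon as possible.",
}

def handle_play_instruction(instruction: dict, play=print):
    if "voice_info" in instruction:                  # variant: voice information carried directly
        play(instruction["voice_info"])
    elif "event_id" in instruction:                  # variant: look up by event identifier
        text = LOCAL_VOICE_INFO.get(instruction["event_id"])
        if text is not None:
            play(text)

handle_play_instruction({"event_id": "area_intrusion"})
handle_play_instruction({"voice_info": "Please do not cross the boundary line."})
```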
According to the above description, the electronic equipment and the audio playing equipment deployed in each monitoring area are linked, so that when the electronic equipment detects that a specified event occurs in a certain monitoring area, the audio playing equipment can be linked to play the voice information corresponding to the specified event, the voice information can be automatically played by the audio playing equipment after the specified event occurs in the monitoring area, the initiating object of the specified event can be automatically warned, and the object warning efficiency is improved.
Referring to fig. 4, fig. 4 is a hardware structure diagram of an electronic device according to an exemplary embodiment of the present application.
The electronic device includes: a readable storage medium and a processor;
wherein the readable storage medium is configured to store machine executable instructions;
the processor is configured to read the machine executable instructions on the readable storage medium and execute the instructions to implement an event detection method.
Optionally, as shown in fig. 4, the electronic device may further include a communication interface 401 and a bus 404, in addition to the processor 402 and the machine-readable storage medium 403; the communication interface 401, the processor 402 and the machine-readable storage medium 403 communicate with each other via the bus 404. The processor 402 may perform the event detection method described above by reading and executing the machine-executable instructions in the machine-readable storage medium 403 corresponding to the event detection control logic.
Referring to fig. 5, fig. 5 is a block diagram of an event detection apparatus according to an exemplary embodiment of the present application. The device is applied to the electronic equipment and can comprise the following units:
an obtaining unit 501, configured to obtain a recognition result for recognizing at least one frame of monitoring image acquired by a camera;
a sending unit 502, configured to determine a target monitoring area corresponding to the monitoring image if the identification result indicates that a specified event is identified from the at least one frame of monitoring image, and send an audio playing instruction for the identified specified event to an audio playing device deployed in the target monitoring area, so that the audio playing device plays voice information corresponding to the identified specified event.
Optionally, the sending unit 502 is configured to, when determining the target monitoring area corresponding to the monitoring image, search for the target monitoring area corresponding to the acquisition information used by the camera when acquiring the at least one frame of monitoring image, in a preset correspondence between acquisition information and monitoring areas, where the acquisition information in the correspondence is related information of the camera that acquires a monitoring image; or, perform image recognition on the background of the monitoring image to obtain the target monitoring area; or, search, in a preset correspondence between specified events and monitoring areas, the monitoring area corresponding to the specified event identified from the at least one frame of monitoring image as the target monitoring area; or, determine the image position of the identified specified event in the monitoring image and search, in a preset correspondence between image positions and monitoring areas, the monitoring area corresponding to the determined image position as the target monitoring area.
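The four alternative ways of determining the target monitoring area listed above largely reduce to lookups in preset correspondences, plus one image-recognition step. A minimal Python sketch, with purely hypothetical tables, area names, and function names, might look as follows.

```python
# Hypothetical preset correspondences (all keys and values are illustrative assumptions).
AREA_BY_ACQUISITION_INFO = {"lens_2": "zone_B"}        # acquisition info -> monitoring area
AREA_BY_EVENT_TYPE = {"area_intrusion": "zone_B"}      # specified event  -> monitoring area
AREA_BY_IMAGE_REGION = {(0, 0, 960, 1080): "zone_B"}   # image region     -> monitoring area

def area_from_acquisition_info(acquisition_info):
    # Way 1: look up the acquisition information of the camera.
    return AREA_BY_ACQUISITION_INFO.get(acquisition_info)

def area_from_background(image):
    # Way 2: placeholder for recognizing the background of the monitoring image,
    # e.g. by matching it against reference pictures of each monitoring area.
    raise NotImplementedError

def area_from_event_type(event_type):
    # Way 3: look up the type of the identified specified event.
    return AREA_BY_EVENT_TYPE.get(event_type)

def area_from_image_position(x, y):
    # Way 4: look up the image position at which the specified event was identified.
    for (x1, y1, x2, y2), area in AREA_BY_IMAGE_REGION.items():
        if x1 <= x < x2 and y1 <= y < y2:
            return area
    return None

print(area_from_event_type("area_intrusion"))   # zone_B
print(area_from_image_position(500, 400))       # zone_B
```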
Optionally, in a case that the camera is a panoramic camera, the panoramic camera includes a plurality of cameras, monitoring areas corresponding to the cameras are different, and the at least one frame of monitoring image is acquired by any camera in the panoramic camera; the acquisition information is an identifier of the camera that acquires the at least one frame of monitoring image; or,
in a case that the camera is a rotatable camera, different rotation angles of the rotatable camera correspond to different monitoring areas, and the acquisition information is the rotation angle of the camera when the at least one frame of monitoring image is acquired.
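As a rough illustration of how the acquisition information of the two camera types could be mapped to a monitoring area, consider the following sketch; the lens identifiers, angle ranges, and area names are assumptions, not values defined by this application.

```python
def acquisition_info_to_area(camera_type, lens_id=None, rotation_angle=None):
    """Map acquisition information to a monitoring area for the two camera types above.
    All identifiers, angle ranges, and area names are illustrative assumptions."""
    if camera_type == "panoramic":
        # Each lens (camera) of the panoramic camera covers a fixed monitoring area.
        lens_to_area = {"lens_1": "zone_A", "lens_2": "zone_B", "lens_3": "zone_C"}
        return lens_to_area.get(lens_id)
    if camera_type == "rotatable":
        # Different rotation angles of the rotatable camera correspond to different areas.
        angle_to_area = {(0, 120): "zone_A", (120, 240): "zone_B", (240, 360): "zone_C"}
        for (low, high), area in angle_to_area.items():
            if low <= rotation_angle % 360 < high:
                return area
    return None

print(acquisition_info_to_area("panoramic", lens_id="lens_2"))     # zone_B
print(acquisition_info_to_area("rotatable", rotation_angle=300))   # zone_C
```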
Optionally, the apparatus is applied to a central server, and the central server is connected to at least one of the cameras; the central server is connected with the audio playing equipment of each monitoring area; the acquisition information is a camera identifier of a camera which acquires the at least one frame of monitoring image.
Optionally, the target monitoring area is deployed with a plurality of audio playing devices, and each audio playing device stores voice information of a specified event corresponding to the audio playing device;
the sending unit 502 is configured to, when sending an audio playing instruction for the identified specified event to the audio playing devices deployed in the target monitoring area, select an audio playing device corresponding to the identified specified event from the multiple audio playing devices deployed in the target monitoring area, and send the audio playing instruction to the selected audio playing device, so that the selected audio playing device plays the stored voice information.
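A simple way to realize the selection described above is a per-area registry that records which audio playing device stores the voice information of which specified event. The sketch below is hypothetical; the device identifiers, area names, and the transport stub are assumptions.

```python
# Hypothetical registry: per monitoring area, which audio playing device stores the
# voice information of which specified event.
DEVICES_IN_AREA = {
    "zone_B": {"area_intrusion": "speaker_21", "boundary_crossing": "speaker_22"},
}

def select_audio_device(target_area, event_type):
    """Pick the audio playing device in the target area that stores the voice
    information of the identified specified event, if any."""
    return DEVICES_IN_AREA.get(target_area, {}).get(event_type)

def send_play_instruction(device_id):
    # Stand-in for the network call that delivers the audio playing instruction.
    print(f"send audio playing instruction to {device_id}")

device = select_audio_device("zone_B", "area_intrusion")
if device:
    send_play_instruction(device)  # the selected device then plays its stored voice information
```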
Optionally, an audio playing device is deployed in the target monitoring area;
the sending unit 502 is configured to send an audio playing instruction to the audio playing device in the target monitoring area when sending an audio playing instruction for the identified specified event to the audio playing device deployed in the target monitoring area, where the audio playing instruction carries the voice information of the identified specified event, so that the audio playing device plays the carried voice information; or sending an audio playing instruction to the audio playing equipment of the target monitoring area; and the audio playing instruction carries the identification of the identified specified event, so that the audio playing equipment acquires the voice information of the specified event and plays the voice information based on the identification of the specified event.
In addition, the present application also provides an event detection system, the system comprising: a camera and an audio playback device;
the camera is used for identifying at least one frame of monitoring image acquired by the camera; if a specified event is identified from the at least one frame of monitoring image, determining a target monitoring area corresponding to the monitoring image, and sending an audio playing instruction aiming at the specified event to audio playing equipment deployed in the target monitoring area,
and the audio playing device is used for responding to the audio playing instruction and playing the voice information corresponding to the specified event.
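For orientation, the camera-side flow of this system can be sketched as follows; all callables are hypothetical stand-ins for the recognition, area determination, and instruction sending described above.

```python
def handle_frame(frame, recognize, determine_target_area, send_instruction):
    """Camera-side sketch for the camera + audio playing device system above: the camera
    recognizes the frame itself and, if a specified event is found, instructs the audio
    playing device of the target monitoring area."""
    event = recognize(frame)                   # e.g. "area_intrusion" or None
    if event is None:
        return
    area = determine_target_area(frame, event)
    send_instruction(area, event)              # the audio playing device in `area` plays the warning

# Example with trivial stand-ins:
handle_frame(
    frame=object(),
    recognize=lambda f: "area_intrusion",
    determine_target_area=lambda f, e: "zone_B",
    send_instruction=lambda area, e: print(f"audio playing instruction -> device in {area}: {e}"),
)
```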
Optionally, when the target monitoring area corresponding to the monitoring image is determined, the camera is configured to search the target monitoring area corresponding to the acquisition information, which is used by the camera to acquire the at least one frame of monitoring image, in a preset corresponding relationship between the acquisition information and the monitoring area; the acquisition information in the corresponding relation is related information of a camera for acquiring a monitoring image;
or,
carrying out image recognition on the background of the monitoring image to obtain the target monitoring area;
or,
searching a monitoring area corresponding to the specified event identified from the at least one frame of monitoring image in a corresponding relation between the preset specified event and the monitoring area, and taking the monitoring area as a target monitoring area;
or, determining the image position of the identified specified event in the monitoring image, and searching the monitoring area corresponding to the determined image position as the target monitoring area in the preset corresponding relation between the image position and the monitoring area.
Optionally, in a case that the camera is a panoramic camera, the panoramic camera includes a plurality of cameras, monitoring areas corresponding to the cameras are different, and the at least one frame of monitoring image is acquired by any camera in the panoramic camera; the acquisition information is an identifier of the camera that acquires the at least one frame of monitoring image; or,
in a case that the camera is a rotatable camera, different rotation angles of the rotatable camera correspond to different monitoring areas, and the acquisition information is the rotation angle of the camera when the at least one frame of monitoring image is acquired.
Optionally, the target monitoring area is deployed with a plurality of audio playing devices, and each audio playing device stores voice information of a specified event corresponding to the audio playing device;
the camera is used for selecting the audio playing device corresponding to the identified specified event from the plurality of audio playing devices deployed in the target monitoring area and sending an audio playing instruction to the selected audio playing device when the camera sends the audio playing instruction to the audio playing devices deployed in the target monitoring area;
and the audio playing device is used for responding to the audio playing instruction and playing the voice information of the specified event stored in the audio playing device.
Optionally, an audio playing device is deployed in the target monitoring area;
the camera is used for sending an audio playing instruction to the audio playing equipment in the target monitoring area when sending the audio playing instruction to the audio playing equipment deployed in the target monitoring area, wherein the audio playing instruction carries the identified voice information of the specified event;
the audio playing device is used for playing the voice information of the identified specified event carried by the audio playing instruction;
or,
the camera sends an audio playing instruction to the audio playing equipment in the target monitoring area; the audio playing instruction carries an identifier of the identified specified event;
and the audio playing device is used for acquiring and playing the voice information of the specified event based on the identifier of the specified event.
The present application further provides another event detection system, the system comprising: the system comprises a camera, a central server and an audio playing device;
the camera is used for collecting at least one frame of monitoring image;
the central server is used for acquiring an identification result for identifying at least one frame of monitoring image acquired by the camera;
if the identification result indicates that the specified event is identified from the at least one frame of monitoring image, determining a target monitoring area corresponding to the monitoring image, and sending an audio playing instruction aiming at the identified specified event to audio playing equipment deployed in the target monitoring area;
and the audio playing device is used for responding to the audio playing instruction and playing the voice information corresponding to the identified specified event.
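The central-server embodiment differs mainly in where the recognition result is consumed: the server receives results from several cameras and uses the camera identifier as the acquisition information. A hypothetical sketch, with assumed camera identifiers, area names, and transport:

```python
# Sketch of the central-server embodiment. Camera identifiers, area names, and the
# instruction transport are illustrative assumptions, not values defined by this application.
AREA_BY_CAMERA_ID = {"cam_01": "zone_A", "cam_02": "zone_B"}

def on_recognition_result(camera_id, event_type, send_instruction):
    """Called whenever a camera reports a recognition result to the central server."""
    if event_type is None:
        return  # no specified event recognized in this frame
    target_area = AREA_BY_CAMERA_ID.get(camera_id)  # camera identifier as acquisition info
    if target_area:
        send_instruction(target_area, event_type)

on_recognition_result(
    "cam_02", "area_intrusion",
    lambda area, event: print(f"audio playing instruction -> device in {area}: {event}"),
)
```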
Optionally, when determining the target monitoring area corresponding to the monitoring image, the central server is configured to search, in a preset correspondence between the acquisition information and the monitoring area, the target monitoring area corresponding to the acquisition information that is used by the camera to acquire the at least one frame of monitoring image; the acquisition information in the corresponding relation is related information of a camera for acquiring a monitoring image;
or,
carrying out image recognition on the background of the monitoring image to obtain the target monitoring area;
or,
searching a monitoring area corresponding to the specified event identified from the at least one frame of monitoring image in a corresponding relation between the preset specified event and the monitoring area, and taking the monitoring area as a target monitoring area;
or,
determining the image position of the identified specified event in the monitoring image, and searching the monitoring area corresponding to the determined image position as a target monitoring area in the corresponding relation between the preset image position and the monitoring area.
Optionally, in a case that the camera is a panoramic camera, the panoramic camera includes a plurality of cameras, monitoring areas corresponding to the cameras are different, and the at least one frame of monitoring image is acquired by any camera in the panoramic camera; the acquisition information is an identifier of the camera that acquires the at least one frame of monitoring image; or,
in a case that the camera is a rotatable camera, different rotation angles of the rotatable camera correspond to different monitoring areas, and the acquisition information is the rotation angle of the camera when the at least one frame of monitoring image is acquired.
Optionally, the acquisition information is a camera identifier of a camera that acquires the at least one frame of monitoring image.
Optionally, the target monitoring area is deployed with a plurality of audio playing devices, and each audio playing device stores voice information of a specified event corresponding to the audio playing device;
the central server is used for selecting the audio playing device corresponding to the identified specified event from the plurality of audio playing devices deployed in the target monitoring area and sending an audio playing instruction to the selected audio playing device when sending the audio playing instruction aiming at the identified specified event to the audio playing devices deployed in the target monitoring area;
and the selected audio playing device is used for playing the stored voice information.
Optionally, an audio playing device is deployed in the target monitoring area;
the central server is used for sending an audio playing instruction to the audio playing equipment in the target monitoring area when sending the audio playing instruction aiming at the identified specified event to the audio playing equipment deployed in the target monitoring area, wherein the audio playing instruction carries the voice information of the identified specified event;
the audio playing device is used for playing the carried voice information;
or, the central server is configured to send an audio playing instruction to the audio playing device in the target monitoring area when sending the audio playing instruction for the identified specified event to the audio playing device deployed in the target monitoring area; the audio playing instruction carries an identifier of the identified specified event;
and the audio playing device is used for acquiring and playing the voice information of the specified event based on the identifier of the specified event.
In addition, the present application also provides a computer readable storage medium, in which a computer program is stored, and the computer program is executed by a processor to implement the above event detection method.
For details of the implementation of the functions and actions of each unit in the above apparatus, reference may be made to the implementation of the corresponding steps in the above method, which is not repeated here.
A computer-readable storage medium as referred to herein may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions and data. For example, the computer-readable storage medium may be volatile memory, non-volatile memory, or a similar storage medium. In particular, the computer-readable storage medium may be a RAM (Random Access Memory), a flash memory, a storage drive (e.g., a hard disk drive), a solid state disk, any type of storage disk (e.g., a compact disc or a DVD), a similar storage medium, or a combination thereof.
In addition, the present application also provides a computer program, which is stored in a computer-readable storage medium and, when executed by a processor, causes the processor to implement the above event detection method.
For the apparatus embodiments, since they substantially correspond to the method embodiments, reference may be made to the description of the method embodiments for relevant points. The apparatus embodiments described above are merely illustrative; the units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units, i.e., they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the present application. A person of ordinary skill in the art can understand and implement the solution without inventive effort.
The above description is only a preferred embodiment of the present application and should not be taken as limiting the present application, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (11)

1. A method of event detection, the method comprising:
acquiring a recognition result for recognizing at least one frame of monitoring image collected by a camera;
if the recognition result indicates that a specified event is recognized from the at least one frame of monitoring image, determining a target monitoring area corresponding to the monitoring image, and sending an audio playing instruction for the recognized specified event to audio playing equipment deployed in the target monitoring area, so that the audio playing equipment plays voice information corresponding to the recognized specified event, wherein the voice information is used for warning an initiating object of the specified event;
under the condition that the camera is a panoramic camera, the panoramic camera comprises a plurality of cameras, monitoring areas corresponding to the cameras are different, and the at least one frame of monitoring image is acquired by any camera in the panoramic camera; or, in the case that the camera is a rotatable camera, different rotation angles of the rotatable camera correspond to different monitoring areas; different types of specified events are bound to different types of monitoring areas;
the determining of the target monitoring area corresponding to the monitoring image includes:
and searching a monitoring area corresponding to the specified event identified from the at least one frame of monitoring image in the corresponding relation between the preset specified event and the monitoring area as a target monitoring area.
2. The method of claim 1, further comprising:
searching a target monitoring area corresponding to the acquisition information of the camera for acquiring the at least one frame of monitoring image in the corresponding relation between the preset acquisition information and the monitoring area; the acquisition information in the corresponding relation is related information of a camera for acquiring a monitoring image;
or,
carrying out image recognition on the background of the monitoring image to obtain the target monitoring area;
or,
determining the image position of the identified specified event in the monitoring image, and searching the monitoring area corresponding to the determined image position as a target monitoring area in the corresponding relation between the preset image position and the monitoring area.
3. The method of claim 2,
under the condition that the camera is a panoramic camera, the acquisition information is an identifier of a camera for acquiring the at least one frame of monitoring image; or,
under the condition that the camera is a rotatable camera, the acquisition information is the rotation angle of the camera when the at least one frame of monitoring image is acquired.
4. The method according to claim 2, wherein the method is applied to a central server; the central server is connected with at least one camera; the central server is connected with the audio playing equipment of each monitoring area;
the acquisition information is a camera identifier of a camera which acquires the at least one frame of monitoring image.
5. The method according to claim 1, wherein a plurality of audio playing devices are deployed in the target monitoring area, and each audio playing device stores voice information of a specified event corresponding to the audio playing device;
the sending an audio playing instruction for the identified specified event to the audio playing device deployed in the target monitoring area includes:
and selecting audio playing equipment corresponding to the identified specified event from the plurality of audio playing equipment deployed in the target monitoring area, and sending an audio playing instruction to the selected audio playing equipment so that the selected audio playing equipment plays the stored voice information.
6. The method of claim 1, wherein the target monitoring area is deployed with an audio playing device;
the sending an audio playing instruction for the identified specified event to the audio playing device deployed in the target monitoring area includes:
sending an audio playing instruction to audio playing equipment in the target monitoring area, wherein the audio playing instruction carries the voice information of the identified specified event, so that the audio playing equipment plays the carried voice information;
or,
sending an audio playing instruction to audio playing equipment in the target monitoring area; and the audio playing instruction carries the identification of the identified specified event, so that the audio playing equipment acquires the voice information of the specified event and plays the voice information based on the identification of the specified event.
7. An event detection system, the system comprising: a camera and an audio playback device;
the camera is used for identifying at least one frame of monitoring image acquired by the camera; if the specified event is identified from the at least one frame of monitoring image, searching a monitoring area corresponding to the specified event identified from the at least one frame of monitoring image in a preset corresponding relation between the specified event and the monitoring area as a target monitoring area, and sending an audio playing instruction aiming at the identified specified event to audio playing equipment deployed in the target monitoring area,
the audio playing device is used for responding to the audio playing instruction and playing voice information corresponding to the identified specified event, and the voice information is used for warning an initiating object of the specified event;
under the condition that the camera is a panoramic camera, the panoramic camera comprises a plurality of cameras, the monitoring areas corresponding to the cameras are different, and the at least one frame of monitoring image is acquired by any camera in the panoramic camera; or, in the case that the camera is a rotatable camera, different rotation angles of the rotatable camera correspond to different monitoring areas; different types of specified events are bound to different types of monitoring areas.
8. An event detection system, the system comprising: the system comprises a camera, a central server and an audio playing device;
the camera is used for collecting at least one frame of monitoring image;
the central server is used for acquiring an identification result for identifying at least one frame of monitoring image acquired by the camera;
if the identification result indicates that the specified event is identified from the at least one frame of monitoring image, searching a monitoring area corresponding to the specified event identified from the at least one frame of monitoring image in a preset corresponding relation between the specified event and the monitoring area to serve as a target monitoring area, and sending an audio playing instruction aiming at the identified specified event to audio playing equipment deployed in the target monitoring area;
the audio playing device is used for responding to the audio playing instruction and playing voice information corresponding to the identified specified event, wherein the voice information is used for warning an initiating object of the specified event;
under the condition that the camera is a panoramic camera, the panoramic camera comprises a plurality of cameras, monitoring areas corresponding to the cameras are different, and the at least one frame of monitoring image is acquired by any camera in the panoramic camera; or, in the case that the camera is a rotatable camera, different rotation angles of the rotatable camera correspond to different monitoring areas, and different types of specified events are bound to different types of monitoring areas.
9. An event detection apparatus, characterized in that the apparatus comprises:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring an identification result for identifying at least one frame of monitoring image acquired by a camera;
a sending unit, configured to search, if the recognition result indicates that a specified event is recognized from the at least one frame of monitored image, a monitored area corresponding to the specified event recognized from the at least one frame of monitored image in a preset corresponding relationship between the specified event and the monitored area, as a target monitored area, and send an audio playing instruction for the recognized specified event to an audio playing device deployed in the target monitored area, so that the audio playing device plays voice information corresponding to the recognized specified event, where the voice information is used to warn an initiating object of the specified event;
under the condition that the camera is a panoramic camera, the panoramic camera comprises a plurality of cameras, monitoring areas corresponding to the cameras are different, and the at least one frame of monitoring image is acquired by any camera in the panoramic camera; or, in the case that the camera is a rotatable camera, different rotation angles of the rotatable camera correspond to different monitoring areas; different types of specified events are bound to different types of monitoring areas.
10. The apparatus according to claim 9, wherein the sending unit is further configured to, when determining a target monitoring area corresponding to the monitoring image, search for the target monitoring area corresponding to the acquisition information that the camera uses to acquire the at least one frame of monitoring image in a preset correspondence between the acquisition information and the monitoring area; the acquisition information in the corresponding relation is related information of a camera for acquiring a monitoring image; or, carrying out image recognition on the background of the monitoring image to obtain the target monitoring area; or determining the image position of the identified specified event in the monitored image, and searching the monitored area corresponding to the determined image position as a target monitored area in the corresponding relation between the preset image position and the monitored area;
under the condition that the camera is a panoramic camera, the acquisition information is an identifier of a camera for acquiring the at least one frame of monitoring image; or, in the case that the camera is a rotatable camera, the acquisition information is a rotation angle of the camera when the at least one frame of monitoring image is acquired; or,
the device is applied to a central server which is connected with at least one camera; the central server is connected with the audio playing equipment of each monitoring area; the acquisition information is a camera identifier of a camera for acquiring the at least one frame of monitoring image;
the target monitoring area is provided with a plurality of audio playing devices, and each audio playing device stores the voice information of the specified event corresponding to the audio playing device; the sending unit is configured to, when sending an audio playing instruction for the identified specified event to the audio playing devices deployed in the target monitoring area, select an audio playing device corresponding to the identified specified event from among the multiple audio playing devices deployed in the target monitoring area, and send the audio playing instruction to the selected audio playing device, so that the selected audio playing device plays the stored voice information; or,
the target monitoring area is provided with an audio playing device; the sending unit is configured to send an audio playing instruction to the audio playing device in the target monitoring area when sending an audio playing instruction for the identified specified event to the audio playing device deployed in the target monitoring area, where the audio playing instruction carries the voice information of the identified specified event, so that the audio playing device plays the carried voice information; or sending an audio playing instruction to the audio playing equipment of the target monitoring area; and the audio playing instruction carries the identified identifier of the specified event, so that the audio playing equipment acquires and plays the voice information of the specified event based on the identifier of the specified event.
11. An electronic device, comprising a readable storage medium and a processor;
wherein the readable storage medium is configured to store machine executable instructions;
the processor configured to read the machine executable instructions on the readable storage medium and execute the instructions to implement the steps of the method of any one of claims 1-6.
CN202110237180.5A 2021-03-03 2021-03-03 Event detection method, device, system and equipment Active CN113112722B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110237180.5A CN113112722B (en) 2021-03-03 2021-03-03 Event detection method, device, system and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110237180.5A CN113112722B (en) 2021-03-03 2021-03-03 Event detection method, device, system and equipment

Publications (2)

Publication Number Publication Date
CN113112722A CN113112722A (en) 2021-07-13
CN113112722B true CN113112722B (en) 2023-03-24

Family

ID=76709767

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110237180.5A Active CN113112722B (en) 2021-03-03 2021-03-03 Event detection method, device, system and equipment

Country Status (1)

Country Link
CN (1) CN113112722B (en)


Also Published As

Publication number Publication date
CN113112722A (en) 2021-07-13


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant