WO2019228218A1 - Monitoring method, device, server, and storage medium - Google Patents

Monitoring method, device, server, and storage medium

Info

Publication number: WO2019228218A1
Application number: PCT/CN2019/087692
Authority: WIPO (PCT)
Prior art keywords: monitoring, information, monitored, static, characteristic information
Other languages: English (en), Chinese (zh)
Inventor: 赵丛君
Original assignee: 菜鸟智能物流控股有限公司
Priority claimed from: CN201810557078.1A (publication CN110543803A)
Application filed by 菜鸟智能物流控股有限公司
Publication of WO2019228218A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • the present application relates to the field of computer technology, and in particular, to a monitoring method, a monitoring device, a server, and a storage medium.
  • Fire exits are usually provided in public places such as office buildings, office parks, shopping plazas, and train stations, and areas are often designated for smoking, non-smoking, and parking. However, non-compliance sometimes occurs in these areas, such as blocked fire passages or smoking in non-smoking areas, which may create hidden safety hazards.
  • Patrols are usually organized by security and property personnel to discourage related problems such as smoking and blocked fire passages.
  • The aforementioned manual patrol method requires patrol personnel to walk to each location to check for problems. If a problem arises in a place that has just been patrolled, it is difficult to detect in time, so the efficiency is low.
  • the embodiment of the present application provides a monitoring method to improve the discovery efficiency of events in the monitoring area.
  • an embodiment of the present application further provides a monitoring device, a server, and a storage medium to ensure the implementation and application of the above system.
  • an embodiment of the present application discloses a monitoring method.
  • The method includes: acquiring static characteristic information of a monitoring object in a monitoring area and motion characteristic information of the monitoring object; and determining a corresponding monitoring event according to the static characteristic information and the motion characteristic information.
  • the static characteristic information of the monitoring object includes at least one of the following: the type of the monitoring object, the size of the monitoring object, the position of the monitoring object, and the position of the associated monitoring object group;
  • The motion characteristic information of the monitoring object includes at least one of the following: displacement information of the monitoring object, and displacement information of the associated monitoring object group.
  • the associated monitoring object group refers to a combination of monitoring objects having an associated relationship.
  • Acquiring the static characteristic information of the monitoring object in the monitoring area and the motion characteristic information of the monitoring object includes: obtaining a monitoring video containing the monitoring object in the monitoring area; recognizing the monitoring video to determine the monitoring object and the static characteristic information of the monitoring object; and identifying the motion characteristic information of the monitored object in a selected time period based on the static characteristic information.
  • the identifying the monitoring video and determining the monitoring object and the static characteristic information of the monitoring object include: determining a monitoring type according to a monitoring area; identifying a monitoring object corresponding to the monitoring type in the monitoring video, And identify the static characteristic information of the monitored object.
  • Identifying the motion characteristic information of the monitored object in a selected time period based on the static characteristic information includes: extracting static feature information corresponding to the monitored object at multiple time points within the selected time period; and comparing the multiple pieces of static feature information of the monitored object to determine the corresponding motion feature information.
  • Comparing the multiple pieces of static feature information of the monitored object to determine the corresponding motion feature information includes at least one of the following steps: comparing the positions of the monitored object in pairs to determine the displacement information of the monitored object; and comparing the positions of the associated monitoring object groups in pairs to determine the displacement information of the associated monitoring object groups, the displacement information including a displacement direction and a displacement distance.
  • Obtaining the monitoring video containing the monitoring object in the monitoring area includes: after detecting the monitoring object in the monitoring area, extracting the monitoring video of the monitoring area in a selected time period, wherein the monitoring area captured by the extracted monitoring video contains the monitoring object.
  • Determining the corresponding monitoring event according to the static characteristic information and the motion characteristic information includes: analyzing a state of the monitoring object according to the static characteristic information and the motion characteristic information; and determining the corresponding monitoring event according to the state of the monitored object.
  • the method further includes: generating alarm information according to the monitoring event, and sending the alarm information.
  • The method further includes: obtaining an alarm processing result, the alarm processing result including an alarm processing result fed back according to the alarm information and/or a reported alarm processing result; and adjusting the identification algorithm for monitoring objects in the monitoring area according to the alarm processing result.
  • the monitoring type includes at least one of the following: a hidden danger category and a violation category.
  • An embodiment of the present application further discloses a monitoring device, including: a feature recognition module, configured to obtain static feature information of a monitored object in a monitored area and motion feature information of the monitored object; and an event analysis module, configured to determine a corresponding monitoring event according to the static feature information and the motion feature information.
  • the static characteristic information of the monitoring object includes at least one of the following: the type of the monitoring object, the size of the monitoring object, the position of the monitoring object, and the position of the associated monitoring object group;
  • The motion characteristic information of the monitoring object includes at least one of the following: displacement information of the monitoring object, and displacement information of the associated monitoring object group.
  • the associated monitoring object group refers to a combination of monitoring objects having an associated relationship.
  • The feature recognition module includes: an acquisition sub-module, configured to acquire a monitoring video containing a monitoring object in a monitoring area; a static identification sub-module, configured to recognize the monitoring video and determine the monitoring object and the static feature information of the monitored object; and a motion recognition sub-module, configured to identify the motion characteristic information of the monitored object in a selected time period according to the static feature information.
  • the static identification sub-module is configured to determine a monitoring type according to a monitoring area; identify a monitoring object corresponding to the monitoring type in a monitoring video, and identify static characteristic information of the monitoring object.
  • The motion recognition sub-module is configured to extract static feature information corresponding to the monitoring object at multiple time points within a selected time period, and to compare the multiple pieces of static feature information of the monitoring object to determine the corresponding motion feature information.
  • The motion recognition sub-module is used to compare the positions of the monitored objects in pairs to determine the displacement information of the monitored objects, and/or to compare the positions of the associated monitoring object groups in pairs to determine the displacement information of the associated monitoring object groups, the displacement information including a displacement direction and a displacement distance.
  • the acquisition submodule is configured to extract monitoring videos of the monitoring area in a selected time period after detecting the monitoring objects in the monitoring area, and the monitoring area captured by the extracted monitoring video includes the monitoring objects.
  • The event analysis module includes: a state analysis sub-module, configured to analyze a state of the monitored object based on the static characteristic information and the motion characteristic information; and an event determination sub-module, configured to determine the corresponding monitoring event according to the state of the monitored object.
  • an alarm module configured to generate alarm information according to the monitoring event, and send the alarm information.
  • It further includes: an adjustment module, configured to obtain an alarm processing result and adjust the identification algorithm for monitoring objects in the monitoring area according to the alarm processing result, the alarm processing result including an alarm processing result fed back according to the alarm information and/or a reported alarm processing result.
  • the monitoring type includes at least one of the following: a hidden danger category and a violation category.
  • An embodiment of the present application further discloses a server, including: a processor; and a memory storing executable code thereon, wherein when the executable code is executed, the processor is caused to execute the monitoring method according to one or more of the embodiments of the present application.
  • An embodiment of the present application further discloses one or more machine-readable media having executable code stored thereon, wherein when the executable code is executed, a processor is caused to execute the monitoring method according to one or more of the embodiments of the present application.
  • the embodiments of the present application include the following advantages:
  • the static characteristic information of the monitored object in the monitoring area and the motion characteristic information of the monitored object may be obtained, and then the corresponding monitoring event is determined according to the static characteristic information and the motion characteristic information.
  • monitoring events occurring in the monitoring area can be detected according to the characteristics of the monitoring objects, so as to detect problems in time and improve the efficiency of detecting monitoring events.
  • FIG. 1 is a flowchart of steps in an embodiment of a monitoring method according to the present application.
  • FIG. 2 is a schematic diagram of a monitoring screen according to an embodiment of the present application.
  • FIGS. 3A and 3B are schematic diagrams of another monitoring screen according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a monitoring system interaction according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram of another monitoring system according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of another monitoring system interaction according to an embodiment of the present application.
  • FIG. 7 is a flowchart of steps in another embodiment of a monitoring method according to the present application.
  • FIG. 8 is a structural block diagram of an embodiment of a monitoring device according to the present application.
  • FIG. 9 is a structural block diagram of another embodiment of a monitoring device according to the present application.
  • FIG. 10 is a schematic structural diagram of a device according to an embodiment of the present application.
  • the area monitored by each camera is called the monitoring area.
  • the monitoring video corresponding to the monitoring area is identified, and the corresponding monitoring event can be determined based on the static characteristic information and motion characteristic information of the identified monitoring object. In this way, hidden dangers, violations and other problems in the monitoring area are automatically identified, and the efficiency of security investigation is improved.
  • Referring to FIG. 1, a flowchart of steps in an embodiment of a monitoring method according to the present application is shown.
  • Step 102 Acquire the static characteristic information of the monitoring object in the monitoring area and the motion characteristic information of the monitoring object.
  • the surveillance video of the surveillance area is captured by the camera, and the surveillance video can be transmitted to the server in real time, or the corresponding surveillance video can be retrieved when the server needs it.
  • the server performs recognition processing on the surveillance video, and can detect the surveillance object in the surveillance area, as well as the static feature information and motion feature information of the surveillance object.
  • the monitoring object refers to an object in the monitoring area that may cause a monitoring event.
  • the type of monitoring event that may occur on the monitoring object is determined, and the monitoring event refers to a situation that may cause security risks and violations.
  • Static feature information refers to the data corresponding to the static characteristics of the monitored object, such as attribute information such as category and size, and spatial information such as location.
  • Motion feature information refers to the characteristic information of the monitored object in motion, such as the monitored object's movement direction and movement displacement. If the monitoring object is in a stationary state, the corresponding motion feature information may be zero features, such as a zero movement displacement.
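As an illustrative sketch only (the field names are assumptions, not drawn from the application), the two kinds of feature information described above could be represented as simple records:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class StaticFeatures:
    """Static characteristics of a monitored object (illustrative fields)."""
    category: str                      # e.g. "carton", "person", "cigarette"
    size: Tuple[float, float]          # width, height in the frame
    position: Tuple[float, float]      # centre coordinates in the frame

@dataclass
class MotionFeatures:
    """Motion characteristics derived from successive static features."""
    displacement: float                # total displacement distance
    direction: Optional[float] = None  # None for a stationary (zero-feature) object
```

A stationary carton would then carry a `MotionFeatures(0.0)` record, matching the zero-feature case described above.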
  • a cardboard box placed on the fire prevention aisle can be identified and used as a monitoring object to obtain corresponding static characteristic information and motion characteristic information.
  • In the two monitoring screens of the non-smoking area, a smoker and the cigarette in the smoker's hand can be identified; the person and the cigarette are taken as the monitoring objects, and the static feature information and motion feature information of the person and the cigarette are determined respectively.
  • the monitoring object in the embodiment of the present application includes an associated monitoring object group.
  • The associated monitoring object group refers to a combination of monitoring objects with an associated relationship. For smoking, for example, a person's hand and the cigarette in the hand can be used as an associated monitoring object group, or a person's mouth and the cigarette can constitute an associated monitoring object group.
  • the static characteristic information of the monitoring object includes at least one of the following: the type of the monitoring object, the size of the monitoring object, the position of the monitoring object, and the position of the associated monitoring object group.
  • The category of the monitoring object can be pre-configured and is related to the monitoring type. For example, for the fire passage blocking type, the category of the monitoring object includes items, and for the smoking type, the categories of the monitoring object include people and cigarettes.
  • the size of the monitored object refers to the size data of the monitored object.
  • the required size information can be determined according to the category, such as the volume and floor area of the item, as well as the height and weight of the person.
  • the position of the monitoring object refers to the position of the monitoring object in the monitoring area, and based on the position, the movement characteristics of the monitoring object can be determined.
  • the position of the associated monitoring object group refers to the position of the associated monitoring object group in the monitoring area.
  • the movement characteristic information of the monitored object includes at least one of the following: displacement information of the monitored object and displacement information associated with the group of monitored objects; wherein the displacement information includes a displacement direction and a displacement distance.
  • the displacement information of the monitoring object may refer to information such as the direction and distance of the monitoring object moving within the monitoring area; the displacement information of the related monitoring object group may refer to the direction and distance of the related monitoring object group moving within the monitoring area.
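The pairwise position comparison that yields a displacement direction and distance can be sketched as follows (a minimal illustration; the coordinate convention and function name are assumptions):

```python
import math

def displacement(p1, p2):
    """Compare two positions of a monitored object, taken at two time
    points, and return (displacement distance, direction in degrees)."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    distance = math.hypot(dx, dy)
    # Direction measured counter-clockwise from the positive x-axis;
    # None when the object has not moved (the zero-feature case).
    direction = math.degrees(math.atan2(dy, dx)) if distance > 0 else None
    return distance, direction

# Pairwise comparison over positions sampled at successive time points.
positions = [(0, 0), (3, 4), (3, 4)]
steps = [displacement(a, b) for a, b in zip(positions, positions[1:])]
```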
  • The group of associated monitoring objects may include a main object and one or more sub-objects, and the main object is mainly used to determine the displacement information. For example, in a scene of smoking in a non-smoking area, the cigarette may at times be in a person's hand and at other times at the person's mouth. Therefore, for a person within a set time period, such as 3 minutes, the displacement information of the associated monitoring object group can be judged based on the combination of the cigarette and the mouth and the combination of the cigarette and the hand.
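The main-object convention could look like the following sketch, where the person anchors the group's position (the `Detection` record and category names are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Detection:
    category: str               # e.g. "person", "cigarette"
    position: tuple             # (x, y) centre in the monitoring frame

def group_position(detections, main_category="person"):
    """Position of an associated monitoring object group, taken from its
    main object; sub-objects such as the cigarette do not move the group."""
    for d in detections:
        if d.category == main_category:
            return d.position
    return None
```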
  • Step 104 Determine a corresponding monitoring event according to the static characteristic information and the motion characteristic information.
  • the status of the monitored object can be analyzed, and the corresponding monitored event is determined after the status of the monitored object meets the conditions.
  • For example, if the static feature information of the carton remains unchanged and the motion feature information is a zero feature, the carton is identified as not having moved within 5 minutes, and a monitoring event that the fire passage is occupied is determined.
  • If, in the 3-minute surveillance video, both the static characteristic information and the motion characteristic information of the person and the cigarette change, the person's smoking behavior is identified, and the monitoring event of smoking in the non-smoking area is determined. Therefore, without manual patrol, monitoring events can be automatically identified, improving processing efficiency.
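The two worked examples (a carton stationary for 5 minutes, a person/cigarette group moving for 3 minutes) can be condensed into one hedged rule sketch; the time thresholds come from the examples, while the category labels and event names are assumptions:

```python
def classify_event(category, displacements, minutes):
    """Map an object's category, its per-sample displacements, and the
    observation window length to a monitoring event (or None)."""
    stationary = all(d == 0 for d in displacements)
    if category == "item" and stationary and minutes >= 5:
        return "fire_passage_occupied"
    if category == "person_cigarette" and not stationary and minutes >= 3:
        return "smoking_in_non_smoking_area"
    return None
```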
  • The static characteristic information of the monitored object in the monitoring area and the motion characteristic information of the monitored object are obtained, and the corresponding monitoring event is then determined according to the static characteristic information and the motion characteristic information.
  • Based on the characteristics of the monitoring object, monitoring events occurring in the monitoring area can be detected, so that problems are found in time and the efficiency of monitoring event discovery is improved.
  • more cameras can usually be arranged in the monitoring area.
  • distributed construction can be used for multi-level identification and screening to determine monitoring events.
  • A local server and a network server are set up, and the local server deployed at the monitoring site is used for preliminary detection.
  • The surveillance video containing a monitoring object in the monitoring area is screened out and then reported to the remote network server.
  • The static characteristic information and motion characteristic information of the monitoring object are identified through the network server, and the corresponding monitoring event is then determined based on the static feature information and the motion feature information, thereby quickly discovering the monitoring event.
  • Monitoring events can also be reported, so that monitoring sites can quickly learn about monitoring events and promptly issue reminders to ensure order and safety in public places. A schematic diagram of a monitoring system according to an embodiment of the present application is shown.
  • Referring to FIG. 4, a schematic diagram of interaction of a monitoring system according to an embodiment of the present application is shown.
  • the monitoring system includes: a network server 10, a local server 20, and a camera 30.
  • the local server 20 can be set at a monitoring place, and one local server 20 can be connected to multiple cameras 30 in the monitoring place, thereby realizing monitoring areas in the monitoring place.
  • the web server 10 can be set at the remote end of the surveillance site, and one web server can be connected to the local servers in multiple surveillance sites to provide identification and alarm services for multiple surveillance sites.
  • The problem area can be quickly screened by the local server installed at the monitoring site, and the network server identifies whether a problem has actually occurred; a cloud-based network server can provide services to multiple public places, improving resource utilization.
  • Step 402 The camera captures the monitoring area and uploads the corresponding monitoring video to the local server.
  • Step 404 The local server identifies a monitoring object in the monitoring video.
  • monitoring objects can be identified for different types of monitoring.
  • For monitoring of fire passage blockage, the monitoring objects include items on the fire prevention aisle; for monitoring of non-smoking areas, the monitoring objects include people and cigarettes.
  • the local server 20 identifies the monitoring object in the monitoring video.
  • the monitoring types include at least one of the following: hidden safety hazards and violations.
  • The hidden safety hazard category refers to categories that cause safety hazards. A safety hazard refers to a problem, defect, failure, or warning sign arising in daily production processes or social activities due to human factors, changes in materials, or environmental impacts, such as fire passages being occupied or blocked, fire hydrants being covered, or sewer covers being lost.
  • The violation category refers to categories of rule violations, such as smoking in a non-smoking area. Event categories can also be subdivided according to the differences among hidden safety hazards and violations.
  • The hidden safety hazard categories include at least one of the following: fire passages, fire appliances, sewage wells, etc., which can be divided according to the hidden danger problems; the violation categories include at least one of the following: roads, parking, smoking, electrical appliances, etc., which can be divided according to the relevant regulations.
  • When the local server recognizes the surveillance video, it can obtain each frame of the video for recognition processing, or extract several images from the surveillance video for recognition processing. For the monitoring area photographed by each camera, such as roads, parking lots, or fire passages, identifiers such as a camera identifier or an area identifier can be set for the monitoring area to facilitate determining the monitoring area. The monitoring type corresponding to the monitoring events that may occur in the monitoring area can be configured, and the monitoring objects corresponding to each monitoring type can be set. Therefore, the correspondence between the monitoring area and the monitoring type can be configured for each camera, so that recognition processing of the corresponding monitoring type is performed on the monitoring video of each monitoring area.
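The camera-to-area-to-monitoring-type correspondence described above might be configured along these lines (identifiers and structure are illustrative assumptions):

```python
# Hypothetical configuration mapping each camera's monitoring area to the
# monitoring types checked there and the objects recognised for each type.
AREA_CONFIG = {
    "camera_01": {
        "area_id": "fire_passage_east",
        "monitoring_types": {
            "hidden_danger": ["item"],             # blocked fire passage
        },
    },
    "camera_02": {
        "area_id": "lobby_non_smoking",
        "monitoring_types": {
            "violation": ["person", "cigarette"],  # smoking in a non-smoking area
        },
    },
}

def objects_for(camera_id):
    """All object categories the recognition step should look for in the
    area covered by this camera."""
    cfg = AREA_CONFIG.get(camera_id, {})
    return sorted({obj for objs in cfg.get("monitoring_types", {}).values()
                   for obj in objs})
```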
  • Step 406 The local server reports to the network server according to the identified monitoring object.
  • The local server can trigger monitoring alarms based on the identified monitoring objects, for example, generating and reporting monitoring alarm information based on the monitoring area, monitoring type, and monitoring objects, or determining and reporting the monitoring video containing the monitoring objects, so that the network server can perform identification processing to confirm whether there is a monitoring event.
  • Through the local server's initial inspection, monitoring objects in the monitoring area can be quickly identified and reported; the network server then re-examines the corresponding surveillance video to determine whether a monitoring event exists, realizing automatic detection of problems such as hidden security risks and violations. Monitoring events can also be reported to the processing end for handling, which improves processing efficiency.
  • Step 408 The network server obtains the surveillance video including the surveillance object in the surveillance area.
  • the network server 10 may obtain a monitoring video including a monitoring object in the monitoring area.
  • The selected time period required for the monitoring video can be determined according to the identified monitoring object, and the monitoring video of the monitoring area within the selected time period is then extracted, where the monitoring area captured by the extracted monitoring video contains the monitoring object. For example, if a monitoring object is detected in the surveillance video at 9:15, a 10-minute target video can be extracted starting from 9:13; if a monitoring object is detected in the surveillance video at 10 o'clock, a 1-minute video can be extracted every 1 minute, for a total of 5 videos, and so on.
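The two extraction policies in the example (a 10-minute clip starting 2 minutes before a 9:15 detection, or five 1-minute clips at 1-minute intervals) can be sketched as follows; the function names and defaults are assumptions:

```python
from datetime import datetime, timedelta

def clip_window(detected_at, lead=timedelta(minutes=2), length=timedelta(minutes=10)):
    """Single-clip policy: start shortly before the detection time
    (a 9:15 detection yields a clip starting at 9:13)."""
    start = detected_at - lead
    return start, start + length

def sampled_clips(detected_at, every=timedelta(minutes=1),
                  clip_len=timedelta(minutes=1), count=5):
    """Sampling policy: a 1-minute clip every minute, five in total."""
    return [(detected_at + i * every, detected_at + i * every + clip_len)
            for i in range(count)]
```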
  • Step 410 The network server identifies the surveillance video, and determines static feature information and motion feature information of the monitored object.
  • the network server can perform a re-check of the monitoring area.
  • By recognizing the monitoring video, the network server can identify the monitoring object, determine the static characteristic information of the monitoring object, and determine the motion characteristic information based on the static characteristic information.
  • Different monitoring types may use different recognition algorithms, and the surveillance video is then recognized and processed according to the corresponding algorithm. For example, an item on a passage such as a fire prevention passage can be recognized as a monitoring object, and static characteristics of the object, such as its dimensions and its location on the fire passage, are identified.
  • Step 412 The network server determines a corresponding monitoring event according to the static characteristic information and the motion characteristic information.
  • The network server can analyze the static characteristic information and motion characteristic information of the monitored object to determine whether the movement status of the monitored object meets the conditions of the monitoring category, such as whether items on the fire passage have not moved within 10 minutes, or whether a person has smoked in a non-smoking area. After determining that the motion status of the monitored object meets the conditions of the monitoring category, a corresponding monitoring event is generated.
  • Alarm information refers to information generated according to the monitoring event, indicating that an alarm event exists in the monitoring area.
  • The alarm information can indicate the monitoring area where the alarm event occurs and the corresponding monitoring video, among other information, making it easy to verify whether there is a problem and improving processing efficiency.
  • the receiving end of the alarm information can be set according to requirements, such as transmitting to the security end of the monitoring place to notify the security personnel to verify the problem, or transmitting to the broadcasting end of the monitoring place to point out the problem through broadcasting, so that the alarm event can be resolved as soon as possible.
  • the above uses the camera to transmit the monitoring video to the local server as an example.
  • the monitoring video captured by the camera can be stored in a storage device.
  • When the local server and the network server need the monitoring video, it is retrieved from the storage device.
  • Referring to FIG. 5, a schematic diagram of another monitoring system according to an embodiment of the present application is shown.
  • Referring to FIG. 6, a schematic interaction diagram of another monitoring system according to an embodiment of the present application is shown.
  • the monitoring system further includes: a storage device 40 and a processing end 50.
  • The storage device can receive and store monitoring videos from the camera 30; the storage device 40 may include a device such as a switch to facilitate interaction with the camera, server, and other devices.
  • the processing terminal 50 may be set according to requirements, such as a security terminal, a broadcasting terminal, and the like.
  • FIG. 5 uses a monitoring site connected to a network server as an example, which may include multiple cameras, such as camera 301, camera 302, ... camera 30n, where n is a positive integer.
  • the local server and the network server can both identify the surveillance video by using an identification algorithm.
  • the identification algorithm can be constructed based on a deep learning model.
  • The recognition algorithm of the local server may be called a first recognition algorithm, and the recognition algorithm of the network server a second recognition algorithm. After recognition results such as the corresponding event are obtained according to a recognition algorithm, the algorithm can also be adjusted based on the recognition results, so as to continuously optimize the algorithm and improve processing efficiency and accuracy.
  • Step 602 The camera captures the surveillance video and uploads it to the storage device.
  • the surveillance video can be transmitted between the camera and the storage device in real time, and the storage device associates the identification information of each surveillance area with the surveillance video, so as to find the surveillance video shot by different cameras.
  • a network video recorder (NVR) can be used as the storage device.
  • the NVR can receive, over the network, a digital video stream transmitted by a camera such as a network camera (IPC), and store and manage the digital video stream.
  • Step 604 The local server obtains the monitoring video from the storage device.
  • the local server can obtain the video stream of the surveillance video from the camera in multiple ways, for example by polling or in real time. The polling method can be used for non-real-time events, such as the detection of violation events; the real-time method can be used for real-time events, such as the detection of hidden dangers. The time and length of the surveillance video to be acquired can be determined according to parameters such as the monitoring type, and identification operations such as decoding and analysis can be performed after acquisition.
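  • the polling-versus-real-time choice above can be pictured as a simple lookup; the mode table, function name, and interval values in the sketch below are illustrative assumptions, not part of the described system.

```python
# Hypothetical sketch: choosing how the local server pulls video.
# Violation-type events tolerate delay, so polling suffices; hidden-danger
# events (e.g. a blocked fire passage) are fetched in real time.
MONITORING_MODES = {
    "violation": {"mode": "polling", "poll_interval_s": 30},
    "hidden_danger": {"mode": "realtime", "poll_interval_s": 0},
}

def choose_acquisition(monitoring_type: str) -> dict:
    """Return the video-acquisition settings for a monitoring type."""
    try:
        return MONITORING_MODES[monitoring_type]
    except KeyError:
        raise ValueError(f"unknown monitoring type: {monitoring_type}")
```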
  • the local server may be constructed using an open platform based on edge computing, which may include a control device and an edge computing device.
  • the control device may provide algorithm scheduling for the edge computing device, and feedback the recognition result of the algorithm to the cloud server.
  • the cloud server can optimize the algorithm based on the recognition results.
  • the edge computing device provides the execution environment of the algorithm. If a deep learning algorithm is used, the control device can schedule the required deep learning algorithm, and the edge computing device performs the operation of the corresponding deep learning algorithm.
  • edge computing refers to an open platform that provides network, computing, storage, and application capabilities on the side close to the source of the data, offering services nearby. It can produce faster network service responses, meeting the industry's basic needs in real-time business, application intelligence, security, and privacy protection.
  • Step 606 The local server identifies a monitoring object in the monitoring video.
  • the local server can determine the monitoring type to which the surveillance video to be identified belongs, and then identify the surveillance video according to that monitoring type, for example by extracting images from the surveillance video and performing image recognition to obtain the monitoring objects it contains, such as items on fire escapes or a cigarette in a person's hand.
  • the local server determines the surveillance type to which the surveillance area belongs, calls the first recognition algorithm corresponding to that surveillance type, and then uses the first recognition algorithm to perform the corresponding recognition processing on the surveillance video to determine the monitoring objects in the monitoring area.
  • the control device can learn the monitoring area of the camera and determine the monitoring type of the monitoring area in the local server.
  • the edge computing device includes recognition algorithms corresponding to the various monitoring types, so that the control device can notify the edge computing device of the recognition algorithm to use for the surveillance video; the edge computing device then invokes that algorithm to identify the surveillance video and feeds the recognition result back to the control device.
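  • the scheduling described above, where the control device selects an algorithm by monitoring type and the edge computing device executes it, might look like the following sketch; all class, method, and camera names are assumptions for illustration only.

```python
class EdgeComputingDevice:
    """Holds one recognition algorithm per monitoring type and runs it."""

    def __init__(self):
        self.algorithms = {}  # monitoring type -> callable(video) -> result

    def register(self, monitoring_type, algorithm):
        self.algorithms[monitoring_type] = algorithm

    def recognize(self, monitoring_type, video):
        return self.algorithms[monitoring_type](video)


class ControlDevice:
    """Knows each camera's monitoring type and dispatches to the edge device."""

    def __init__(self, edge, camera_types):
        self.edge = edge
        self.camera_types = camera_types  # camera id -> monitoring type

    def identify(self, camera_id, video):
        monitoring_type = self.camera_types[camera_id]
        # The edge device executes the algorithm; the result returns here and
        # could then be forwarded to the cloud server for optimization.
        return self.edge.recognize(monitoring_type, video)
```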
  • Step 608 The local server reports to the network server according to the identified monitoring object.
  • the local server can trigger a monitoring alarm based on the identified monitoring objects, for example by generating and reporting monitoring alarm information based on the monitoring area, monitoring type, and monitoring objects, or by determining and reporting the monitoring video that contains the monitoring objects, so that the network server can perform identification processing to confirm whether a monitoring event exists.
  • Step 610 The network server obtains a surveillance video including a surveillance object in the surveillance area.
  • the network server can obtain the surveillance video containing the surveillance object in the surveillance area according to the identified surveillance object: after the surveillance object is detected in the surveillance area, the surveillance video of that area in the selected time period is extracted, and the monitoring area captured by the extracted video contains the monitoring object.
  • Step 612 The network server recognizes the monitoring video, determines the monitoring object and the static characteristic information of the monitoring object, and identifies the motion characteristic information of the monitoring object in the selected time period based on the static characteristic information.
  • the network server can perform a re-inspection of the surveillance area.
  • the surveillance object in the surveillance video can be identified and the static characteristic information of the surveillance object can be determined.
  • static feature information of the surveillance object, such as its size and position, can be calculated based on information such as the ratio of the monitored object to reference objects in the monitoring area, such as corridors and the ground.
  • the motion characteristic information of the monitored object in the selected time period can then be identified: multiple time points are determined within the selected time period, for example one every 1 or 3 seconds, and the static features of the monitored object at those time points are compared to obtain its motion characteristic information, such as using the position difference between two adjacent time points to determine the displacement direction, distance, and other information.
  • the calculation of the related monitoring object group is similar, and the displacement information of the related monitoring object group can be determined according to the position difference of the related monitoring object group where the main object is located at two neighboring time points.
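  • the displacement calculation above, comparing positions at adjacent time points to obtain a direction and distance, can be sketched as follows; the function names and the 2-D position representation are assumptions.

```python
import math

def displacement(p1, p2):
    """Displacement direction (radians) and distance between two positions."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    return math.atan2(dy, dx), math.hypot(dx, dy)

def motion_features(positions):
    """Compare positions at each pair of adjacent time points."""
    return [displacement(a, b) for a, b in zip(positions, positions[1:])]
```

  • the same pairwise comparison applies to an associated monitoring object group, using the group's position at each time point instead of a single object's.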
  • Step 614 The network server determines a corresponding monitoring event according to the static characteristic information and the motion characteristic information.
  • the network server can analyze the static characteristic information and motion characteristic information of the monitored object to determine whether its movement status meets the conditions of the monitoring category, for example whether an item on the fire passage has not moved within 10 minutes, or whether a person is smoking in a non-smoking area. After determining that the motion status of the monitored object meets the conditions of the monitoring category, a corresponding monitoring event is generated.
  • Step 616 The network server generates alarm information according to the monitoring event, and sends the alarm information.
  • the network server can generate alarm information for the monitoring event and then send the alarm information.
  • Step 618 The processing end sends an alarm processing result.
  • Step 620 The network server adjusts the recognition algorithm of the monitored objects in the monitored area according to the alarm processing result.
  • the alarm processing result includes: an alarm processing result fed back according to the alarm information and / or a reported alarm processing result.
  • the alarm processing result fed back according to the alarm information refers to an alarm processing result fed back after receiving the alarm information and processing.
  • the processing end can process the alarm according to the alarm information and generate a corresponding alarm processing result, such as whether the event exists or not, and then feed the alarm processing result back to the web server, which can adjust the invoked recognition algorithm according to the alarm processing result.
  • the first recognition algorithm and the second recognition algorithm can be adjusted according to the alarm processing result, and the first recognition algorithm can also be adjusted according to the result of the second recognition algorithm.
  • the reported alarm processing result refers to an alarm processing result that is directly detected and processed without receiving alarm information.
  • the server may fail to identify some hidden safety hazards and violation events in time; these may instead be found through security patrols, property patrols, or user reports, which can also generate alarm processing results, and the identification algorithm can then be optimized based on those results.
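  • one hedged way to picture this feedback loop is to collect confirmed events and false alarms as labelled samples for later algorithm adjustment; the class below and its statistics are purely illustrative, not the patent's mechanism.

```python
class FeedbackCollector:
    """Accumulates alarm processing results as labelled retraining samples."""

    def __init__(self):
        self.samples = []  # (video_id, label) pairs; 1 = event confirmed

    def record(self, video_id, event_confirmed: bool):
        # "Event exists" confirmations become positive samples,
        # false alarms become negative samples.
        self.samples.append((video_id, 1 if event_confirmed else 0))

    def false_alarm_rate(self):
        """Fraction of alarms the processing end judged to be non-events."""
        if not self.samples:
            return 0.0
        negatives = sum(1 for _, label in self.samples if label == 0)
        return negatives / len(self.samples)
```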
  • FIG. 7 shows a flowchart of the steps of another embodiment of a monitoring method according to the present application.
  • Step 702 Obtain a monitoring video including a monitoring object in a monitoring area.
  • an initial inspection of the surveillance video by the local server can identify the surveillance objects in the surveillance area, so that the network server can obtain the surveillance video containing those objects for detection, making the detection more targeted.
  • acquiring the monitoring video including the monitoring object in the monitoring area includes: after detecting the monitoring object in the monitoring area, extracting the monitoring video of the monitoring area in a selected time period, wherein the monitoring area captured by the extracted video contains the monitoring object. The time information of the monitoring object appearing in the monitoring area can be determined, the selected time period can be determined based on that time information, and the monitoring video of the monitoring area in the selected time period can then be extracted, so that the monitoring area captured by the extracted video includes the monitoring object.
  • the local server can report after detecting the monitoring object for the first time, in which case the network server extracts the monitoring video of the monitoring area in the selected time period accordingly; or the local server can report after detecting the monitoring object multiple times, in which case the selected time period is determined based on the multiple detection results before the surveillance video is extracted.
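  • deriving the selected time period from the detection results, whether a single first detection or multiple detections, can be sketched as below; the padding values and function name are assumptions, not values from the embodiments.

```python
def selected_time_period(detection_times_s, pre_s=30, post_s=60):
    """Window from pre_s before the earliest detection to post_s after the latest.

    Works for a single first-detection timestamp or for multiple detections;
    the start is clamped at 0 (the beginning of the recording).
    """
    start = max(min(detection_times_s) - pre_s, 0)
    end = max(detection_times_s) + post_s
    return start, end
```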
  • Step 704 The monitoring video is identified, and a monitoring object and static characteristic information of the monitoring object are determined.
  • the server can identify the surveillance video, wherein an identification algorithm can be determined according to the type of surveillance, and then the surveillance video is identified and processed according to the identification algorithm to identify the surveillance object and the static characteristic information of the surveillance object.
  • identifying the monitoring video and determining the monitoring object and the static characteristic information of the monitoring object includes: determining a monitoring type according to a monitoring area; identifying a monitoring object corresponding to the monitoring type in the monitoring video, and identifying Static characteristic information of the monitored object.
  • the monitoring objects that need to be identified in the monitoring area, such as items on fire passages or people and smoke in non-smoking areas, can be identified, along with the static characteristics of each monitoring object, such as size and position calculated relative to reference objects in the monitoring area.
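  • the ratio-based static feature calculation mentioned above might be sketched as scaling pixel measurements by a reference object of known real-world size (for example, a corridor of known width); the function name and the simple linear model are assumptions.

```python
def estimate_size_m(object_px: float, reference_px: float, reference_m: float) -> float:
    """Estimate an object's real-world extent from its pixel extent,
    using a reference object whose real size is known."""
    if reference_px <= 0:
        raise ValueError("reference extent must be positive")
    return object_px * (reference_m / reference_px)
```

  • for example, an object spanning 50 pixels next to a 2 m corridor that spans 200 pixels would be estimated at 0.5 m.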
  • Step 706 Identify motion feature information of the monitored object in a selected time period based on the static feature information.
  • the multiple static feature information of the monitored object can be compared to obtain the movement characteristics of the monitored object, so as to identify the movement characteristic information of the monitored object in the selected time period.
  • identifying the motion characteristic information of the monitored object in the selected time period based on the static characteristic information includes: extracting the static characteristic information corresponding to the monitored object at multiple time points in the selected time period, and comparing the multiple pieces of static feature information of the monitored object to determine the corresponding motion feature information. Multiple time points can be determined within the selected time period, for example one every 1 or 3 seconds, and the static characteristics of the monitored object at those time points are compared to obtain its motion characteristic information, such as using the position difference between two adjacent time points to determine the displacement direction and distance.
  • the calculation of the related monitoring object group is similar.
  • the displacement information of the related monitoring object group can be determined according to the position difference of the related monitoring object group where the main object is located at two neighboring time points.
  • comparing a plurality of pieces of static feature information of the monitored object to determine the corresponding motion feature information includes at least one of the following steps: comparing the positions of the monitored object pair by pair to determine the displacement information of the monitored object; and comparing the positions of the associated monitoring object group pair by pair to determine the displacement information of the associated monitoring object group, the displacement information including a displacement direction and a displacement distance.
  • the positions of the monitored object at two adjacent time points can be compared. For example, if a time point is determined every 3 seconds, the positions of the monitored object at 1 second and 4 seconds can be compared, then the positions at 4 seconds and 7 seconds, and so on, to obtain the displacement information of the monitored object at this time interval.
  • similarly, the positions of the associated monitoring object group at two adjacent time points can be compared. For example, if a time point is determined every 4 seconds, the positions at 1 second and 5 seconds can be compared, then the positions at 5 seconds and 9 seconds, and so on, to obtain the displacement information of the associated monitoring object group at that time interval. The same associated monitoring object group can be compared, or the associated monitoring object group in which the main object is located can be compared, to determine the displacement information of the group. FIGS. 3A and 3B show an example: they are monitoring screens corresponding to two time points, in which the combination of a human hand and smoke is used as the associated monitoring object group. From the positions of the hand-and-smoke combination at these two time points, the corresponding displacement information can be determined, so that a pairwise comparison of its positions at multiple adjacent time points yields the displacement information of the combination over the selected time period.
  • Step 708 Determine a corresponding monitoring event according to the static characteristic information and the motion characteristic information.
  • the state refers to the form displayed by the monitoring object, and may include a movement state, a change state, a physical state, and the like: a movement state such as moving or stationary, a change state such as changed or unchanged, and a physical state such as physical size.
  • the corresponding conditions include motion conditions, change conditions, physical conditions, and the like. Different monitoring areas need different conditions to be detected, and the judgment can be based on one or more conditions.
  • for example, an item whose size is larger than a size threshold meets the physical condition, and an item that remains stationary for longer than a time threshold meets the motion condition; when it is determined that an item meets both the physical condition and the motion condition, a monitoring event can be generated.
  • the carton is identified as the monitoring object.
  • the volume of the carton is 1 cubic meter, which is greater than 0.5 cubic meters, so it is confirmed that the physical condition is met; the carton does not move within 5 minutes and is in a stationary state, so it is confirmed that the motion condition is met. It is therefore judged that the carton meets both the physical condition and the motion condition, and a monitoring event of fire passage blocking can be generated.
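  • the carton example above reduces to two threshold checks; the thresholds below come from the example itself (0.5 cubic meters, 5 minutes), while the function name and return value are illustrative assumptions.

```python
# Thresholds taken from the worked example above.
VOLUME_THRESHOLD_M3 = 0.5
STATIONARY_THRESHOLD_S = 5 * 60

def fire_passage_event(volume_m3, stationary_s):
    """Generate a blockage event when both the physical and motion conditions hold."""
    physical_ok = volume_m3 > VOLUME_THRESHOLD_M3   # physical condition
    motion_ok = stationary_s >= STATIONARY_THRESHOLD_S  # motion condition
    return "fire_passage_blocked" if physical_ok and motion_ok else None
```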
  • a smoking action is determined based on the combination of a human hand and smoke, and the smoking action can be confirmed after one or more repeated displacements of that combination, so that satisfaction of the motion condition is determined based on the smoking action.
  • with the hand-and-smoke combination as the associated monitoring object group, the repeated raising and lowering of the combination is determined according to its displacement information, from which the human smoking action can be identified, and the motion condition is confirmed. During the smoking action, the volume of the smoke gradually decreases, and the change condition is confirmed. With both the motion condition and the change condition satisfied, a monitoring event of smoking in the non-smoking area can be generated.
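  • a hedged sketch of this smoking check: repeated raise-and-lower displacement of the hand-and-smoke group satisfies the motion condition, and a non-increasing smoke volume satisfies the change condition. The sign convention (positive = upward), the cycle threshold, and all names are assumptions for illustration.

```python
def smoking_event(vertical_displacements, smoke_volumes, min_cycles=2):
    """Return a smoking event when the motion and change conditions both hold."""
    # A raise (+) immediately followed by a lower (-) counts as one lift cycle.
    cycles = sum(
        1
        for a, b in zip(vertical_displacements, vertical_displacements[1:])
        if a > 0 and b < 0
    )
    motion_ok = cycles >= min_cycles

    # The smoke volume should shrink over the selected time period.
    change_ok = len(smoke_volumes) >= 2 and all(
        later <= earlier
        for earlier, later in zip(smoke_volumes, smoke_volumes[1:])
    )
    return "smoking_in_no_smoking_area" if motion_ok and change_ok else None
```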
  • Step 710 Generate alarm information according to the monitoring event, and send the alarm information.
  • an alarm monitoring video can be extracted from the target video, alarm information is then generated using parameters such as the alarm monitoring video, monitoring type, and monitoring area, and the alarm information is sent to the processing end for processing.
  • Step 712 Obtain an alarm processing result, and adjust an identification algorithm for a monitoring object in the monitoring area according to the alarm processing result.
  • the processing end can process the alarm according to the alarm information and generate a corresponding alarm processing result, such as whether the event exists or not, and then feed the alarm processing result back to the web server, which can adjust the invoked recognition algorithm according to the alarm processing result.
  • the first recognition algorithm and the second recognition algorithm can be adjusted according to the alarm processing result, and the first recognition algorithm can also be adjusted according to the result of the second recognition algorithm.
  • the server may fail to identify some hidden safety hazards and violation events in time; these may instead be found through security patrols, property patrols, or user reports, which can also generate alarm processing results, and the identification algorithm can then be optimized based on those results.
  • technologies such as edge computing can be applied to pre-process surveillance videos, filtering out invalid videos and reporting those that may contain problems; the web server then performs further identification of the surveillance videos, identifies early-warning events, and reports them.
  • the web server can also use artificial intelligence technologies, such as robots in place of manual patrols, to identify potential safety hazards.
  • the recognition algorithm is constructed using mathematical models such as deep learning, and can be continuously adjusted during the recognition process to optimize the business process.
  • the embodiment of the present application is described using a public place as a monitoring place. In fact, it can also be applied to the monitoring of private places such as homes.
  • a monitoring place equipped with a camera can be monitored so that alarm events are detected in time. For example, when the user's home enters unmanned mode, anti-theft monitoring and monitoring of the kitchen status can be performed, so that when a problem is found, the processing end of the home Internet of Things can be notified to respond, such as sounding an alarm or turning off the kitchen fire and appliances, and the user's mobile device can be notified so that the user is kept up to date with events occurring at home.
  • this embodiment further provides a monitoring device, which is applied to a network server.
  • FIG. 8 shows a structural block diagram of an embodiment of a monitoring device according to the present application, which may specifically include the following modules:
  • a feature recognition module 802 is configured to obtain static feature information of a monitored object in the monitored area and motion feature information of the monitored object.
  • An event analysis module 804 is configured to determine a corresponding monitoring event according to the static feature information and the motion feature information.
  • the static characteristic information of the monitored object in the monitoring area and the motion characteristic information of the monitored object are obtained, and the corresponding monitoring event is then determined according to the static characteristic information and the motion characteristic information. Identification is performed based on the characteristics of the monitoring object, so that monitoring events occurring in the monitoring area can be detected, problems can be found in time, and the efficiency of monitoring event discovery is improved.
  • FIG. 9 shows a structural block diagram of another embodiment of a monitoring device according to the present application, which may specifically include the following modules:
  • a feature recognition module 802 is configured to obtain static feature information of a monitored object in the monitored area and motion feature information of the monitored object.
  • An event analysis module 804 is configured to determine a corresponding monitoring event according to the static characteristic information and the motion characteristic information.
  • the alarm module 806 is configured to generate alarm information according to the monitoring event, and send the alarm information.
  • An adjustment module 808, configured to obtain an alarm processing result; and adjust an identification algorithm for a monitoring object in the monitoring area according to the alarm processing result, and the alarm processing result includes an alarm processing result and / or an alarm feedback result according to the alarm information Reported alarm processing results.
  • the static characteristic information of the monitoring object includes at least one of the following: the type of the monitoring object, the size of the monitoring object, the position of the monitoring object, and the position of the associated monitoring object group; the motion characteristic information of the monitoring object includes at least one of the following: the displacement information of the monitoring object and the displacement information of the associated monitoring object group.
  • the associated monitoring object group refers to a combination of monitoring objects having an associated relationship.
  • the feature recognition module 802 includes: an acquisition sub-module 8022, a static recognition sub-module 8024, and a motion recognition sub-module 8026, wherein:
  • the acquisition submodule 8022 is configured to acquire a surveillance video including a surveillance object in a surveillance area.
  • the static identification sub-module 8024 is configured to identify the surveillance video and determine a surveillance object and static characteristic information of the surveillance object.
  • a motion recognition sub-module 8026 is configured to identify motion feature information of the monitored object in a selected time period according to the static feature information.
  • the static identification sub-module 8024 is configured to determine a monitoring type according to a monitoring area; identify a monitoring object corresponding to the monitoring type in a monitoring video, and identify static characteristic information of the monitoring object.
  • the monitoring type includes at least one of the following: a hidden danger category and a violation category.
  • the motion recognition sub-module 8026 is configured to extract static feature information corresponding to the monitored object at multiple time points within the selected time period, and compare the multiple pieces of static feature information of the monitored object to determine the corresponding motion feature information.
  • the motion recognition sub-module 8026 is configured to compare the positions of the monitoring object in pairs to determine the displacement information of the monitoring object, and/or compare the positions of the associated monitoring object group in pairs to determine the displacement information of the associated monitoring object group, the displacement information including a displacement direction and a displacement distance.
  • the obtaining sub-module 8022 is configured to extract monitoring videos of the monitoring area in a selected time period after detecting the monitoring objects in the monitoring area, wherein the monitoring area captured by the extracted monitoring video includes the monitoring objects.
  • the event analysis module 804 includes a state analysis sub-module 8042 and an event determination sub-module 8044, where:
  • a state analysis sub-module 8042 is configured to analyze a state of the monitored object according to the static feature information and the motion feature information.
  • the event determining sub-module 8044 is configured to determine a corresponding monitoring event according to a state of the monitoring object.
  • technologies such as edge computing can be applied to pre-process surveillance videos, filtering out invalid videos and reporting those that may contain problems; the web server then performs further identification of the surveillance videos, identifies early-warning events, and reports them.
  • the web server can also use artificial intelligence technologies, such as robots in place of manual patrols, to identify potential safety hazards.
  • the recognition algorithm is constructed using mathematical models such as deep learning, and can be continuously adjusted during the recognition process to optimize the business process.
  • An embodiment of the present application further provides a non-volatile readable storage medium.
  • the storage medium stores one or more modules which, when executed by the device, cause the device to execute the instructions of each method step in the embodiments of the present application.
  • the embodiments of the present application provide one or more machine-readable media having instructions stored thereon which, when executed by one or more processors, cause an electronic device to execute the method according to one or more of the foregoing embodiments.
  • the electronic device includes a server, a gateway, and a sub-device, and the sub-device is a device such as an IoT device.
  • Embodiments of the present disclosure may be implemented, using any appropriate hardware, firmware, software, or any combination thereof, as a device with a desired configuration; the device may include an electronic device such as a server (cluster) or a terminal device such as an IoT device.
  • FIG. 10 schematically illustrates an exemplary apparatus 1000 that can be used to implement various embodiments described in this application.
  • FIG. 10 illustrates an exemplary device 1000 having one or more processors 1002, a control module (chipset) 1004 coupled to at least one of the processor(s) 1002, a memory 1006 coupled to the control module 1004, a non-volatile memory (NVM)/storage device 1008 coupled to the control module 1004, one or more input/output devices 1010 coupled to the control module 1004, and a network interface 1012 coupled to the control module 1004.
  • the processor 1002 may include one or more single-core or multi-core processors, and the processor 1002 may include any combination of a general-purpose processor or a special-purpose processor (for example, a graphics processor, an application processor, a baseband processor, etc.).
  • the apparatus 1000 can serve as a server device such as a gateway described in the embodiments of the present application.
  • the device 1000 may include one or more computer-readable media (e.g., the memory 1006 or the NVM/storage device 1008) having instructions 1014 stored thereon, and one or more processors 1002 coupled with the one or more computer-readable media and configured to execute the instructions 1014 to implement modules that perform the actions described in this disclosure.
  • the control module 1004 may include any suitable interface controller to provide any suitable interface to at least one of the processor(s) 1002 and/or to any suitable device or component in communication with the control module 1004.
  • the control module 1004 may include a memory controller module to provide an interface to the memory 1006.
  • the memory controller module may be a hardware module, a software module, and / or a firmware module.
  • the memory 1006 may be used to load and store data and / or instructions 1014 for the device 1000, for example.
  • the memory 1006 may include any suitable volatile memory, such as a suitable DRAM.
  • the memory 1006 may include a double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
  • control module 1004 may include one or more input / output controllers to provide an interface to the NVM / storage device 1008 and the input / output device (s) 1010.
  • NVM / storage device 1008 may be used to store data and / or instructions 1014.
  • NVM / storage device 1008 may include any suitable non-volatile memory (e.g., flash memory) and / or may include any suitable non-volatile storage device (e.g., one or more hard drives (e.g., HDD), one or more compact disc (CD) drives, and / or one or more digital versatile disc (DVD) drives).
  • the NVM/storage device 1008 may include storage resources that are physically part of the device on which the apparatus 1000 is installed, or storage resources that are accessible by the apparatus but are not necessarily part of it.
  • the NVM / storage device 1008 may be accessed via a network via one or more input / output devices 1010.
  • the input / output device (s) 1010 may provide an interface for the device 1000 to communicate with any other suitable device.
  • the input / output device 1010 may include a communication component, an audio component, a sensor component, and the like.
  • the network interface 1012 may provide an interface for the device 1000 to communicate over one or more networks, and the device 1000 may wirelessly communicate with one or more components of a wireless network in accordance with any of one or more wireless network standards and/or protocols, for example accessing a wireless network based on a communication standard such as WiFi, 2G, 3G, 4G, or 5G, or a combination thereof.
  • At least one of the processor(s) 1002 may be packaged together with the logic of one or more controllers (e.g., a memory controller module) of the control module 1004 to form a system in package (SiP).
  • At least one of the processor(s) 1002 may be integrated on the same die with the logic of one or more controllers of the control module 1004 to form a system on chip (SoC).
  • The device 1000 may be, but is not limited to, a server, a desktop computing device, or a mobile computing device (e.g., a laptop computing device, a handheld computing device, a tablet computer, a netbook, etc.).
  • In various embodiments, the device 1000 may have more or fewer components and/or a different architecture. For example, the device 1000 may include one or more cameras, a keyboard, a liquid crystal display (LCD) screen (including a touch screen display), a non-volatile memory port, multiple antennas, a graphics chip, an application-specific integrated circuit (ASIC), and speakers.
  • An embodiment of the present application provides a server, including: one or more processors; and one or more machine-readable media having instructions stored thereon which, when executed by the one or more processors, cause the server to execute the monitoring method according to one or more of the embodiments of the present application.
  • As for the device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple; for relevant parts, refer to the description of the method embodiments.
  • These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing terminal device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing terminal device produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing terminal device to work in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing terminal device, so that a series of operational steps are performed on the computer or other programmable terminal device to produce a computer-implemented process, such that the instructions executed on the computer or other programmable terminal device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

Abstract

According to some embodiments, the present application provides a monitoring method, a monitoring device, a server, and a storage medium, so as to increase the probability of discovering an event in a monitored region. The method comprises: obtaining static characteristic information of a monitored object in a monitored region and movement characteristic information of the monitored object; and determining a corresponding monitored event according to the static characteristic information and the movement characteristic information. A monitored event occurring in a monitored region is detected according to characteristics of the monitored object, so that problems are discovered in a timely manner, improving the efficiency of discovering monitored events.
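The event-determination flow described in the abstract — combining static characteristic information with movement characteristic information to decide whether a monitored event has occurred — can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: the class names, region labels, and thresholds are all hypothetical, and a real system would obtain these features from video analysis rather than hard-coded values.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StaticFeatures:
    """Static characteristic information of a monitored object (hypothetical fields)."""
    object_type: str   # e.g. "vehicle", "person"
    region: str        # e.g. "fire_exit", "no_smoking_area"

@dataclass
class MotionFeatures:
    """Movement characteristic information of a monitored object (hypothetical field)."""
    stationary_seconds: float  # how long the object has remained stationary

def determine_monitored_event(static: StaticFeatures,
                              motion: MotionFeatures) -> Optional[str]:
    """Determine a corresponding monitored event from static and movement features."""
    # A vehicle that stays in a fire exit too long blocks the passage.
    if static.region == "fire_exit" and motion.stationary_seconds > 60:
        return "fire_exit_blocked"
    # A smoking person detected inside a no-smoking region.
    if static.region == "no_smoking_area" and static.object_type == "smoker":
        return "smoking_in_no_smoking_area"
    return None  # no monitored event detected

event = determine_monitored_event(
    StaticFeatures(object_type="vehicle", region="fire_exit"),
    MotionFeatures(stationary_seconds=300.0),
)
print(event)  # fire_exit_blocked
```

In this sketch the static features answer "what and where" while the motion features answer "how it behaves over time"; only their combination triggers an event, which mirrors the claim that detection uses both kinds of characteristic information.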
PCT/CN2019/087692 2018-06-01 2019-05-21 Procédé de surveillance, dispositif, serveur, et support de stockage WO2019228218A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810557078.1A CN110543803A (zh) 2018-05-29 2018-06-01 监控方法、装置、服务器和存储介质
CN201810557078.1 2018-06-01

Publications (1)

Publication Number Publication Date
WO2019228218A1 true WO2019228218A1 (fr) 2019-12-05

Family

ID=68697847

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/087692 WO2019228218A1 (fr) 2018-06-01 2019-05-21 Procédé de surveillance, dispositif, serveur, et support de stockage

Country Status (1)

Country Link
WO (1) WO2019228218A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100853744B1 (ko) * 2007-04-12 2008-08-25 포스데이타 주식회사 영상감시장치 및 상황변화 감지방법
CN101281593A (zh) * 2008-04-16 2008-10-08 安防科技(中国)有限公司 智能视频监控事件检索方法及系统
KR20090026937A (ko) * 2007-09-11 2009-03-16 삼성테크윈 주식회사 복수의 이벤트 영상을 표시하는 통합 감시 시스템 및 이를이용한 복수의 이벤트 영상 표시 방법
CN103155549A (zh) * 2010-10-08 2013-06-12 Lg电子株式会社 图像监控装置和用于检测其事件的方法



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19812050

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19812050

Country of ref document: EP

Kind code of ref document: A1