CN110543803A - Monitoring method, device, server and storage medium - Google Patents

Monitoring method, device, server and storage medium

Info

Publication number
CN110543803A
Authority
CN
China
Prior art keywords
monitoring
characteristic information
monitored object
information
monitored
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810557078.1A
Other languages
Chinese (zh)
Inventor
赵丛君
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cainiao Smart Logistics Holding Ltd
Original Assignee
Cainiao Smart Logistics Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cainiao Smart Logistics Holding Ltd filed Critical Cainiao Smart Logistics Holding Ltd
Priority to PCT/CN2019/087692 priority Critical patent/WO2019228218A1/en
Publication of CN110543803A publication Critical patent/CN110543803A/en
Pending legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42 Higher-level, semantic clustering, classification or understanding of video scenes of sport video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181 Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Signal Processing (AREA)
  • Alarm Systems (AREA)

Abstract

The embodiments of the present application provide a monitoring method, a monitoring apparatus, a server, and a storage medium to improve the efficiency of discovering events in a monitored area. The method includes: acquiring static characteristic information of a monitored object in a monitored area and motion characteristic information of the monitored object; and determining a corresponding monitoring event according to the static characteristic information and the motion characteristic information. Monitoring events occurring in the monitored area are detected according to the characteristics of the monitored object, so that problems are found in time and the efficiency of discovering monitoring events is improved.

Description

Monitoring method, device, server and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a monitoring method, a monitoring apparatus, a server, and a storage medium.
Background
Fire-fighting passageways are usually provided in public places such as office buildings, office parks, shopping malls, and railway stations, and such places may be further divided into areas such as smoking areas, non-smoking areas, and parking areas. However, events that violate the regulations sometimes occur in these areas and may cause various problems; for example, blocking a fire-fighting passageway or smoking in a non-smoking area may pose potential safety hazards.
At present, to address such problems in public areas, security and property-management personnel usually organize patrols to discover and dissuade problem events such as smoking or the blockage of fire-fighting passageways.
However, manual patrol requires the patrol personnel to go to the corresponding location to check whether a problem exists; if a problem occurs at a place that has just been patrolled, it is difficult to discover in time, so the efficiency is low.
Disclosure of Invention
The embodiments of the present application provide a monitoring method to improve the efficiency of discovering events in a monitored area.
Correspondingly, the embodiments of the present application also provide a monitoring apparatus, a server, and a storage medium to ensure the implementation and application of the above method.
In order to solve the above problem, an embodiment of the present application discloses a monitoring method, where the method includes: acquiring static characteristic information of a monitored object in a monitored area and motion characteristic information of the monitored object; and determining a corresponding monitoring event according to the static characteristic information and the motion characteristic information.
Optionally, the static characteristic information of the monitored object includes at least one of: the type of the monitored object, the size of the monitored object, the position of the monitored object and the position of the associated monitored object group; the motion characteristic information of the monitored object comprises at least one of the following information: displacement information of the monitored object, displacement information of the associated monitored object group.
Optionally, the associated monitoring object group refers to a combination of monitoring objects having an association relationship.
Optionally, the obtaining of the static characteristic information of the monitored object and the motion characteristic information of the monitored object in the monitored area includes: acquiring a monitoring video containing a monitored object in a monitoring area; identifying the monitoring video, and determining a monitored object and static characteristic information of the monitored object; and identifying the motion characteristic information of the monitored object in a selected time period according to the static characteristic information.
Optionally, the identifying the monitoring video and determining the monitored object and the static feature information of the monitored object include: determining a monitoring type according to the monitoring area; and identifying a monitored object corresponding to the monitoring type in the monitoring video, and identifying static characteristic information of the monitored object.
Optionally, the identifying, according to the static feature information, motion feature information of the monitored object in a selected time period includes: extracting static characteristic information of the monitored object corresponding to a plurality of time points in a selected time period; comparing the plurality of static characteristic information of the monitored object, and determining corresponding motion characteristic information.
Optionally, the comparing the plurality of static feature information of the monitored object to determine corresponding motion feature information includes at least one of the following steps: comparing the positions of the monitored objects pairwise to determine displacement information of the monitored objects; comparing the positions of the associated monitoring object groups pairwise to determine displacement information of the associated monitoring object groups, wherein the displacement information comprises: displacement direction and displacement distance.
Optionally, the acquiring a monitoring video including a monitoring object in the monitoring area includes: after a monitoring object is detected in a monitoring area, a monitoring video of the monitoring area in a selected time period is extracted, wherein the monitoring area shot by the extracted monitoring video contains the monitoring object.
Optionally, the determining, according to the static feature information and the motion feature information, a corresponding monitoring event includes: analyzing the state of the monitored object according to the static characteristic information and the motion characteristic information; and determining a corresponding monitoring event according to the state of the monitored object.
Optionally, the method further includes: and generating alarm information according to the monitoring event, and sending the alarm information.
Optionally, the method further includes: acquiring an alarm processing result, wherein the alarm processing result comprises: according to the alarm processing result fed back by the alarm information and/or the reported alarm processing result; and adjusting an identification algorithm for the monitored object in the monitored area according to the alarm processing result.
Optionally, the monitoring type includes at least one of the following: potential safety hazards and violations.
The embodiment of the present application also discloses a monitoring apparatus, including: a characteristic identification module, configured to acquire static characteristic information of a monitored object in a monitored area and motion characteristic information of the monitored object; and an event analysis module, configured to determine a corresponding monitoring event according to the static characteristic information and the motion characteristic information.
Optionally, the static characteristic information of the monitored object includes at least one of: the type of the monitored object, the size of the monitored object, the position of the monitored object and the position of the associated monitored object group; the motion characteristic information of the monitored object comprises at least one of the following information: displacement information of the monitored object, displacement information of the associated monitored object group.
Optionally, the associated monitoring object group refers to a combination of monitoring objects having an association relationship.
Optionally, the feature recognition module includes: the acquisition submodule is used for acquiring a monitoring video containing a monitoring object in a monitoring area; the static identification submodule is used for identifying the monitoring video and determining a monitoring object and static characteristic information of the monitoring object; and the motion identification submodule is used for identifying the motion characteristic information of the monitored object in a selected time period according to the static characteristic information.
Optionally, the static identification submodule is configured to determine a monitoring type according to a monitoring area; and identifying a monitoring object corresponding to the monitoring type in the monitoring video, and identifying static characteristic information of the monitoring object.
Optionally, the motion identification submodule is configured to extract static feature information of the monitored object corresponding to a plurality of time points in a selected time period; comparing the plurality of static characteristic information of the monitored object, and determining corresponding motion characteristic information.
Optionally, the motion identification submodule is configured to compare the positions of the monitored objects pairwise and determine displacement information of the monitored objects; and/or compare the positions of the associated monitored object groups pairwise and determine displacement information of the associated monitored object groups, wherein the displacement information comprises: displacement direction and displacement distance.
Optionally, the obtaining sub-module is configured to, after a monitored object is detected in a monitored area, extract a monitoring video of the monitored area within a selected time period, where a monitored area shot by the extracted monitoring video includes the monitored object.
Optionally, the event analysis module includes: a state analysis submodule, used for analyzing the state of the monitored object according to the static characteristic information and the motion characteristic information; and an event determining submodule, used for determining a corresponding monitoring event according to the state of the monitored object.
Optionally, the method further includes: and the alarm module is used for generating alarm information according to the monitoring event and sending the alarm information.
Optionally, the method further includes: an adjusting module, used for acquiring an alarm processing result and adjusting an identification algorithm for the monitored object in the monitored area according to the alarm processing result, wherein the alarm processing result comprises: an alarm processing result fed back according to the alarm information and/or a reported alarm processing result.
Optionally, the monitoring type includes at least one of the following: potential safety hazards and violations.
The embodiment of the present application further discloses a server, including: a processor; and a memory having executable code stored thereon, which when executed, causes the processor to perform a monitoring method as described in one or more of the embodiments of the present application.
One or more machine-readable media having stored thereon executable code that, when executed, causes a processor to perform a monitoring method as described in one or more of the embodiments of the present application are also disclosed.
Compared with the prior art, the embodiment of the application has the following advantages:
In the embodiments of the present application, the static characteristic information and the motion characteristic information of a monitored object in a monitored area can be acquired, and the corresponding monitoring event is then determined according to that information. Through identification of the monitored object, monitoring events occurring in the monitored area can be detected according to the object's characteristics, so that problems are found in time and the efficiency of discovering monitoring events is improved.
Drawings
FIG. 1 is a flow chart illustrating steps of an embodiment of a monitoring method of the present application;
FIG. 2 is a schematic view of a monitoring screen in an embodiment of the present application;
FIG. 3A and FIG. 3B are schematic views of other monitoring screens in an embodiment of the present application;
FIG. 4 is a schematic interaction diagram of a monitoring system according to an embodiment of the present application;
FIG. 5 is a schematic view of another monitoring system according to an embodiment of the present application;
FIG. 6 is a schematic diagram of another monitoring system interaction according to an embodiment of the present application;
FIG. 7 is a flow chart of steps in another monitoring method embodiment of the present application;
FIG. 8 is a block diagram of an embodiment of a monitoring device according to the present application;
FIG. 9 is a block diagram of another monitoring device embodiment of the present application;
FIG. 10 is a schematic structural diagram of an apparatus according to an embodiment of the present application.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, the present application is described in further detail with reference to the accompanying drawings and the detailed description.
The area monitored by each camera is called a monitored area. The monitoring video corresponding to a monitored area is identified, and a corresponding monitoring event is determined according to the identified static characteristic information and motion characteristic information of the monitored object. Problems such as hidden dangers and violations in the monitored area are thus identified automatically, improving the efficiency of safety inspection.
Referring to fig. 1, a flow chart of steps of an embodiment of a monitoring method of the present application is shown.
Step 102, obtaining static characteristic information of a monitored object in a monitored area and motion characteristic information of the monitored object.
A monitoring video of the monitored area is captured by an image acquisition device such as a camera; the video may be transmitted to the server in real time, or the server may retrieve the corresponding monitoring video when needed. The server identifies the monitoring video and can detect the monitored object in the monitored area together with its static characteristic information and motion characteristic information. A monitored object is an object that may cause a monitoring event to occur in the monitored area, where the type of monitoring event the object may cause is determined, and a monitoring event is a situation that may constitute a potential safety hazard or a violation. Static characteristic information is data describing the static characteristics of the monitored object, such as attribute information (category, size, and the like) and spatial information (position, and the like); motion characteristic information is characteristic information of the monitored object in a state of motion, such as its direction of motion and motion displacement. If the monitored object is in a static state, the corresponding motion characteristic information may be a zero characteristic, such as zero motion displacement.
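The static and motion characteristic information described above could be represented roughly as follows. This is a minimal illustrative sketch, not the patent's actual data model; all names and fields are assumptions.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class StaticFeatures:
    """Static characteristics of a monitored object (illustrative)."""
    category: str                                # e.g. "carton", "person", "cigarette"
    size: Optional[float] = None                 # e.g. floor area; required fields depend on category
    position: Tuple[float, float] = (0.0, 0.0)   # (x, y) within the monitored area

@dataclass
class MotionFeatures:
    """Motion characteristics of a monitored object (illustrative)."""
    displacement_direction: Tuple[float, float] = (0.0, 0.0)  # unit vector of net movement
    displacement_distance: float = 0.0

    @property
    def is_static(self) -> bool:
        # Zero displacement corresponds to the "zero characteristic" of a static object.
        return self.displacement_distance == 0.0
```

A carton blocking a fire-fighting passageway would then carry `StaticFeatures(category="carton", ...)` with a default (zero) `MotionFeatures`.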
As shown in FIG. 2, a monitoring picture of a fire-fighting passageway, the carton placed on the passageway can be identified; the carton is taken as the monitored object, and its static characteristic information and motion characteristic information are obtained. As shown in FIG. 3A and FIG. 3B, two monitoring pictures of a non-smoking area, the person smoking in the non-smoking area and the cigarette in the person's hand can be identified; the person and the cigarette are each taken as monitored objects, and the static characteristic information and motion characteristic information of each are determined.
In the embodiments of the present application, the monitored objects include an associated monitored object group, which is a combination of monitored objects having an association relationship. For example, for the smoking category, a person's hand and the cigarette in that hand may serve as an associated monitored object group, or the person's mouth and the cigarette may form an associated monitored object group.
The static characteristic information of the monitored object includes at least one of the following: the category of the monitored object, the size of the monitored object, the position of the monitored object, and the position of the associated monitored object group. The category of the monitored object may be pre-configured and is related to the monitoring type; for example, for the fire-fighting-passageway-blockage category, the monitored object categories include articles, while for the smoking category they include persons, cigarettes, and the like. The size of the monitored object is its dimensional data; the specific size information required, such as the volume and floor area of an article or the height and build of a person, can be determined according to the category. The position of the monitored object is its location in the monitored area, on the basis of which the motion characteristics of the monitored object can be determined. The position of the associated monitored object group is the location of that group in the monitored area.
The motion characteristic information of the monitored object includes at least one of the following: displacement information of the monitored object and displacement information of the associated monitored object group, wherein the displacement information includes a displacement direction and a displacement distance. The displacement information of the monitored object may be information such as the direction and distance of its movement within the monitored area; the displacement information of the associated monitored object group may be information such as the direction and distance of the group's movement within the monitored area. In the embodiments of the present application, the associated monitored object group may include a main object and one or more auxiliary objects, so that the displacement information is determined mainly by the main object. For example, in a scene of smoking in a non-smoking area, a person smokes with various motions, and the cigarette may be in the person's hand or at the person's mouth; therefore, within a set time such as 3 minutes, the displacement information of the associated monitored object group may be determined based on both the cigarette-and-mouth combination and the cigarette-and-hand combination.
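The pairwise comparison of positions at successive time points, yielding a displacement direction and distance, can be sketched as follows. The function and sampling scheme are illustrative assumptions, not the patent's actual algorithm.

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]

def displacement(positions: List[Point]) -> Tuple[Tuple[float, float], float]:
    """Compare positions sampled at successive time points pairwise and
    return (net displacement direction as a unit vector, total travelled distance)."""
    if len(positions) < 2:
        return (0.0, 0.0), 0.0
    # Sum the pairwise displacement between consecutive samples.
    total = 0.0
    for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
        total += math.hypot(x1 - x0, y1 - y0)
    # Net direction from the first sample to the last.
    (xs, ys), (xe, ye) = positions[0], positions[-1]
    net = math.hypot(xe - xs, ye - ys)
    direction = ((xe - xs) / net, (ye - ys) / net) if net else (0.0, 0.0)
    return direction, total
```

For an associated monitored object group, the same computation would be applied to the positions of the main object (for example, the cigarette) over the selected time period.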
Step 104, determining a corresponding monitoring event according to the static characteristic information and the motion characteristic information.
According to the static characteristic information and the motion characteristic information of the monitored object, the state of the monitored object can be analyzed, and a corresponding monitoring event is determined once the state of the monitored object meets a condition. In the example of FIG. 2, if in 5 minutes of monitoring video the static characteristic information of the carton remains unchanged and the motion characteristic information is a zero characteristic, the carton is identified as not having moved within 5 minutes, and a fire-fighting-passageway-occupancy monitoring event is determined to have occurred. As shown in FIG. 3A and FIG. 3B, if in 3 minutes of monitoring video the static and motion characteristic information of the person and the cigarette change and the action of the person smoking is recognized, a smoking-in-non-smoking-area monitoring event is determined. The monitoring event can thus be identified automatically without manual patrol, improving processing efficiency.
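The two examples above suggest simple condition checks on the object's state. The sketch below encodes them as illustrative rules; the category names, area types, and thresholds (5 and 3 minutes) mirror the examples but are otherwise assumptions, not the patent's decision logic.

```python
from typing import Optional

def detect_event(category: str, area_type: str,
                 displacement_distance: float,
                 observed_minutes: float) -> Optional[str]:
    """Determine a monitoring event once the object's state meets a condition
    (illustrative rules based on the two examples in the description)."""
    if area_type == "fire_passage" and category == "article":
        # An article that has not moved for 5 minutes occupies the passageway.
        if displacement_distance == 0.0 and observed_minutes >= 5:
            return "fire_passage_occupied"
    if area_type == "no_smoking" and category == "person_cigarette_group":
        # Movement of the person-and-cigarette group over 3 minutes indicates smoking.
        if displacement_distance > 0.0 and observed_minutes >= 3:
            return "smoking_in_no_smoking_area"
    return None
```

A production system would derive such conditions per monitoring type rather than hard-coding them, but the shape of the check is the same: static features select the rule, motion features decide whether it fires.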
In summary, the static characteristic information and motion characteristic information of the monitored object in the monitored area are acquired, and the corresponding monitoring event is then determined according to that information. Through identification of the monitored object, monitoring events occurring in the monitored area can be detected according to the object's characteristics, so that problems are found in time and the efficiency of discovering monitoring events is improved.
According to the embodiment of the application, a distributed architecture can be adopted to identify and process monitoring events in a plurality of monitoring areas, so that the processing equipment can comprise image acquisition equipment, a local server, a network server, processing end equipment and the like. The identification process for the monitoring event can be applied to various scenarios, for example, by one or more of an image acquisition device, a local server, and a network server, parsing a monitoring object included in image information of a monitoring video, and analyzing static feature information and dynamic features of the monitoring object to determine the monitoring event.
In one scene, image acquisition equipment acquires static characteristic information of a monitored object in a monitored area and motion characteristic information of the monitored object; and the image acquisition equipment determines a corresponding monitoring event according to the static characteristic information and the motion characteristic information. And the image acquisition equipment transmits the monitoring event to a local server or a network server.
Wherein, the image acquisition device (for example, a camera), acting as an edge computing device, can execute the following process: acquiring static characteristic information of a monitored object in a monitored area and motion characteristic information of the monitored object; and determining a corresponding monitoring event according to the static characteristic information and the motion characteristic information. Further, the acquiring static characteristic information of the monitored object and the motion characteristic information of the monitored object in the monitored area includes: acquiring a monitoring video containing a monitored object in a monitoring area; identifying the monitoring video, and determining a monitored object and static characteristic information of the monitored object; and identifying the motion characteristic information of the monitored object in a selected time period according to the static characteristic information. The image acquisition device then transmits the monitoring event to a local server or a network server. Of course, the image information acquired by the image acquisition device may be stored by the device itself, in a local server, or in a network server.
In another scene, a local server acquires static characteristic information of a monitored object in a monitored area and motion characteristic information of the monitored object; the local server determines a corresponding monitoring event according to the static characteristic information and the motion characteristic information; the local server transmits the monitoring event to a network server.
The local server obtains image information containing a monitored object in a monitoring area from the image acquisition equipment, and then performs the following processing: acquiring static characteristic information of a monitored object in a monitored area and motion characteristic information of the monitored object; and determining a corresponding monitoring event according to the static characteristic information and the motion characteristic information. Further, the acquiring static characteristic information of the monitored object and the motion characteristic information of the monitored object in the monitored area includes: acquiring a monitoring video containing a monitored object in a monitoring area; identifying the monitoring video, and determining a monitored object and static characteristic information of the monitored object; and identifying the motion characteristic information of the monitored object in a selected time period according to the static characteristic information. The local server then transmits the monitoring event to a network server. Of course, after the image information is acquired by the image acquisition device, the image information including the monitored object may be transmitted only to the local server, or the acquired image information may be uploaded.
In another scene, a network server acquires static characteristic information of a monitored object in a monitored area and motion characteristic information of the monitored object; the network server determines a corresponding monitoring event according to the static characteristic information and the motion characteristic information; and the network server transmits the monitoring event to a corresponding processing end.
Wherein, the network server does the following processing: acquiring static characteristic information of a monitored object in a monitored area and motion characteristic information of the monitored object; and determining a corresponding monitoring event according to the static characteristic information and the motion characteristic information. Further, the acquiring static characteristic information of the monitored object and the motion characteristic information of the monitored object in the monitored area includes: acquiring a monitoring video containing a monitored object in a monitoring area; identifying the monitoring video, and determining a monitored object and static characteristic information of the monitored object; and identifying the motion characteristic information of the monitored object in a selected time period according to the static characteristic information. And then the network server transmits the monitoring event to a corresponding processing end. Of course, after the image information is acquired by the image acquisition device, the image information including the monitored object may be transmitted only to the local server, or the acquired image information may be uploaded in real time. After the image information is acquired by the local server, the image information containing the monitored object can be transmitted to the network server only, and the image information acquired from the image acquisition equipment can also be uploaded.
In the embodiments of the present application, multiple cameras are usually deployed in a monitored place. To improve processing efficiency, a distributed architecture may be adopted in one scenario to perform multi-stage identification and screening to determine monitoring events. If a local server and a network server are set up, the local server at the monitoring site performs preliminary detection and screens out the monitoring videos in which a monitored object appears, reporting them to the remote network server; the network server identifies the static characteristic information and motion characteristic information of the monitored object and then determines the corresponding monitoring event, so that monitoring events can be discovered quickly. The monitoring event can then be reported back, so that the monitoring site can quickly learn of the event and issue reminders in time, ensuring the order and safety of public places.
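The two-stage screening described above can be sketched as a pair of functions: the local server forwards only videos in which a monitored object is detected, and the network server runs the heavier recognition on what survives. Function names and the dict-based video records are illustrative assumptions.

```python
from typing import Callable, Iterable, List, Optional

def local_screen(videos: Iterable[dict],
                 has_object: Callable[[dict], bool]) -> List[dict]:
    """Stage 1 (local server): keep only videos containing a monitored object."""
    return [v for v in videos if has_object(v)]

def network_identify(videos: List[dict],
                     classify: Callable[[dict], Optional[str]]) -> List[str]:
    """Stage 2 (network server): determine monitoring events from the
    static and motion characteristic information of each screened video."""
    events = []
    for v in videos:
        event = classify(v)
        if event is not None:
            events.append(event)
    return events
```

The design choice here is the same as in the description: cheap detection runs close to the cameras, so only a small fraction of the footage is uploaded, and one remote server can serve many monitoring sites.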
Referring to fig. 4, a schematic interaction diagram of a monitoring system according to an embodiment of the present application is shown.
The monitoring system includes: a network server 10, a local server 20 and cameras 30. The local server 20 can be arranged at a monitoring place, and one local server 20 can be connected with a plurality of cameras 30 at the monitoring place, so as to perform preliminary screening of the monitoring videos of each monitoring area in the monitoring place. The network server 10 may be located remotely from the monitoring places, and one network server may be connected to the local servers of a plurality of monitoring places to provide identification and alert services for them. Based on this architecture, the local server set at the monitoring place can quickly screen out problem areas, the network server can identify whether a problem has occurred, and the cloud server can serve a plurality of public places, improving resource utilization.
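The division of labor above — preliminary screening at the local server, feature recognition and event determination at the network server — can be sketched as follows. This is only an illustrative sketch: the function names and the callback structure are assumptions for the example, not part of the embodiment.

```python
# Two-stage screening: the local server keeps only videos in which a monitored
# object is found; the network server rechecks those candidates.

def local_screen(videos, detector):
    """Preliminary detection at the monitoring place: screen out videos
    containing a monitored object (detector is an assumed recognizer)."""
    return [v for v in videos if detector(v)]

def network_recheck(candidates, recognize_static, recognize_motion, decide_event):
    """Remote recheck: extract static and motion features, then decide events."""
    events = []
    for video in candidates:
        static = recognize_static(video)            # e.g. size, position
        motion = recognize_motion(video, static)    # e.g. displacement over time
        event = decide_event(static, motion)
        if event:
            events.append(event)
    return events
```

In this sketch the recognizers are passed in as callables, which mirrors the text's point that different monitoring types use different identification algorithms.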
Step 402, the camera shoots a monitoring area and uploads a corresponding monitoring video to the local server.
Step 404, the local server identifies the monitoring object in the monitoring video.
For example, in monitoring of a fire passage, the monitored objects include articles on the fire passage; in monitoring of a non-smoking area, the monitored objects include people, smoke, and the like. The local server 20 identifies such monitored objects in the monitoring video.
The monitoring type comprises at least one of: a potential safety hazard class and a violation class. The potential safety hazard class refers to the category of events that cause safety hazards, i.e. various problems, defects, faults, warning signs, hidden dangers and the like arising in daily production processes or social activities from human factors, changes of objects, environmental influences and the like, such as a fire passage being occupied or blocked, markers such as fire hydrants being covered, or a sewer manhole cover being lost. The violation class refers to the category of events violating a regulation, such as smoking in a non-smoking area. Event categories can be further divided according to the specific hazard or violation: for example, the potential safety hazard class may include at least one of fire passage, fire-fighting appliance, sewer manhole and the like, divided according to the hidden danger; the violation class may include at least one of road, parking, smoking, electrical appliance and the like, divided according to the regulation.
When the local server identifies the monitoring video, it can obtain each frame of image in the video for identification processing, or extract a number of images from the monitoring video for identification processing. The monitoring area shot by each camera, such as a road, a parking lot or a fire passage, can be provided with marks such as a camera identifier and an area identifier, so that the monitoring area can be determined conveniently; and according to the monitoring types configured for the monitoring events that may occur in the monitoring area, each monitoring type corresponds to the monitored objects to be recognized. Therefore, a correspondence between monitoring area and monitoring type can be configured for each camera, so that identification processing of the corresponding monitoring type is performed on the monitoring video of each monitoring area.
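The camera → monitoring area → monitoring type correspondence described above can be represented as a small configuration table; a minimal sketch follows, in which all identifiers (camera ids, area names, object labels) are assumptions made up for the example.

```python
# Per-camera configuration: which area the camera shoots and which monitoring
# type applies to that area.
CAMERA_CONFIG = {
    "cam_301": {"area": "fire_passage_1",    "type": "safety_hazard"},
    "cam_302": {"area": "no_smoking_zone_2", "type": "violation"},
}

# Each monitoring type maps to the monitored objects to be recognized.
TYPE_TO_OBJECTS = {
    "safety_hazard": ["article"],             # e.g. items blocking a fire passage
    "violation":     ["person", "cigarette"], # e.g. smoking in a non-smoking area
}

def objects_for_camera(camera_id):
    """Look up which monitored objects the recognizer should search for
    in this camera's monitoring video."""
    cfg = CAMERA_CONFIG[camera_id]
    return TYPE_TO_OBJECTS[cfg["type"]]
```

With such a table, the identification processing of the corresponding monitoring type can be dispatched per camera, as the text describes.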
Step 406, the local server reports the identified monitored object to the network server.
The local server may trigger monitoring and early warning based on the identified monitored object, such as generating and reporting monitoring and early warning information according to the monitoring area, the monitoring type, the monitored object and the like, and determining and reporting a monitoring video including the monitored object, so that the network server may perform identification processing to determine whether a monitoring event exists.
Through the preliminary detection by the local server, monitored objects in the monitoring area can be identified and reported quickly; the network server then rechecks the corresponding monitoring videos to judge whether a monitoring event exists. This realizes automatic detection of problems such as potential safety hazards and violations, which can also be reported to the processing end for handling, improving processing efficiency.
Step 408, the network server obtains the monitoring video containing the monitored object in the monitoring area.
The network server 10 can acquire a monitoring video containing the monitored object in the monitoring area. The selected time period required for the monitoring video can be determined according to the identified monitored object; the monitoring video of the monitoring area within the selected time period is then extracted, and the monitoring area shot in the extracted video contains the monitored object. For example, if the object is detected in the monitoring video at 9:15, a 10-minute monitoring video can be extracted starting from 9:13; if the object is detected in the monitoring video at 10:00, a 1-minute video can be extracted every 1 minute from 10:00, 5 segments in total, and so on.
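The two extraction patterns in the example above — one long clip starting shortly before the detection time, or several short sampled clips after it — can be sketched as follows. The defaults reproduce the 9:15/9:13 and five-segment examples from the text; the exact spacing of the sampled clips is an assumption, since the text only says "every 1 minute, 5 segments".

```python
from datetime import datetime, timedelta

def single_window(detect_time, lead=timedelta(minutes=2), length=timedelta(minutes=10)):
    """One clip starting slightly before the detection time
    (e.g. detected at 9:15 -> a 10-minute clip starting at 9:13)."""
    start = detect_time - lead
    return (start, start + length)

def sampled_windows(detect_time, clip=timedelta(minutes=1),
                    step=timedelta(minutes=2), count=5):
    """Several short clips sampled after the detection time (e.g. detected at
    10:00 -> five 1-minute clips). A step of 2 minutes means a 1-minute clip
    followed by a 1-minute gap; this spacing is an illustrative assumption."""
    return [(detect_time + i * step, detect_time + i * step + clip)
            for i in range(count)]
```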
Step 410, the network server identifies the monitoring video and determines the static characteristic information and the motion characteristic information of the monitored object.
After acquiring the monitoring video, the network server can recheck the monitored area: the monitored object in the monitoring video can be identified, its static characteristic information determined, and the motion characteristic information determined based on the static characteristic information. In the embodiment of the application, different identification algorithms can be adopted for different monitoring types, and the monitoring video is identified accordingly. For example, for the fire passage type, an article on the passage can be identified as the monitored object, along with static characteristic information such as its size and its position on the passage. For the smoking type, monitored objects such as a person in the non-smoking area and a cigarette in the person's hand can be recognized, together with static characteristic information such as the position of the hand-and-cigarette combination and motion characteristic information such as changes in that position.
Step 412, the network server determines a corresponding monitoring event according to the static characteristic information and the motion characteristic information.
The network server can analyze the static characteristic information and the motion characteristic information of the monitored object to judge whether the motion state of the monitored object meets the condition of the monitoring category, for example whether an article on a fire passage has not moved within 10 minutes, or whether a person is smoking in a non-smoking area. After judging that the motion state of the monitored object meets the condition of the monitoring category, a corresponding monitoring event is generated.
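The judgment above — checking the monitored object's state against the conditions of its monitoring category — can be sketched as a list of predicates that must all hold. The 10-minute fire-passage rule is taken from the example in the text; the field names and predicate structure are assumptions for illustration.

```python
def meets_category_conditions(state, conditions):
    """state: dict of features of the monitored object;
    conditions: list of predicates that must all hold before an event is generated."""
    return all(cond(state) for cond in conditions)

# Fire-passage example: an article that has not moved within 10 minutes.
fire_passage_conditions = [
    lambda s: s["still_minutes"] >= 10,
]
```

A category can then add or remove predicates without changing the decision loop, which fits the text's point that different monitoring types carry different conditions.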
Alarm information can also be generated for the monitoring event and then sent. The alarm information indicates that a monitoring event exists in a monitoring area; it can identify the monitoring area where the monitoring event occurs and information such as the corresponding monitoring video, making it convenient to check whether a problem exists, enabling a quick alarm and improving processing efficiency. The receiving end of the alarm information can be set according to requirements: for example, the alarm information can be transmitted to a security end of the monitoring place so that security personnel are notified to verify the problem, or to a broadcasting end of the monitoring place so that the problem is pointed out by broadcast, so that the monitoring event can be resolved as soon as possible.
The above takes the camera transmitting the monitoring video to the local server as an example; in actual processing, the monitoring video shot by the camera may be stored in a storage device, and the local server and the network server call the monitoring video from the storage device when they need it.
Referring to fig. 5, a schematic diagram of another monitoring system of an embodiment of the present application is shown.
referring to fig. 6, another interaction diagram of the monitoring system according to the embodiment of the present application is shown.
The monitoring system further comprises: a storage device 40 and a processing terminal 50. The storage device can receive the monitoring video from the cameras 30 and store it, and the storage device 40 may therefore comprise a switch or the like to facilitate interaction with the cameras, the servers and so on. The processing terminal 50 can be set according to requirements, such as a security terminal or a broadcasting terminal. Fig. 5 illustrates a monitoring place connected to a network server, which may include a plurality of cameras, such as camera 301, camera 302 … and camera 30n, where n is a positive integer.
In the embodiment of the application, both the local server and the network server can identify the monitoring video through an identification algorithm, which can be constructed according to a deep learning model. For convenience of distinction, the identification algorithm of the local server is called the first identification algorithm, and the identification algorithm of the network server is called the second identification algorithm. After recognition results such as events are obtained by an identification algorithm, the algorithm can be adjusted based on those results, so that it is continuously optimized and processing efficiency and accuracy are improved.
Step 602, a camera shoots a monitoring video and uploads the monitoring video to a storage device.
The monitoring videos can be transmitted between the cameras and the storage device in real time, and the storage device associates the identification information of each monitoring area with the monitoring videos, so that the videos shot by different cameras can be searched conveniently. In the embodiment of the application, a Network Video Recorder (NVR) may be used as the storage device; an NVR can receive, store and manage the digital video streams transmitted over the network by cameras such as Internet Protocol Cameras (IPC), and can be used in a networked distributed architecture. For example, a plurality of 6-megapixel bullet or dome cameras and NVR storage devices are installed in a monitoring place, together with a server running a preliminary recognition algorithm program.
Step 604, the local server obtains the monitoring video from the storage device.
The local server may acquire the video stream of the monitoring video from the storage device in various manners, for example by polling or in real time; polling is commonly used for non-real-time events, such as detection of violation events, while the real-time manner is commonly used for real-time events, such as detection of safety hazard events. The time and length of the monitoring video to obtain can be determined according to parameters such as the monitoring type, and after the monitoring video is obtained, identification processing operations such as decoding and analysis can be carried out.
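The polling manner above can be sketched as a loop that periodically asks the storage device for newly stored clips. The storage API here is entirely assumed (a real deployment would query the NVR), and the in-memory `ClipStore` stand-in exists only to make the sketch self-contained.

```python
import time

class ClipStore:
    """Assumed stand-in for the storage device; a real system would query the NVR."""
    def __init__(self, batches):
        self.batches = list(batches)  # clips that become available per round
    def fetch_new(self, area_id):
        return self.batches.pop(0) if self.batches else []

def poll_videos(storage, area_id, interval_s=60, rounds=3):
    """Polling manner (non-real-time events, e.g. violation detection):
    periodically fetch newly stored clips for a monitoring area."""
    clips = []
    for _ in range(rounds):
        clips.extend(storage.fetch_new(area_id))
        time.sleep(interval_s)
    return clips
```

For real-time events the same storage would instead push frames to the server as they arrive; only the pull-based side is sketched here.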
In the embodiment of the application, the local server can be built on an open platform based on edge computing. The open platform can comprise a control device and edge computing devices: the control device provides algorithm scheduling for the edge computing devices and feeds the identification results of the algorithms back to the cloud server, so that the cloud server can optimize the algorithms based on those results, while the edge computing devices provide the execution environment of the algorithms. If deep learning algorithms are adopted, the control device schedules the required deep learning algorithm, and the edge computing devices execute the operations of the corresponding algorithm. Edge computing means adopting an open platform integrating network, computing, storage and application core capabilities on the side close to the object or data source to provide services nearby; it can produce faster network service responses and meet basic industry requirements for real-time business, application intelligence, security, privacy protection and the like.
Step 606, the local server identifies the monitored object in the monitoring video.
The local server may determine the monitoring type to which the monitoring video to be identified belongs, and then identify the video according to that type, for example by extracting images from the monitoring video and performing operations such as image identification, so as to obtain the monitored objects contained in it, such as articles on the fire passage or a cigarette in a person's hand. After obtaining the monitoring video of the monitoring area, the local server determines the monitoring type of the monitoring area according to the area, calls the first identification algorithm corresponding to that monitoring type, and then uses the first identification algorithm to perform the corresponding identification processing on the monitoring video to determine the monitored object in the monitoring area.
Taking the above edge computing architecture as an example, the control device in the local server may acquire the monitoring area of the camera and determine its monitoring type; the edge computing devices include identification algorithms corresponding to the various monitoring types, so the control device may notify the edge computing device of the identification algorithm the monitoring video needs, and the edge computing device calls the corresponding identification algorithm to identify the monitoring video and feeds the identification result back to the control device.
Step 608, the local server reports to the network server according to the identified monitored object.
The local server may trigger monitoring and early warning based on the identified monitoring object, for example, generate and report monitoring and early warning information according to a monitoring area, a monitoring type, a monitoring object, and the like, and determine and report a monitoring video including the monitoring object based on the monitoring object, so that the network server may perform identification processing to determine whether a monitoring event exists.
Step 610, the network server obtains a monitoring video containing a monitoring object in the monitoring area.
The network server can obtain the monitoring video containing the monitored object in the monitoring area according to the identified monitored object: after the monitored object is detected in the monitoring area, the monitoring video of the monitoring area within a selected time period is extracted, and the monitoring area shot in the extracted video contains the monitored object.
Step 612, the network server identifies the monitoring video, determines a monitoring object and static characteristic information of the monitoring object, and identifies motion characteristic information of the monitoring object in a selected time period according to the static characteristic information.
After obtaining the monitoring video, the network server may recheck the monitoring area: the monitored object in the monitoring video may be identified and its static characteristic information determined, for example static characteristic information such as the size and position of the monitored object may be calculated according to the spatial coordinates of the monitoring area, or according to information such as the ratio of the monitored object to reference objects in the monitoring area such as a corridor or the ground. The motion characteristic information of the monitored object in the selected time period can then be identified according to the static characteristic information: a plurality of time points can be determined within the selected time period, for example every 1 or 3 seconds, and the static characteristics of the monitored object at those time points compared to obtain its motion characteristic information, for example the displacement direction, distance and other information determined from the position difference of the monitored object at two adjacent time points. The calculation for an associated monitored object group is similar: its displacement information can be determined from the position difference, at two adjacent time points, of the associated monitored object group in which the main object is located.
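Deriving motion features from static features, as described above, amounts to differencing the object's positions at adjacent sampled time points; a minimal sketch follows, with the direction expressed in radians. The coordinate convention and function names are assumptions for the example.

```python
import math

def displacement(p0, p1):
    """Displacement direction (radians) and distance between the positions
    of the monitored object at two adjacent time points."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    return {"direction": math.atan2(dy, dx), "distance": math.hypot(dx, dy)}

def motion_features(positions):
    """positions: the monitored object's (x, y) at successive sampled
    time points (e.g. every 1 or 3 seconds)."""
    return [displacement(a, b) for a, b in zip(positions, positions[1:])]
```

The same differencing applies unchanged to an associated monitored object group, using the group's position (e.g. that of the hand-and-cigarette combination) per time point.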
Step 614, the network server determines a corresponding monitoring event according to the static characteristic information and the motion characteristic information.
The network server can analyze the static characteristic information and the motion characteristic information of the monitored object to judge whether the motion state of the monitored object meets the condition of the monitoring category, for example whether an article on a fire passage has not moved within 10 minutes, or whether a person is smoking in a non-smoking area. After judging that the motion state of the monitored object meets the condition of the monitoring category, a corresponding monitoring event is generated.
Step 616, the network server generates alarm information according to the monitoring event and sends the alarm information.
Different alarm operations can also be carried out for different alarm information. For example, for potential safety hazards such as a blocked fire passage or a lost sewer manhole cover, an alarm can be sent to the security terminal to notify security personnel to recheck and handle the problem, where the alarm monitoring video lets security personnel quickly learn of the relevant event and confirm whether it occurred. For a violation event such as smoking in a non-smoking area, the alarm can be sent to a broadcast terminal for a broadcast reminder; or, in a monitoring place such as a company, the person smoking in violation can be identified and the alarm information sent to that person, or a reminder short message generated from the alarm information and sent to the person's mobile phone.
Step 618, the processing end sends the alarm processing result.
Step 620, the network server adjusts the identification algorithm for the monitored object in the monitored area according to the alarm processing result.
The alarm processing result comprises: an alarm processing result fed back in response to the alarm information and/or a reported alarm processing result.
The alarm processing result fed back in response to the alarm information refers to a result fed back after the alarm information is received and processed. The processing end can process the alarm information, generate a corresponding alarm processing result, such as the event exists or the event does not exist, and feed it back to the network server; the network server can then call the identification algorithm of the monitoring type according to the alarm processing result, so that the identification algorithm is optimized. Both the first identification algorithm and the second identification algorithm can be adjusted according to the alarm processing result, and the first identification algorithm can also be adjusted according to the results of the second identification algorithm.
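One simple way to use the fed-back results is sketched below: a confirmed event slightly lowers the decision threshold (the current sensitivity proved useful), while a refuted alarm (false positive) raises it. This scalar update is only an illustrative stand-in — the embodiment's algorithms are deep-learning based and would be retrained or otherwise optimized, not tuned by a single threshold.

```python
def adjust_threshold(threshold, feedback, step=0.05):
    """feedback: 'confirmed' (the event existed) or 'refuted' (no event).
    Returns the adjusted decision threshold, clamped to [0, 1]."""
    if feedback == "refuted":
        return min(threshold + step, 1.0)   # demand more confidence next time
    if feedback == "confirmed":
        return max(threshold - step, 0.0)   # current sensitivity was useful
    return threshold                        # unknown feedback: leave unchanged
```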
In the embodiment of the application, the reported alarm processing result refers to a result obtained when a monitoring event is discovered and handled directly, without alarm information having been received. Sometimes the server may fail to identify certain potential safety hazards or violation events in time, but they are found through security patrols, property patrols, user reports and other channels; an alarm processing result can then be generated, and the identification algorithm optimized based on it.
Referring to FIG. 7, a flow chart of steps of another monitoring method embodiment of the present application is shown.
Step 702, acquiring a monitoring video containing a monitoring object in a monitoring area.
The monitoring video is initially detected by the local server, which can identify the monitored object in the monitoring area, so the network server can acquire the monitoring video containing the monitored object for detection, making the detection more targeted.
Acquiring the monitoring video containing the monitored object in the monitoring area comprises: after the monitored object is detected in the monitoring area, extracting the monitoring video of the monitoring area within a selected time period, wherein the monitoring area shot in the extracted video contains the monitored object. The time information of the monitored object appearing in the monitored area can be determined, the selected time period determined according to that time information, and the monitoring video of the monitoring area within the selected time period then extracted. The local server can report the monitored object after it is detected for the first time, so that the network server extracts the monitoring video of the monitoring area within the selected time period; or it can report after the monitored object is detected multiple times, with the selected time period determined from the multiple detection results before the monitoring video is extracted.
Step 704, identifying the monitoring video, and determining a monitoring object and static characteristic information of the monitoring object.
The server can identify the monitoring video, wherein an identification algorithm can be determined according to the monitoring type, and then the monitoring video is identified according to the identification algorithm to identify the monitoring object and the static characteristic information of the monitoring object.
Identifying the monitoring video and determining the monitored object and its static characteristic information comprises: determining the monitoring type according to the monitoring area; and identifying the monitored object corresponding to the monitoring type in the monitoring video, together with the static characteristic information of the monitored object. The monitored objects to be identified in the monitoring area, such as articles on a fire passage, or people and smoke in a non-smoking area, can be determined according to the monitoring type, and the static characteristics of the monitored objects identified: for example, static characteristic information such as the size and position of the monitored object can be calculated according to the spatial coordinates of the monitoring area, or according to information such as the ratio of the monitored object to reference objects in the monitoring area such as a corridor or the ground.
Step 706, identifying the motion characteristic information of the monitored object in the selected time period according to the static characteristic information.
A plurality of pieces of static characteristic information of the monitored object can be compared to obtain its motion characteristics, so that the motion characteristic information of the monitored object in the selected time period is identified. Identifying the motion characteristic information of the monitored object in the selected time period according to the static characteristic information comprises: extracting the static characteristic information of the monitored object at a plurality of time points within the selected time period; and comparing the pieces of static characteristic information of the monitored object to determine the corresponding motion characteristic information. A plurality of time points can be determined within the selected time period, for example every 1 or 3 seconds, and the static characteristics of the monitored object at those time points compared to obtain its motion characteristic information, for example the displacement direction, distance and other information determined from the position difference of the monitored object at two adjacent time points. The calculation for an associated monitored object group is similar: its displacement information can be determined from the position difference, at two adjacent time points, of the associated monitored object group in which the main object is located.
Comparing the pieces of static characteristic information of the monitored object to determine the corresponding motion characteristic information includes at least one of the following steps: comparing the positions of the monitored object pairwise to determine displacement information of the monitored object; and comparing the positions of the associated monitored object group pairwise to determine displacement information of the associated monitored object group, wherein the displacement information comprises: displacement direction and displacement distance.
For the position information of the monitored object, its positions at two adjacent time points can be compared: for example, with a 2-second gap between sampled time points, the positions of the monitored object at 1 second and 4 seconds can be compared, then the positions at 4 seconds and 7 seconds, and so on, obtaining the displacement information of the monitored object over those intervals.
For the position information of the associated monitored object group, its positions at two adjacent time points may be compared: for example, with a 3-second gap between sampled time points, the positions at 1 second and 5 seconds may be compared, then the positions at 5 seconds and 9 seconds, and so on, obtaining the displacement information of the associated monitored object group over those intervals. The same associated monitored object group may be compared, or the associated monitored object group in which the main object is located, to determine the group's displacement information. In the example of figs. 3A and 3B, which are monitoring pictures corresponding to two time points, the combination of a human hand and a cigarette is taken as the associated monitored object group; at the two time points, the corresponding displacement information can be determined from the position of the hand-and-cigarette combination, so that the displacement information of the combination over the selected time period can be obtained by pairwise comparison of its positions at a number of adjacent time points.
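The pairwise comparison described above can be sketched as follows for an associated monitored object group such as the hand-and-cigarette combination: the group's position at each sampled time point is differenced against the next. The track representation (a list of timestamped positions) is an assumption for the example.

```python
def group_displacements(track):
    """track: list of (t_seconds, (x, y)) positions of the associated
    monitored object group (e.g. the hand-and-cigarette combination)
    at the chosen time points, e.g. 1 s, 5 s, 9 s."""
    return [{"from": t0, "to": t1, "dx": x1 - x0, "dy": y1 - y0}
            for (t0, (x0, y0)), (t1, (x1, y1)) in zip(track, track[1:])]
```

The resulting series of displacements is the motion characteristic information from which, for example, repeated lifting and lowering of the combination can later be detected.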
Step 708, determining a corresponding monitoring event according to the static characteristic information and the motion characteristic information.
Whether a monitoring event is generated can be determined by analyzing the static characteristic information and the motion characteristic information of the monitored object. Different conditions can be set for different monitoring types, so that the state of the monitored object is analyzed from its static and motion characteristic information, whether that state meets the conditions is judged, and a monitoring event is generated once it does. The state refers to the form presented by the monitored object and may include a motion state, a change state, a shape state and the like: the motion state is moving or static, the change state is changed or unchanged, and the shape state covers the shape, its size and so on. The corresponding conditions include a motion condition, a change condition, a shape condition and the like; different monitoring areas need to detect different conditions, and detection can be carried out according to one or more of them.
For example, in the identification of a fire passage blockage, an article whose size exceeds a size threshold meets the shape condition, and an article that remains static beyond a time threshold meets the motion condition; a monitoring event can be generated after judging that the article meets both the shape condition and the motion condition. In the example of fig. 2, for the identification of the fire passage blockage, a carton is identified as the monitored object: its volume of 1 cubic meter is greater than 0.5 cubic meters, confirming that the shape condition is satisfied, and the carton does not move within 5 minutes, confirming that the motion condition is satisfied; it is then judged that the carton satisfies both conditions, and a monitoring event of fire passage blockage can be generated.
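The two-condition judgment in the carton example can be sketched directly; the thresholds (0.5 cubic meters, 5 minutes) are those given in the text, and the function and event names are illustrative assumptions.

```python
def fire_passage_event(volume_m3, still_minutes,
                       size_threshold=0.5, time_threshold=5):
    """Shape condition: volume above the size threshold.
    Motion condition: static for at least the time threshold.
    Both must hold before a blockage event is generated."""
    shape_ok = volume_m3 > size_threshold
    motion_ok = still_minutes >= time_threshold
    return "fire_passage_blocked" if shape_ok and motion_ok else None
```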
In the recognition of smoking in the non-smoking area, the occurrence of a smoking action is judged from the combination of the human hand and the cigarette: a smoking action can be confirmed after the hand-and-cigarette combination undergoes one or more repeated displacements, so the motion condition is determined to be satisfied based on the smoking action; the change state of the size of the cigarette in the hand can also be judged, and the change condition confirmed when the cigarette is shrinking; a monitoring event is generated after judging that both the motion condition and the change condition are satisfied. In the example of figs. 3A and 3B, the combination of a human hand and a cigarette is used as the associated monitored object group; from the displacement information of the combination it is determined that the movement of lifting and lowering the hand-and-cigarette combination occurs multiple times, so it can be recognized as a smoking action and the motion condition is confirmed; during the smoking action the volume of the cigarette gradually decreases, confirming the change condition; it is thus judged that both conditions are satisfied, and a monitoring event of smoking in the non-smoking area can be generated.
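The smoking judgment above can be sketched as two checks over the group's motion and the cigarette's size: repeated sign changes in the vertical displacement of the hand-and-cigarette combination stand in for the lifting-and-lowering cycles (motion condition), and a decreasing cigarette size satisfies the change condition. The repeat threshold and input representation are assumptions for the example.

```python
def smoking_event(vertical_displacements, cigarette_sizes, min_repeats=3):
    """vertical_displacements: dy of the hand-and-cigarette combination between
    adjacent time points; cigarette_sizes: cigarette size per time point.
    Motion condition: enough up/down reversals (sign changes in dy).
    Change condition: the cigarette shrinks over the selected time period."""
    sign_changes = sum(1 for a, b in zip(vertical_displacements,
                                         vertical_displacements[1:])
                       if a * b < 0)
    motion_ok = sign_changes >= min_repeats
    change_ok = cigarette_sizes[-1] < cigarette_sizes[0]
    return "smoking_in_no_smoking_area" if motion_ok and change_ok else None
```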
Step 710: generating alarm information according to the monitoring event, and sending the alarm information.
According to the monitoring event, an alarm monitoring video can be extracted from the monitoring video; alarm information is then generated using parameters such as the alarm monitoring video, the monitoring type, and the monitoring area, and the alarm information is sent to a processing end for handling.
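As a sketch, the alarm information can be assembled from the event and the extracted clip. All field names below are hypothetical; the text only says the alarm monitoring video, monitoring type, and monitoring area are used as parameters:

```python
def build_alarm_info(monitoring_event: dict, clip_path: str) -> dict:
    # Package the extracted alarm monitoring video together with the
    # monitoring type and monitoring area before sending it to the
    # processing end.
    return {
        "video": clip_path,
        "monitoring_type": monitoring_event["type"],  # e.g. "safety_hazard"
        "monitoring_area": monitoring_event["area"],
        "timestamp": monitoring_event["timestamp"],
    }

alarm = build_alarm_info(
    {"type": "safety_hazard", "area": "fire channel, floor 2",
     "timestamp": 1718000000},
    "/clips/event_0001.mp4",
)
print(alarm["monitoring_type"])  # safety_hazard
```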
Different alarm operations can also be performed for different alarm information. For potential safety hazards such as a blocked fire-fighting channel or a missing sewer manhole cover, an alarm can be sent to a security terminal to notify security personnel to recheck and handle the problem; the alarm monitoring video lets security personnel quickly understand the relevant event and confirm whether it occurred. For a violation such as smoking in a non-smoking area, the alarm can be sent to a broadcast terminal for a broadcast reminder; alternatively, in a monitored place such as a company, the person smoking in violation can be identified and the alarm information sent to that person, or a reminder text message generated from the alarm information and sent to the person's mobile phone.
Step 712: acquiring an alarm processing result, and adjusting the identification algorithm for the monitored object in the monitored area according to the alarm processing result.
The processing end can handle the alarm according to the alarm information, generate a corresponding alarm processing result, such as whether the event actually exists, and feed the result back to the network server. The network server can then adjust the identification algorithm of the corresponding monitoring type according to the alarm processing result, thereby optimizing the algorithm. Both the first identification algorithm and the second identification algorithm can be adjusted according to the alarm processing result, and the first identification algorithm can also be adjusted according to the result of the second identification algorithm. Sometimes the server fails to identify certain potential safety hazards or violation events in time, and the events are instead discovered through security patrols, property inspections, user reports, and other channels; alarm processing results can be generated for such events as well, and the identification algorithm optimized on that basis.
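One way to picture this feedback loop is a single sensitivity threshold nudged by the alarm processing results. This is a toy stand-in; the patent does not specify how the first and second identification algorithms are parameterized:

```python
class RecognitionModel:
    """Toy recognition algorithm: one detection threshold adjusted by
    alarm-processing feedback. Confirmed false alarms make detection
    less sensitive; events missed by the server but reported through
    patrols or user reports make it more sensitive."""

    def __init__(self, threshold: float = 0.5, step: float = 0.05):
        self.threshold = threshold
        self.step = step

    def adjust(self, alarm_result: str) -> None:
        if alarm_result == "false_alarm":
            self.threshold += self.step   # raise the bar
        elif alarm_result == "missed_event":
            self.threshold -= self.step   # lower the bar

model = RecognitionModel()
model.adjust("false_alarm")
print(round(model.threshold, 2))  # 0.55
```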
The embodiments of the present application are mainly described with a network server acquiring the static characteristic information and motion characteristic information of a monitored object in a monitored area and then determining the corresponding monitoring event from that information; in practice, acquiring the static and motion characteristic information and determining the monitoring event may also be performed by an image acquisition device, a local server, and the like.
In the embodiments of the present application, cameras are arranged in the monitored place and combined with technologies such as edge computing, so that 24-hour real-time monitoring and early warning of the monitored place can be achieved; this greatly reduces labor cost, improves efficiency, and allows potential safety hazards to be discovered in time.
A local server can apply technologies such as edge computing to preprocess the monitoring video, filtering out invalid monitoring videos and reporting those that may contain problems. The network server can then perform identification and monitoring, discovering and reporting early-warning events; it can also apply artificial intelligence technologies, such as robots, to identify potential safety hazards with human-like intelligence. The identification algorithm is built on mathematical models such as deep learning, and the algorithm can be continuously adjusted during recognition, thereby optimizing the business process.
The embodiments of the present application are described using a public place as the monitored place, but they can also be applied to monitoring private places such as homes: any monitored place with cameras can be monitored, with monitoring events discovered in time and alarms raised. For example, when a user's home enters unattended mode, anti-theft monitoring, kitchen-state monitoring, and the like can be performed, so that after a problem is found, the processing end of the home Internet of Things is notified to raise an alarm; the processing end can, for example, sound an alarm or switch off kitchen appliances, and the alarm can also be sent to the user's mobile device so that the user learns of events at home in time.
It should be noted that, for simplicity of description, the method embodiments are described as a series or combination of acts, but those skilled in the art will recognize that the embodiments are not limited by the order of acts described, since some steps may occur in other orders or concurrently. Further, those skilled in the art will also appreciate that the embodiments described in the specification are preferred embodiments, and that the acts involved are not necessarily required by the embodiments of the application.
On the basis of the above embodiments, this embodiment further provides a monitoring apparatus, applied to a network server.
Referring to fig. 8, a block diagram of a monitoring apparatus according to an embodiment of the present disclosure is shown, which may specifically include the following modules:
The feature identification module 802 is configured to obtain static feature information of a monitored object in a monitored area and motion feature information of the monitored object.
And the event analysis module 804 is configured to determine a corresponding monitoring event according to the static characteristic information and the motion characteristic information.
In summary, the static characteristic information and motion characteristic information of a monitored object in the monitored area are obtained, and the corresponding monitoring event is determined from them; through identification of the monitored object, monitoring events occurring in the monitored area can be detected according to the object's characteristics, so that problems are found in time and the efficiency of discovering monitoring events is improved.
Referring to fig. 9, a block diagram of another monitoring apparatus according to another embodiment of the present application is shown, which may specifically include the following modules:
The feature identification module 802 is configured to obtain static feature information of a monitored object in a monitored area and motion feature information of the monitored object.
And the event analysis module 804 is configured to determine a corresponding monitoring event according to the static characteristic information and the motion characteristic information.
And an alarm module 806, configured to generate alarm information according to the monitoring event, and send the alarm information.
An adjusting module 808, configured to obtain an alarm processing result; adjusting an identification algorithm for the monitored object in the monitored area according to the alarm processing result, wherein the alarm processing result comprises: and according to the alarm processing result fed back by the alarm information and/or the reported alarm processing result.
Wherein the static characteristic information of the monitored object comprises at least one of the following: the type of the monitored object, the size of the monitored object, the position of the monitored object, and the position of the associated monitored object group. The motion characteristic information of the monitored object comprises at least one of the following: displacement information of the monitored object and displacement information of the associated monitored object group. The associated monitored object group refers to a combination of monitored objects having an association relationship.
The feature recognition module 802 includes: an acquisition submodule 8022, a static identification submodule 8024 and a motion identification submodule 8026, wherein:
The obtaining submodule 8022 is configured to obtain a monitoring video including a monitoring object in a monitoring area.
A static identification submodule 8024, configured to identify the monitoring video, and determine a monitored object and static feature information of the monitored object.
A motion identification submodule 8026, configured to identify, according to the static feature information, motion feature information of the monitored object in a selected time period.
The static identification submodule 8024 is configured to determine a monitoring type according to a monitoring area; and identifying a monitoring object corresponding to the monitoring type in the monitoring video, and identifying static characteristic information of the monitoring object. The monitoring type comprises at least one of the following: potential safety hazards and violations.
The motion identification submodule 8026 is configured to extract static feature information corresponding to the monitored object at multiple time points in a selected time period, compare the multiple pieces of static feature information of the monitored object, and determine the corresponding motion feature information.
The motion recognition submodule 8026 is configured to compare the positions of the monitored objects with each other, and determine displacement information of the monitored objects; and/or comparing the positions of the associated monitoring object groups pairwise to determine displacement information of the associated monitoring object groups, wherein the displacement information comprises: displacement direction and displacement distance.
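The pairwise comparison of positions into a displacement direction and distance can be sketched as follows (2-D image-plane positions assumed; the function names are illustrative):

```python
import math

def displacement(p1: tuple[float, float],
                 p2: tuple[float, float]) -> tuple[float, float]:
    """Compare two positions of a monitored object and return the
    displacement as (direction in degrees, distance)."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    return math.degrees(math.atan2(dy, dx)), math.hypot(dx, dy)

def pairwise_displacements(positions: list[tuple[float, float]]):
    # Compare the positions at consecutive time points pairwise, yielding
    # one (direction, distance) pair per step of the selected time period.
    return [displacement(a, b) for a, b in zip(positions, positions[1:])]

direction, distance = displacement((0.0, 0.0), (3.0, 4.0))
print(round(distance, 1))  # 5.0
```

The same comparison applies to an associated monitored object group by tracking the group's joint position (e.g. the centroid of its bounding boxes) instead of a single object's position.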
The obtaining submodule 8022 is configured to, after a monitored object is detected in a monitored area, extract a monitoring video of the monitored area in a selected time period, where the monitored area captured by the extracted monitoring video contains the monitored object.
The event analysis module 804 includes: a state analysis sub-module 8042 and an event determination sub-module 8044, wherein:
And a state analyzing submodule 8042, configured to analyze a state of the monitored object according to the static feature information and the motion feature information.
The event determining submodule 8044 is configured to determine a corresponding monitoring event according to the state of the monitored object.
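The two submodules can be pictured as a small pipeline: a state-analysis step followed by a state-to-event mapping. The states, rules, and names below are purely illustrative, not from the source:

```python
def analyze_state(static_info: dict, motion_info: dict) -> str:
    # State analysis submodule: combine static and motion features
    # into a coarse object state.
    if motion_info.get("distance", 0.0) == 0.0:
        return "static"
    return "moving"

def determine_event(state: str, static_info: dict):
    # Event determination submodule: map the object's state to a
    # monitoring event, if any.
    if state == "static" and static_info.get("in_fire_channel"):
        return "fire_channel_blocked"
    return None

state = analyze_state({"in_fire_channel": True}, {"distance": 0.0})
print(determine_event(state, {"in_fire_channel": True}))  # fire_channel_blocked
```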
In the embodiments of the present application, cameras are arranged in the monitored place and combined with technologies such as edge computing, so that 24-hour real-time monitoring and early warning of the monitored place can be achieved; this greatly reduces labor cost, improves efficiency, and allows potential safety hazards to be discovered in time.
A local server can apply technologies such as edge computing to preprocess the monitoring video, filtering out invalid monitoring videos and reporting those that may contain problems. The network server can then perform identification and monitoring, discovering and reporting early-warning events; it can also apply artificial intelligence technologies, such as robots, to identify potential safety hazards with human-like intelligence. The identification algorithm is built on mathematical models such as deep learning, and the algorithm can be continuously adjusted during recognition, thereby optimizing the business process.
The present application further provides a non-transitory readable storage medium storing one or more modules (programs); when the one or more modules are applied to a device, the device can execute the instructions of the method steps in the present application.
Embodiments of the present application provide one or more machine-readable media having instructions stored thereon that, when executed by one or more processors, cause an electronic device to perform the method described in one or more of the above embodiments. In the embodiments of the present application, the electronic device includes servers, gateways, sub-devices, and the like, where a sub-device is, for example, an Internet of Things device.
Embodiments of the present disclosure may be implemented as an apparatus using any suitable hardware, firmware, software, or any combination thereof in a desired configuration; the apparatus may include electronic devices such as servers (clusters) and terminal devices such as IoT devices. Fig. 10 schematically illustrates an example apparatus 1000 that may be used to implement various embodiments described herein.
For one embodiment, fig. 10 illustrates an example apparatus 1000 having one or more processors 1002, a control module (chipset) 1004 coupled to at least one of the processor(s) 1002, a memory 1006 coupled to the control module 1004, a non-volatile memory (NVM)/storage 1008 coupled to the control module 1004, one or more input/output devices 1010 coupled to the control module 1004, and a network interface 1012 coupled to the control module 1004.
The processor 1002 may include one or more single-core or multi-core processors, and the processor 1002 may include any combination of general-purpose or special-purpose processors (e.g., graphics processors, application processors, baseband processors, etc.). In some embodiments, the apparatus 1000 can be a server device such as a gateway described in the embodiments of the present application.
In some embodiments, the apparatus 1000 may include one or more computer-readable media (e.g., the memory 1006 or the NVM/storage 1008) having instructions 1014 and one or more processors 1002 that, in conjunction with the one or more computer-readable media, are configured to execute the instructions 1014 to implement modules to perform the actions described in this disclosure.
For one embodiment, control module 1004 may include any suitable interface controllers to provide any suitable interface to at least one of the processor(s) 1002 and/or any suitable device or component in communication with control module 1004.
The control module 1004 may include a memory controller module to provide an interface to the memory 1006. The memory controller module may be a hardware module, a software module, and/or a firmware module.
Memory 1006 may be used, for example, to load and store data and/or instructions 1014 for device 1000. For one embodiment, memory 1006 may comprise any suitable volatile memory, such as suitable DRAM. In some embodiments, the memory 1006 may comprise a double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
For one embodiment, the control module 1004 may include one or more input/output controllers to provide an interface to the NVM/storage 1008 and input/output device(s) 1010.
For example, NVM/storage 1008 may be used to store data and/or instructions 1014. NVM/storage 1008 may include any suitable non-volatile memory (e.g., flash memory) and/or may include any suitable non-volatile storage device(s) (e.g., one or more hard disk drive(s) (HDD (s)), one or more Compact Disc (CD) drive(s), and/or one or more Digital Versatile Disc (DVD) drive (s)).
The NVM/storage 1008 may include storage resources that are physically part of the device on which the apparatus 1000 is installed, or it may be accessible by the device and need not be part of the device. For example, NVM/storage 1008 may be accessed over a network via input/output device(s) 1010.
Input/output device(s) 1010 may provide an interface for apparatus 1000 to communicate with any other suitable device; input/output devices 1010 may include communication components, audio components, sensor components, and so forth. Network interface 1012 may provide an interface for device 1000 to communicate over one or more networks; device 1000 may communicate wirelessly with one or more components of a wireless network according to any of one or more wireless network standards and/or protocols, for example accessing a communication-standard-based wireless network such as WiFi, 2G, 3G, 4G, 5G, etc., or a combination thereof.
For one embodiment, at least one of the processor(s) 1002 may be packaged together with logic for one or more controller(s) (e.g., memory controller module) of control module 1004. For one embodiment, at least one of the processor(s) 1002 may be packaged together with logic for one or more controller(s) of control module 1004 to form a System In Package (SiP). For one embodiment, at least one of the processor(s) 1002 may be integrated on the same die with the logic of one or more controllers of the control module 1004. For one embodiment, at least one of the processor(s) 1002 may be integrated on the same die with logic for one or more controller(s) of control module 1004 to form a system on chip (SoC).
In various embodiments, the apparatus 1000 may be, but is not limited to: a server, a desktop computing device, or a mobile computing device (e.g., a laptop computing device, a handheld computing device, a tablet, a netbook, etc.), among other terminal devices. In various embodiments, the apparatus 1000 may have more or fewer components and/or different architectures. For example, in some embodiments, device 1000 includes one or more cameras, a keyboard, a Liquid Crystal Display (LCD) screen (including a touch screen display), a non-volatile memory port, multiple antennas, a graphics chip, an Application Specific Integrated Circuit (ASIC), and speakers.
An embodiment of the present application provides a server, including: one or more processors; and one or more machine readable media having instructions stored thereon, which when executed by the one or more processors, cause the server to perform a monitoring method as described in one or more of the embodiments of the application.
Since the apparatus embodiment is substantially similar to the method embodiment, its description is relatively brief; for relevant details, refer to the description of the method embodiment.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the true scope of the embodiments of the application.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The foregoing is a detailed description of the monitoring method, apparatus, server, and storage medium provided by the present application. Specific examples have been used herein to explain the principles and embodiments of the present application, and the description of the above embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, for those skilled in the art, there may be changes in the specific implementation and application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (26)

1. A method of monitoring, the method comprising:
Acquiring static characteristic information of a monitored object in a monitored area and motion characteristic information of the monitored object;
And determining a corresponding monitoring event according to the static characteristic information and the motion characteristic information.
2. The method of claim 1,
The static characteristic information of the monitored object comprises at least one of the following information: the type of the monitored object, the size of the monitored object, the position of the monitored object and the position of the associated monitored object group;
The motion characteristic information of the monitored object comprises at least one of the following information: displacement information of the monitored object, displacement information of the associated monitored object group.
3. The method according to claim 2, wherein the associated monitored object group refers to a combination of monitored objects having an association relationship.
4. The method according to claim 3, wherein the obtaining of the static characteristic information of the monitored object and the motion characteristic information of the monitored object in the monitored area comprises:
Acquiring a monitoring video containing a monitored object in a monitoring area;
Identifying the monitoring video, and determining a monitored object and static characteristic information of the monitored object;
And identifying the motion characteristic information of the monitored object in a selected time period according to the static characteristic information.
5. The method of claim 4, wherein the identifying the surveillance video and determining the surveillance object and the static feature information of the surveillance object comprises:
Determining a monitoring type according to the monitoring area;
and identifying a monitoring object corresponding to the monitoring type in the monitoring video, and identifying static characteristic information of the monitoring object.
6. The method of claim 4, wherein identifying motion characteristic information of the monitored object over a selected time period based on the static characteristic information comprises:
Extracting static characteristic information of the monitored object corresponding to a plurality of time points in a selected time period;
Comparing the plurality of static characteristic information of the monitored object, and determining corresponding motion characteristic information.
7. The method according to claim 6, wherein the comparing the plurality of static feature information of the monitored object to determine the corresponding motion feature information comprises at least one of the following steps:
Comparing the positions of the monitored objects pairwise to determine displacement information of the monitored objects;
Comparing the positions of the associated monitoring object groups pairwise to determine displacement information of the associated monitoring object groups, wherein the displacement information comprises: displacement direction and displacement distance.
8. The method according to claim 4, wherein the obtaining of the monitoring video containing the monitored object in the monitoring area comprises:
After a monitoring object is detected in a monitoring area, a monitoring video of the monitoring area in a selected time period is extracted, wherein the monitoring area shot by the extracted monitoring video contains the monitoring object.
9. The method of claim 4, wherein determining the corresponding monitoring event according to the static characteristic information and the motion characteristic information comprises:
Analyzing the state of the monitored object according to the static characteristic information and the motion characteristic information;
And determining a corresponding monitoring event according to the state of the monitored object.
10. The method of claim 1, further comprising:
And generating alarm information according to the monitoring event, and sending the alarm information.
11. The method of claim 10, further comprising:
Acquiring an alarm processing result, wherein the alarm processing result comprises: an alarm processing result fed back according to the alarm information and/or a reported alarm processing result;
And adjusting an identification algorithm for the monitored object in the monitored area according to the alarm processing result.
12. The method of claim 5, wherein the monitoring type comprises at least one of the following: potential safety hazards and violations.
13. A monitoring device, comprising:
The system comprises a characteristic identification module, a characteristic analysis module and a characteristic analysis module, wherein the characteristic identification module is used for acquiring static characteristic information of a monitored object in a monitored area and motion characteristic information of the monitored object;
And the event analysis module is used for determining a corresponding monitoring event according to the static characteristic information and the motion characteristic information.
14. The apparatus of claim 13,
The static characteristic information of the monitored object comprises at least one of the following information: the type of the monitored object, the size of the monitored object, the position of the monitored object and the position of the associated monitored object group;
The motion characteristic information of the monitored object comprises at least one of the following information: displacement information of the monitored object, displacement information of the associated monitored object group.
15. The apparatus according to claim 14, wherein the associated monitored object group refers to a combination of monitored objects having an association relationship.
16. The apparatus of claim 15, wherein the feature recognition module comprises:
The acquisition submodule is used for acquiring a monitoring video containing a monitoring object in a monitoring area;
The static identification submodule is used for identifying the monitoring video and determining a monitoring object and static characteristic information of the monitoring object;
And the motion identification submodule is used for identifying the motion characteristic information of the monitored object in a selected time period according to the static characteristic information.
17. The apparatus of claim 16,
The static identification submodule is used for determining a monitoring type according to a monitoring area; and identifying a monitoring object corresponding to the monitoring type in the monitoring video, and identifying static characteristic information of the monitoring object.
18. The apparatus of claim 16,
The motion identification submodule is used for extracting static characteristic information of the monitored object corresponding to a plurality of time points in a selected time period; comparing the plurality of static characteristic information of the monitored object, and determining corresponding motion characteristic information.
19. The apparatus of claim 18,
The motion identification submodule is used for comparing the positions of the monitored objects pairwise to determine the displacement information of the monitored objects; and/or comparing the positions of the associated monitoring object groups pairwise to determine displacement information of the associated monitoring object groups, wherein the displacement information comprises: displacement direction and displacement distance.
20. The apparatus of claim 16,
The acquisition submodule is used for extracting the monitoring video of the monitoring area in a selected time period after the monitoring object is detected in the monitoring area, wherein the monitoring area shot by the extracted monitoring video contains the monitoring object.
21. The apparatus of claim 15, wherein the event analysis module comprises:
The state analysis submodule is used for analyzing the state of the monitored object according to the static characteristic information and the motion characteristic information;
And the event determining submodule is used for determining a corresponding monitoring event according to the state of the monitored object.
22. The apparatus of claim 13, further comprising:
And the alarm module is used for generating alarm information according to the monitoring event and sending the alarm information.
23. The apparatus of claim 22, further comprising:
The adjusting module is used for acquiring an alarm processing result; adjusting an identification algorithm for the monitored object in the monitored area according to the alarm processing result, wherein the alarm processing result comprises: an alarm processing result fed back according to the alarm information and/or a reported alarm processing result.
24. The apparatus of claim 17, wherein the monitoring type comprises at least one of the following: potential safety hazards and violations.
25. A server, comprising:
a processor; and
Memory having stored thereon executable code which, when executed, causes the processor to perform a monitoring method as claimed in one or more of claims 1-12.
26. One or more machine-readable media having executable code stored thereon that, when executed, causes a processor to perform a monitoring method as recited in one or more of claims 1-12.
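As a minimal illustration of the pairwise position comparison recited in claim 19, the sketch below derives displacement information (direction and distance) from consecutive positions of a monitored object. This is an assumed example only, not the patented implementation; the `Position` type, 2D image-plane coordinates, and the degree-based direction convention are all assumptions introduced for illustration.

```python
import math
from dataclasses import dataclass


@dataclass
class Position:
    """An assumed 2D position of a monitored object in the image plane."""
    x: float
    y: float


def displacement(prev: Position, curr: Position) -> tuple[float, float]:
    """Compare two positions and return (direction in degrees, distance)."""
    dx = curr.x - prev.x
    dy = curr.y - prev.y
    distance = math.hypot(dx, dy)
    # atan2 handles all quadrants; normalize to [0, 360) degrees.
    direction = math.degrees(math.atan2(dy, dx)) % 360.0
    return direction, distance


def track_displacements(positions: list[Position]) -> list[tuple[float, float]]:
    """Pairwise comparison of consecutive positions of one monitored object."""
    return [displacement(a, b) for a, b in zip(positions, positions[1:])]
```

The same routine could be applied to the centroid of an associated monitored object group; a zero distance between consecutive frames would indicate a stationary object, which the downstream state analysis could use.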
CN201810557078.1A 2018-05-29 2018-06-01 Monitoring method, device, server and storage medium Pending CN110543803A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/087692 WO2019228218A1 (en) 2018-06-01 2019-05-21 Monitoring method, device, server, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810532327 2018-05-29
CN2018105323271 2018-05-29

Publications (1)

Publication Number Publication Date
CN110543803A true CN110543803A (en) 2019-12-06

Family

ID=68701646

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810557078.1A Pending CN110543803A (en) 2018-05-29 2018-06-01 Monitoring method, device, server and storage medium

Country Status (1)

Country Link
CN (1) CN110543803A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002279553A (en) * 2001-03-15 2002-09-27 Sanyo Electric Co Ltd Abnormal condition detector
CN104978829A (en) * 2015-06-24 2015-10-14 国家电网公司 Indoor smoking monitoring control method and system
CN105227918A (zh) * 2015-09-30 2016-01-06 珠海安联锐视科技股份有限公司 An intelligent control method and device
CN105225428A (zh) * 2015-09-14 2016-01-06 国家电网公司 An indoor smoke detection alarm method and system
CN106157331A (zh) * 2016-07-05 2016-11-23 乐视控股(北京)有限公司 A smoking detection method and device

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111263117A (en) * 2020-02-17 2020-06-09 北京金和网络股份有限公司 Emergency command video linkage method, device and system
CN111586354A (en) * 2020-04-28 2020-08-25 上海市保安服务(集团)有限公司 Investigation system
CN111967321B (en) * 2020-07-15 2024-04-05 菜鸟智能物流控股有限公司 Video data processing method, device, electronic equipment and storage medium
CN111967321A (en) * 2020-07-15 2020-11-20 菜鸟智能物流控股有限公司 Video data processing method and device, electronic equipment and storage medium
CN112836565A (en) * 2020-11-27 2021-05-25 北京芯翌智能信息技术有限公司 Monitoring video processing method and device, monitoring system, storage medium and terminal
CN112836565B (en) * 2020-11-27 2024-04-12 上海芯翌智能科技有限公司 Monitoring video processing method and device, monitoring system, storage medium and terminal
CN112446334A (en) * 2020-12-02 2021-03-05 福建亿安智能技术有限公司 Method and system for recognizing illegal behaviors of non-motor vehicle
CN112633207A (en) * 2020-12-29 2021-04-09 杭州拓深科技有限公司 Fire fighting channel blocking video identification method based on intelligent algorithm
CN113221981A (en) * 2021-04-28 2021-08-06 之江实验室 Edge deep learning-oriented data cooperative processing optimization method
CN113111866A (en) * 2021-06-15 2021-07-13 深圳市图元科技有限公司 Intelligent monitoring management system and method based on video analysis
CN113111866B (en) * 2021-06-15 2021-10-26 深圳市图元科技有限公司 Intelligent monitoring management system and method based on video analysis
CN114077231A (en) * 2021-10-27 2022-02-22 中国通信建设集团设计院有限公司 Internet of things system for industrial production based on 5G communication network
CN118397255A (en) * 2024-06-26 2024-07-26 杭州海康威视数字技术股份有限公司 Method, device and equipment for determining analysis area and intelligently analyzing analysis area

Similar Documents

Publication Publication Date Title
CN110543803A (en) Monitoring method, device, server and storage medium
AU2019204810B2 (en) Digital fingerprint tracking
CN110876035B (en) Scene updating method and device based on video and electronic equipment
CN103270536B (en) Stopped object detection
US20180047173A1 (en) Methods and systems of performing content-adaptive object tracking in video analytics
CN108073577A (zh) An alarm method and system based on face recognition
CN106790515B (en) Abnormal event processing system and application method thereof
US11195010B2 (en) Smoke detection system and method
US10140718B2 (en) Methods and systems of maintaining object trackers in video analytics
KR101743386B1 (en) Video monitoring method, device and system
US20180197294A1 (en) Methods and apparatus for video background subtraction
CN103325209A (en) Intelligent security alarm system based on wireless
US10360456B2 (en) Methods and systems of maintaining lost object trackers in video analytics
US10115005B2 (en) Methods and systems of updating motion models for object trackers in video analytics
US20220338303A1 (en) Systems and methods for identifying blockages of emergency exists in a building
KR101377029B1 (ko) Apparatus and method for monitoring CCTV with a control module
CN107959812B (en) Monitoring data storage method, device and system and routing equipment
US10341616B2 (en) Surveillance system and method of controlling the same
CN114913663B (en) Abnormality detection method, abnormality detection device, computer device, and storage medium
CN109815839B (en) Loitering person identification method under micro-service architecture and related product
KR101646733B1 (en) Method and apparatus of classifying media data
CN101315326A (zh) Smoke detection method and apparatus
KR20120113014A (ko) Image recognition apparatus and vision monitoring method thereof
KR20110079939A (ko) Image sensing agent and security system of USN complex type
CN110928305A (en) Patrol method and system for railway passenger station patrol robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191206