CN114205565A - Monitoring video distribution method and system - Google Patents

Monitoring video distribution method and system

Info

Publication number
CN114205565A
CN114205565A (application CN202210137781.3A)
Authority
CN
China
Prior art keywords
preset
video
activity
preset object
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210137781.3A
Other languages
Chinese (zh)
Other versions
CN114205565B (en)
Inventor
潘嘉明
陈彬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yunding Network Technology Beijing Co Ltd
Original Assignee
Yunding Network Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yunding Network Technology Beijing Co Ltd
Priority to CN202210807908.8A (CN115243009A)
Priority to CN202210137781.3A (CN114205565B)
Publication of CN114205565A
Priority to PCT/CN2022/104406 (WO2023280273A1)
Priority to CN202280048533.XA (CN117730524A)
Application granted
Publication of CN114205565B
Legal status: Active
Anticipated expiration: pending

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 - Television systems
    • H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/181 - Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 - Machine learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/40 - Scenes; Scene-specific elements in video content
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/50 - Context or environment of the image
    • G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 - Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Social Psychology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Signal Processing (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Psychiatry (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Alarm Systems (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The embodiments of this specification provide a monitoring video distribution method and system. The method comprises: acquiring a first video of a first preset area and a second video of a second preset area, where the security level of the first preset area is higher than that of the second preset area; determining an activity track of a preset object based on the first video; determining an activity scene of the preset object based on the activity track; identifying the identity of the preset object based on the second video; determining a push level for the first video and/or the second video based on the activity scene and the identity; and distributing the first video and/or the second video to one or more users based on the push level.

Description

Monitoring video distribution method and system
Technical Field
The present disclosure relates to the field of image communications, and in particular, to a method and a system for distributing surveillance videos.
Background
With accelerating industrialization and urbanization, many cities have become societies of strangers. Out of distrust of and anxiety about strangers, more and more users choose to install cameras and to use smart doorbells and smart door locks equipped with cameras. However, the number of people who may pass by a door is very large, so many people may be monitored, some of whom need no attention and pose no security risk, such as passers-by who happen to walk past. If information on every monitored person is pushed to the user, for example recorded and pushed to the user's mobile phone app, the user is bombarded with a large amount of useless information and can easily miss the key information.
Therefore, a surveillance video distribution method is needed to push surveillance video more effectively.
Disclosure of Invention
One embodiment of the present specification provides a surveillance video distribution method. The method comprises the following steps: acquiring a first video of a first preset area and a second video of a second preset area, wherein the security level of the first preset area is higher than that of the second preset area; determining an activity track of a preset object based on the first video; determining an activity scene of the preset object based on the activity track; identifying the identity of the preset object based on the second video; determining a push level for the first video and/or the second video based on the activity scene and the identity; and distributing the first video and/or the second video to one or more users based on the push level.
One embodiment of the present specification provides a surveillance video distribution system, comprising: a video acquisition module for acquiring a first video of a first preset area and a second video of a second preset area, the security level of the first preset area being higher than that of the second preset area; a track determination module for determining an activity track of a preset object based on the first video; a scene determination module for determining an activity scene of the preset object based on the activity track; an identity determination module for identifying the identity of the preset object based on the second video; a push level determination module for determining a push level of the first video and/or the second video based on the activity scene and the identity; and a distribution module for distributing the first video and/or the second video to one or more users based on the push level.
One embodiment of the present specification provides a surveillance video distribution apparatus comprising a processor, the processor being configured to execute the surveillance video distribution method.
One embodiment of the present specification provides a computer-readable storage medium storing computer instructions; when a computer reads the computer instructions in the storage medium, the computer executes the surveillance video distribution method.
Drawings
The present description is further explained by way of exemplary embodiments, described in detail with reference to the accompanying drawings. These embodiments are not limiting; in these embodiments, like numerals indicate like structures, wherein:
FIG. 1 is a schematic diagram of an application scenario of a surveillance video distribution system according to some embodiments of the present description;
FIG. 2 is an exemplary flow diagram of a surveillance video distribution method according to some embodiments of the present description;
FIG. 3 is an exemplary flow diagram of a method of determining an active scene in accordance with some embodiments of the present description;
FIG. 4 is an exemplary flow chart of a method of identifying an identity in accordance with some embodiments of the present description;
FIG. 5 is another exemplary flow chart of a method of identifying an identity in accordance with some embodiments of the present description;
fig. 6 is a schematic diagram of a surveillance video distribution method according to some embodiments of the present description.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings used in the description of the embodiments will be briefly described below. It is obvious that the drawings in the following description are only examples or embodiments of the present description, and that for a person skilled in the art, the present description can also be applied to other similar scenarios on the basis of these drawings without inventive effort. Unless otherwise apparent from the context, or otherwise indicated, like reference numbers in the figures refer to the same structure or operation.
It should be understood that "system", "apparatus", "unit", and/or "module" as used herein is a way of distinguishing components, elements, parts, portions, or assemblies at different levels. Other words may be used instead if they accomplish the same purpose.
As used in this specification and the appended claims, the singular forms "a", "an", and "the" include plural referents unless the context clearly dictates otherwise. In general, the terms "comprise" and "comprising" merely indicate that the explicitly identified steps and elements are included; the steps and elements do not form an exclusive list, and a method or apparatus may also include other steps or elements.
Flow charts are used in this description to illustrate operations performed by a system according to embodiments of the present description. It should be understood that the operations are not necessarily performed in the exact order shown. Rather, the steps may be processed in reverse order or concurrently. Other operations may also be added to these processes, or one or more steps may be removed from them.
The monitoring video distribution method and system in one or more embodiments of this specification can be applied to various scenes in the security field, for example banks, hotels, hospitals, machine rooms, warehouses, key rooms, office buildings, office areas, schools, kindergartens, residential areas, factories, elevators, and the like. In some embodiments, the surveillance video distribution method and system may be used with various types of doors, such as side-hung doors, double doors, sliding doors, folding doors, roll-up doors, revolving doors, sensor-activated doors, and the like. In some embodiments, the method and system may be used for various windows, balconies, ceilings, and the like. In some embodiments, the method and system may be used at bank counters, ATMs, and the like.
The monitoring video distribution method and system can realize one or more of the following functions: recognizing personnel identities based on features such as the face, gait, and accessories; adaptive learning; judging the activity scene from the activity track; and pushing information from the intelligent security device at different levels by combining personnel identity with the activity scene. On this basis, the method and system can achieve one or more beneficial effects, such as accurately identifying personnel identities, reducing or avoiding blind spots, and pushing information reasonably and effectively.
It should be understood that the application scenarios of the surveillance video distribution method and system of the present application are only examples or embodiments of the present application, and those skilled in the art can also apply the present application to other similar scenarios according to the drawings without any creative effort.
Fig. 1 is a schematic diagram of an application scenario of a surveillance video distribution system according to some embodiments of the present description.
As shown in fig. 1, the surveillance video distribution system 100 may include a server 110, a processor 112, a terminal 120, a camera 130 disposed in a security area (e.g., in front of a door, on a door frame, on a window frame, around an ATM, etc.), a storage device 140, and a network 150.
In some embodiments, the server 110 may be used to process information and/or data related to the surveillance video distribution system 100, such as obtaining videos and identifying personnel identities. In some embodiments, the server 110 may be a single server or a server group. The server group can be centralized or distributed (e.g., the server 110 can be a distributed system). In some embodiments, the server 110 may be local or remote. For example, the server 110 may access information and/or data stored in the terminal 120, the camera 130, and the storage device 140 via the network 150. As another example, the server 110 may be directly connected to the terminal 120, the camera 130, and/or the storage device 140 to access stored information and/or data. In some embodiments, the server 110 may be implemented on a cloud platform or provided virtually. By way of example only, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an internal cloud, a multi-tiered cloud, or the like, or any combination thereof.
In some embodiments, the server 110 may include a processor 112. The processor 112 may process information and/or data related to the surveillance video distribution system 100 to perform one or more of the functions described herein. For example, the processor 112 may retrieve the second video, the first video, and determine the push level. In some embodiments, processor 112 may include one or more processing engines (e.g., a single chip processing engine or a multi-chip processing engine).
The terminal 120 refers to one or more terminal devices or software used by users. In some embodiments, the terminal 120 may be used by one or more users, who may include owners, family members, security personnel, property personnel, and the like. In some embodiments, a person's identity may be tagged through the terminal 120. In some embodiments, push information may be presented to the user through the terminal 120, for example through the user's cell phone or through a display of the building video intercom system. In some embodiments, the terminal 120 may be one or any combination of a mobile device 120-1, a tablet computer 120-2, a laptop computer 120-3, a desktop computer 120-4, or another device with input and/or output capabilities. In some embodiments, the mobile device 120-1 may include a cell phone, a smart phone, a personal digital assistant (PDA), a navigation device, a point-of-sale (POS) handheld terminal, or the like, or any combination thereof. In some embodiments, the desktop computer 120-4 may be an on-board computer, an on-board television, or the like.
The security area may include the area around a door, which may be the door of various venues, such as an entrance door, an apartment door, a building door, a cottage door, a yard door, or a garage door. In some embodiments, the door may open inward and/or outward. The security area may also include areas around windows, balconies, rooftops, bank counters, ATMs, and the like. In some embodiments, one or more cameras are positioned within the security area, for example on a door (e.g., the door frame), around a door (e.g., on the wall around the door or another object on which a camera can be mounted), on a window, around a balcony, around a bank counter, or around an ATM. The camera 130 may include a general camera, a high-definition camera, a visible-light camera, an infrared camera, an optical flow camera, a night-vision camera, and the like. In some embodiments, the camera 130 may be disposed indoors, outdoors, behind the door, on the door frame, or the like, in any combination. The camera 130 may be used to capture video within the security area (e.g., inside the door, outside the door, behind the door, in the door frame, outside a window, outside a bank counter, inside a bank counter, etc.). In some embodiments, one or more cameras may transmit the captured video to the server 110 via the network 150.
The storage device 140 may be used to store data and/or instructions related to the surveillance video distribution system 100. In some embodiments, the storage device 140 may store data obtained from the terminal 120 and/or the camera 130. In some embodiments, the storage device 140 may store historical data, video data, training samples, and the like. In some embodiments, the storage device 140 may store data and/or instructions that the server 110 executes or uses to perform the exemplary methods described in this application. In some embodiments, the storage device 140 may include one or a combination of mass storage, removable storage, volatile read-write memory, read-only memory (ROM), and the like. In some embodiments, the storage device 140 may be implemented by a cloud platform as described herein. For example, the cloud platform may include one or a combination of a private cloud, public cloud, hybrid cloud, community cloud, distributed cloud, cross-cloud, multi-cloud, and the like.
The network 150 may facilitate the exchange of information and/or data. In some embodiments, one or more components of the surveillance video distribution system 100 (e.g., the server 110, the camera 130, the storage device 140) may send information and/or data to other components of the system via the network 150. For example, the camera 130 may send video of the door's surroundings (e.g., the first video, the second video, a door-back image, etc.) to the server 110 via the network 150. In some embodiments, the surveillance video distribution system 100 may include one or more network access points, for example base stations and/or wireless access points 150-1, 150-2, etc., through which one or more components of the system may connect to the network 150 to exchange data and/or information.
It should be noted that the surveillance video distribution system 100 is provided for illustrative purposes only and is not intended to limit the scope of the present application. It will be apparent to those skilled in the art that various modifications and variations can be made in light of the description of the present application. For example, the surveillance video distribution system 100 may also include a database. As another example, the surveillance video distribution system 100 may implement similar or different functionality on other devices. However, such changes and modifications do not depart from the scope of the present application.
In some embodiments, the surveillance video distribution system 100 may include a video acquisition module, a trajectory determination module, a scene determination module, an identity determination module, a push level determination module, and a distribution module.
In some embodiments, the video acquiring module may be configured to acquire a first video in a first preset area and a second video in a second preset area, where a security level of the first preset area is higher than a security level of the second preset area.
In some embodiments, the trajectory determination module may be configured to determine an activity trajectory of the preset object based on the first video.
In some embodiments, the scene determination module may be configured to determine an activity scene of the preset object based on the activity trajectory.
In some embodiments, the scene determining module may be further configured to determine at least one type of preset activity scene and an activity track corresponding to each preset activity scene, where the at least one type of preset activity scene includes at least a first type of activity scene and a second type of activity scene; wherein the first type of activity scene comprises scenes passing through a sensitive monitoring area; the second type of activity scene comprises scenes which do not pass through a sensitive monitoring area; and determining a preset activity scene matched with the activity track of the preset object in at least one type of preset activity scenes, and determining the activity scene of the preset object based on the matched preset activity scene. In some embodiments, the activity track corresponding to the first type of activity scene includes: at least one of an entry trajectory and an exit trajectory; the activity tracks corresponding to the second type of activity scenes comprise regular tracks.
In some embodiments, the scene determination module may be further configured to determine a preset object movement state based on the first video, where the preset object movement state includes movement of a preset object or no movement of the preset object; determining a preset object moving state corresponding to each preset activity scene; determining a preset activity scene matched with an activity track of a preset object and a movement state of the preset object in at least one type of preset activity scene, and determining the activity scene of the preset object based on the matched preset activity scene; the preset object movement corresponds to a first type of activity scene, and the non-preset object movement corresponds to a second type of activity scene.
In some embodiments, the scenario determination module may be further configured to determine a door lock status, the door lock status including an open status or a closed status; determining the door lock state corresponding to each preset activity scene; determining a preset activity scene matched with the activity track and the door lock state of a preset object in at least one type of preset activity scenes, and determining the activity scene of the preset object based on the matched preset activity scene; the open state corresponds to a first type of activity scene, and the closed state corresponds to a second type of activity scene.
In some embodiments, the identity determination module may be configured to identify the identity of the preset object based on the second video.
In some embodiments, the identity determination module may be further configured to obtain, based on the second video, at least one preset object feature among a human face feature, a gait feature of the preset object, and an accessory feature of the preset object, where the accessory feature includes at least one of a clothing feature, a hair style feature, and a wearing-article feature; and to determine the identity of the preset object based on the at least one preset object feature.
In some embodiments, the identity determination module may be further configured to determine at least one preset identity and at least one preset object feature corresponding to each preset identity; and determining a preset identity matched with at least one preset object characteristic of the preset object in the at least one preset identity, and determining the identity of the preset object based on the matched preset identity.
In some embodiments, the identity determination module may be further configured to identify the identity of the preset object through a machine learning model based on the second video.
In some embodiments, the push level determination module may be configured to determine the push level of the first video and/or the second video based on the activity scene and the identity.
In some embodiments, different combinations of activity scene and identity may correspond to different push levels.
In some embodiments, different push classes may correspond to different push methods.
In some embodiments, the distribution module may be configured to distribute the first video and/or the second video to one or more users based on the push level.
It should be noted that the above description of the surveillance video distribution system and its modules is only for convenience of description and does not limit this specification to the scope of the illustrated embodiments. It can be understood that, after understanding the principle of the system, those skilled in the art may combine the modules arbitrarily, or form a subsystem connected with other modules, without departing from this principle. In some embodiments, the video acquisition module, track determination module, scene determination module, identity determination module, push level determination module, and distribution module disclosed in fig. 1 may be different modules in one system, or a single module may implement the functions of two or more of these modules. For example, the modules may share one storage module, or each module may have its own storage module. Such variations are within the scope of this disclosure.
Fig. 2 is an exemplary flow diagram of a surveillance video distribution method according to some embodiments of the present description. As shown in fig. 2, the process 200 may include the following steps. In some embodiments, the process 200 may be performed by the processor 112.
Step 210, a first video of a first preset area and a second video of a second preset area are obtained, wherein the security level of the first preset area is higher than that of the second preset area. In some embodiments, step 210 may be performed by a video acquisition module.
In some embodiments, when a preset object exists in the security area, the video acquisition module may acquire a first video of a first preset area and a second video of a second preset area, where the first preset area at least includes a sensitive monitoring area, the second preset area at least includes a non-sensitive monitoring area, and a security level of the first preset area is higher than a security level of the second preset area.
The preset object refers to an object that may actively or passively enter a protected area. The preset objects may include persons, animals, movable objects, and the like. The protected area may include a residential home, a factory building, an office area, inside a bank counter, etc.
A preset object being in the security area means that the preset object is active within a range whose distance from an entrance door, a window, or the like is less than a certain threshold. Examples include: cleaning the corridor or stairs in front of a door; cleaning a window; a neighbor passing by; a visitor (a friend, a resident, a courier) knocking or pressing the doorbell; a property worker repairing a facility in front of the door or carrying objects; a family member opening the door (with a key, a fingerprint, face recognition, etc.); a customer handling business at a bank counter or an ATM; an unidentified person breaking the glass of a window or a bank counter; an unidentified person damaging an ATM; activities on a balcony; a courier placing a takeout delivery at the doorway; and so on. The threshold may be determined on demand or empirically, for example 1 meter, 1.5 meters, 5 meters, or 10 meters.
In some embodiments, the video acquisition module may determine whether a preset object exists in the security area through a sensor, for example an infrared sensor, a vibration sensor, or a sound sensor. If the sensor senses a person, an animal, a vibration, or a sound (footsteps, breathing, clothing rubbing, a cat's cry, wing flapping, etc.), acquisition of the first video and/or the second video is triggered. In some embodiments, the video acquisition module may trigger acquisition of the first video and/or the second video automatically, e.g., once every 30 seconds. In some embodiments, when security personnel see a preset object in the surveillance video, they can manually trigger acquisition of the first video and/or the second video.
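As a minimal sketch of this trigger logic (an illustrative assumption, not the patent's implementation; all names are hypothetical), the three trigger paths described above might be combined as follows:

```python
import time
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SensorEvent:
    kind: str  # e.g. "person", "animal", "vibration", "sound"

@dataclass
class CaptureTrigger:
    periodic_interval_s: float = 30.0  # automatic trigger period (from the text)
    _last_auto: float = 0.0

    def should_capture(self, events: List[SensorEvent],
                       manual_request: bool = False,
                       now: Optional[float] = None) -> bool:
        """Return True if acquisition of the first/second video should start."""
        now = time.monotonic() if now is None else now
        # 1) sensor-driven trigger: any sensed presence starts capture
        if any(e.kind in {"person", "animal", "vibration", "sound"} for e in events):
            return True
        # 2) manual trigger by security personnel watching the feed
        if manual_request:
            return True
        # 3) periodic automatic trigger (e.g., once every 30 seconds)
        if now - self._last_auto >= self.periodic_interval_s:
            self._last_auto = now
            return True
        return False

trigger = CaptureTrigger()
print(trigger.should_capture([SensorEvent("vibration")]))  # True
```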
For more details of the security area, refer to fig. 1, which is not described herein again.
In some embodiments, one or more cameras (e.g., camera 130 described in fig. 1) disposed in the secured area may remain on. In some embodiments, some or all of the cameras may be turned on when a preset object exists in the security area. For example, a common camera (e.g., a camera with a lower resolution and a lower power) may be kept in an on state all the time, and when a preset object exists in the security area, the remaining part of cameras such as a high-definition camera and an optical flow camera may be turned on. For another example, the infrared camera may be kept in an on state at night, and when a preset object exists in the security area, the light source may be turned on and the remaining cameras such as the high-definition camera and the optical flow camera may be turned on.
In some embodiments, the video acquisition module may determine whether a preset object exists in the security area in several ways. For example, video shot by a common camera or an infrared camera is analyzed in real time, and when a human face, a human figure, or the like is found in the video, it can be determined that a preset object exists around the door. For another example, when footsteps are detected by a sensor, it may be determined that a preset object exists in the security area. Likewise, when the movement of a person is detected by a sensor, it may be determined that a preset object exists in the security area.
The second preset area includes at least a non-sensitive monitoring area. The second video is a video whose shooting area is the second preset area, for example video outside the door of a house, outside the door of a factory building, outside a window, or outside a bank counter. The second preset area can be set according to actual requirements.
In some embodiments, the second video may be captured by various cameras (e.g., a normal camera, a high-definition camera, an infrared camera, etc.).
In some embodiments, the video acquisition module may acquire the second video from each camera via a network. Alternatively, the video acquisition module may be integrated with each camera. In some embodiments, the video capture module may capture the second video from each camera via the bus. In some embodiments, the second video may be obtained through an interface including, but not limited to, a program interface, a data interface, a transport interface, and the like. For example, when the information classification system of the intelligent security device works, the second video can be automatically extracted from the interface.
The first preset area includes at least a sensitive monitoring area. The sensitive monitoring area is an area that must be passed to enter the protected area or to perform a specific operation, for example the area directly under a door frame, the area directly under a window frame, a bank counter's glass partition and the area directly under it, the area where a balcony connects to a room, or the operation area of an ATM. The protected area may include a residential home, a factory building, an office area, the inside of a bank counter, and the like.
It will be appreciated that entering and/or leaving the protected area requires passing through the sensitive monitoring area. By recording video of the sensitive monitoring area, every entry into and/or exit from the protected area by a person, animal, or other entity can be captured. The first video is a video whose shooting area is the first preset area, for example video directly below the door frame of an entrance door, directly below the door frame of an office area, or directly below the glass partition of a bank counter. The first preset area may be set according to actual requirements and may further include the area near the bottom of the door frame (for example, the area within a preset distance, such as 50 cm, around the bottom of the door frame).
The first video can be captured by various cameras. In particular, in some embodiments, the first video may be captured by a camera able to record motion trajectories, such as an optical flow camera. An optical flow camera is a camera that can reflect the optical flow field: the video images it captures reflect the moving speed and moving direction of the pixels in the image.
In some embodiments, the video acquisition module may acquire the first video from the aforementioned camera via a network. Alternatively, the video acquisition module may be integrated with the aforementioned camera. In some embodiments, the video acquisition module may acquire the first video from the aforementioned camera via the bus. In some embodiments, the first video may be acquired through an interface including, but not limited to, a program interface, a data interface, a transport interface, and the like. For example, when the information classification system of the intelligent security device works, the first video can be automatically extracted from the interface.
Step 220, determining an activity track of a preset object based on the first video.
The activity track refers to the route that a preset object, for example a person, passes along. For example, an activity track may be: elevator door >> door front >> door.
The track determination module may determine the activity track of the preset object from the first video in several ways. For example, by performing optical flow analysis on the first video. For another example, by analyzing the sequence features of the image sequence formed by the frames of the first video, inputting the sequence features into a trained machine learning model, and obtaining the activity track of the preset object.
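As an illustrative sketch of the track representation (an assumption for exposition; the patent does not prescribe this data structure), per-frame object centroids, e.g. from optical flow analysis or per-frame detection, can be mapped to named zones and compressed into a route:

```python
from typing import List, Tuple

# Hypothetical named zones in the first preset area, as axis-aligned
# rectangles (x_min, y_min, x_max, y_max) in image coordinates.
ZONES = {
    "elevator door": (0, 0, 100, 200),
    "door front":    (100, 0, 300, 200),
    "door":          (300, 0, 400, 200),
}

def zone_of(point: Tuple[float, float]) -> str:
    x, y = point
    for name, (x0, y0, x1, y1) in ZONES.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return "unknown"

def activity_track(centroids: List[Tuple[float, float]]) -> List[str]:
    """Compress per-frame centroids into a deduplicated sequence of
    zones, i.e. the activity track described above."""
    track: List[str] = []
    for c in centroids:
        z = zone_of(c)
        if z != "unknown" and (not track or track[-1] != z):
            track.append(z)
    return track

# e.g. a person walking from the elevator to the door:
print(" >> ".join(activity_track([(50, 100), (150, 100), (250, 100), (350, 100)])))
# elevator door >> door front >> door
```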
Step 230, determining an activity scene of the preset object based on the activity track.
The activity scene refers to the situation of the activity of a preset object, such as a person going out, entering, passing in front of a door, staying in front of the door, a cat entering a balcony, a bird eating on a balcony and the like.
In some embodiments, the scene determination module may determine the activity scene of the preset object based on the activity track in several ways, for example through a trained machine learning model: the model's input may be the curve formed by connecting the positions of the preset object at various time points, and its output may be the corresponding activity scene type.
In some embodiments, the scene determination module may determine at least one type of preset activity scene and an activity track corresponding to each preset activity scene, may determine a preset activity scene matched with the activity track of the preset object in the at least one type of preset activity scene, and determines the activity scene of the preset object based on the matched preset activity scene. For more on determining the activity scene of the preset object based on the activity track, refer to fig. 3 and its related description.
Step 240, identifying the identity of the preset object based on the second video.
The identity of the preset object refers to the relationship or role between the preset object and the user. For example, the preset object's identity may include the owner's family, a stranger, the owner's neighbors and friends, a courier or takeout deliverer, and the like. As another example, the preset object's identity may include an employee of the enterprise, a visitor to the enterprise, enterprise logistics personnel, and the like.
The identity determination module may identify the identity of the preset object in several ways. In some embodiments, the identity determination module may obtain at least one preset object feature based on the second video; the preset object features may include a human face feature, a gait feature of the preset object, an accessory feature of the preset object, a fingerprint, a voiceprint, an iris, and the like, and the identity of the preset object may be determined based on the at least one preset object feature. The accessory features may include features of items attached to a person, such as a clothing feature, a hair style feature, or a wearing-article feature. For more details on identifying the preset object, reference may be made to fig. 4 and its description in this specification.
In some embodiments, the identity determination module may identify the identity of the preset object through a machine learning model based on the second video. See fig. 5 and its description in this specification for the identification of the preset object by the machine learning model.
Step 250, determining a push level of the first video and/or the second video based on the activity scene and the identity.
The push level may reflect the importance and/or urgency of the first video and/or the second video. In some embodiments, the push levels may include level 1, level 2, level 3, etc., where a higher push level indicates a higher degree of importance and/or urgency of the security area information. For example, level 1 is the highest level, corresponding to the highest degree of importance and/or urgency; level 2 is next; and level 3 is the lowest level, corresponding to the lowest degree of importance and/or urgency.
In some embodiments, the push level determination module may determine the push level of the first video and/or the second video related to the preset object based on the activity scene and the identity of the preset object. In some embodiments, the push level determination module may further determine the push level based on a duration, a specific sound, and/or a specific action (e.g., picking a lock, unlocking, knocking on a door, pressing a doorbell, smashing a window), and the like.
Fig. 6 of this description provides an example of determining the push level of the first video and/or the second video.
Step 260, distributing the first video and/or the second video to one or more users based on the push level.
In some embodiments, different push levels correspond to different distribution methods for the video (the first video and/or the second video). For example, the level 1 push level may correspond to a distribution method in which the video is sent to the user and a connection between the user terminal and each camera is opened, so the user may choose to view the video or watch the real-time feed directly. The level 2 push level may correspond to sending the video to the user (without opening the camera connection). The level 3 push level may correspond to periodically sending the video to a user terminal, user account, or user mailbox. In some embodiments, the distribution module may distribute videos of different push levels to users with different permissions. For example, a level 1 video may be sent to the owner and family members, security personnel, monitoring personnel, and the like, while a level 2 or level 3 video may be sent only to the owner.
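A minimal dispatch sketch of this level-to-method mapping (the channels, roles, and function names are hypothetical; the patent describes the behavior, not an API):

```python
from enum import IntEnum

class PushLevel(IntEnum):
    LEVEL_1 = 1   # highest importance/urgency
    LEVEL_2 = 2
    LEVEL_3 = 3   # lowest

# Hypothetical permission model: level-1 video goes to all security roles,
# lower levels only to the owner, as in the example above.
RECIPIENTS = {
    PushLevel.LEVEL_1: {"owner", "family", "security", "monitoring"},
    PushLevel.LEVEL_2: {"owner"},
    PushLevel.LEVEL_3: {"owner"},
}

def distribute(video_id: str, level: PushLevel, users: dict) -> None:
    """users maps user name -> role; dispatch per the level's method."""
    for user, role in users.items():
        if role not in RECIPIENTS[level]:
            continue
        if level is PushLevel.LEVEL_1:
            # send the video immediately and open a live camera connection
            print(f"push {video_id} to {user}; live camera link opened")
        elif level is PushLevel.LEVEL_2:
            print(f"push {video_id} to {user}")
        else:
            print(f"queue {video_id} for {user}'s periodic digest")

distribute("clip-0421", PushLevel.LEVEL_1, {"alice": "owner", "bob": "security"})
```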
FIG. 3 is an exemplary flow diagram of a method of determining an active scene in accordance with some embodiments described herein.
Step 310, determining at least one type of preset activity scene and an activity track corresponding to each preset activity scene.
The at least one type of preset activity scene may include a first type of activity scene and a second type of activity scene. The first type of activity scene may include scenes passing through the sensitive monitoring area, such as visiting, entry, and exit scenes; the second type may include scenes that do not pass through the sensitive monitoring area, such as pass-by scenes, stay-in-front-of-the-door scenes, and moving-objects-in-front-of-the-door scenes.
In some embodiments, the scene determination module may determine the activity track corresponding to each preset activity scene. For example, the activity tracks corresponding to the first type of activity scene may include a track entering the sensitive monitoring area and a track leaving it, and the activity tracks corresponding to the second type may include regular tracks. A regular track is an activity track of a preset object involving no behavior, such as entering or leaving through the door, that requires the user's special attention (for example, downstairs >> stairs >> door front >> stairs >> upstairs). A track entering the sensitive monitoring area is the activity track of a preset object entering the sensitive monitoring area, for example an entry track; a track leaving the sensitive monitoring area is the activity track of a preset object leaving it, for example an exit track.
Step 320, determining the preset activity scene matched with the activity track of the preset object in the at least one type of preset activity scene, and determining the activity scene of the preset object based on the matched preset activity scene.
In some embodiments, the activity scene of the preset object may be determined according to whether its activity track matches one or more activity tracks belonging to a scene among the at least one type of preset activity scene. As an example, if the activity track of the preset object is determined to be downstairs >> stairs >> door front >> stairs >> upstairs, which is a regular track, the activity scene may be one or a combination of a pass-by scene, a stay-in-front-of-the-door scene, and a moving-objects-in-front-of-the-door scene. For another example, if the activity track is determined to be an entry track, the activity scene may be an entry scene; if the activity track around the door is determined to be an exit track, the activity scene may be an exit scene.
In some embodiments, the scene determination module may further determine the preset object moving state based on the first video; the moving state may be that a preset object moves or that no preset object moves.
In some embodiments, the scene determination module may determine a preset object movement state corresponding to each preset activity scene. For example, there is a preset object moving corresponding to a first type of activity scene, and there is no preset object moving corresponding to a second type of activity scene.
In some embodiments, the scene determination module may determine a preset object motion trajectory and a preset object moving state corresponding to each preset motion scene. For example:
- If the activity track includes an entry track and the moving state is that the preset object moves, this corresponds to an entry scene.
- If the activity track includes an exit track and the moving state is that the preset object moves, this corresponds to an exit scene.
- If the activity track includes neither an entry track nor an exit track and the moving state is that the preset object moves, this corresponds to a pass-by scene.
- If the activity track includes neither an entry track nor an exit track and the moving state is that no preset object moves, this corresponds to a stay scene.
- If the activity track includes neither an entry track nor an exit track but appears in a specific area near the door (for example, a parcel-drop area set by the user or the system), and the moving state is that the preset object moves, this corresponds to a moving-objects-in-front-of-the-door scene.
In some embodiments, the scene determination module may determine a preset activity scene matching the activity trajectory of the preset object and the preset object moving state in at least one type of preset activity scene, and determine the activity scene of the preset object based on the matched preset activity scene.
In some embodiments, the scene determination module may also determine the door lock state, which may be an open state or a closed state. The scene determination module may determine the door lock state in various ways, such as image recognition or door lock detection.
In some embodiments, the scenario determination module may determine a door lock status corresponding to each preset activity scenario. For example, the on state corresponds to a first type of activity scenario, and the off state corresponds to a second type of activity scenario.
In some embodiments, the scene determination module may determine the preset object activity track and the door lock state corresponding to each preset activity scene. For example:
- If the activity track includes an entry track and the door lock is in the open state during the process, this corresponds to an entry scene.
- If the activity track includes an exit track and the door lock is in the open state during the process, this corresponds to an exit scene.
- If the activity track includes neither an entry track nor an exit track, the track is observed for less than a threshold duration, and the door lock is in the closed state during the process, this corresponds to a pass-by scene.
- If the activity track includes neither an entry track nor an exit track, the track is observed for longer than the threshold duration, and the door lock is in the closed state during the process, this corresponds to a stay-in-front-of-the-door scene.
- If the activity track includes neither an entry track nor an exit track but appears in a specific area near the door (for example, a parcel-drop area set by the user or the system), and the door lock is in the closed state during the process, this corresponds to a moving-objects-in-front-of-the-door scene.
In some embodiments, the scene determination module may determine a preset activity scene matching an activity track and a door lock state of a preset object in at least one type of preset activity scene, and determine an activity scene of the preset object based on the matched preset activity scene.
In some embodiments, the scene determination module may determine, among the at least one type of preset activity scene, a preset activity scene matching the activity track of the preset object, the door-back image, and the door lock state, and determine the activity scene of the preset object based on the matched preset activity scene. For example:
- If the activity track includes an entry track, the door-back image shows the preset object moving, and the door lock is in the open state during the process, an entry scene is determined.
- If the activity track includes an exit track, the door-back image shows the preset object moving, and the door lock is in the open state during the process, an exit scene is determined.
- If the activity track includes neither an exit track nor an entry track, the track is observed for less than a threshold duration, and the door lock is in the closed state during the process, a pass-by scene is determined.
- If the activity track includes neither an exit track nor an entry track, the track is observed for longer than the threshold duration, and the door lock is in the closed state during the process, a stay scene is determined.
- If the activity track includes neither an exit track nor an entry track but appears in a specific area near the door (for example, a parcel-drop area set by the user or the system), the door-back image shows the preset object moving and an article being moved, and the door lock is in the closed state during the process, a moving-objects-in-front-of-the-door scene is determined.
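The rule variants above (trajectory plus moving state, door lock state, duration, and door-back image) can be consolidated into one small classifier. A sketch under stated assumptions; the threshold value and scene labels are illustrative, not the patent's definitive implementation:

```python
def classify_scene(has_entry_track: bool,
                   has_exit_track: bool,
                   lock_open: bool,
                   duration_s: float,
                   in_drop_area: bool = False,
                   article_moved: bool = False,
                   stay_threshold_s: float = 300.0) -> str:
    """Consolidated form of the trajectory / door-lock / duration rules above."""
    # first type of activity scene: the track passes the sensitive monitoring area
    if has_entry_track and lock_open:
        return "entry scene"
    if has_exit_track and lock_open:
        return "exit scene"
    # second type: no entry/exit track, lock stays closed throughout
    if not lock_open:
        if in_drop_area and article_moved:
            return "moving-objects-in-front-of-the-door scene"
        if duration_s >= stay_threshold_s:
            return "stay scene"
        return "pass-by scene"
    return "unmatched"  # no preset scene matches the observations

print(classify_scene(True, False, True, 10.0))     # entry scene
print(classify_scene(False, False, False, 400.0))  # stay scene
print(classify_scene(False, False, False, 30.0,
                     in_drop_area=True, article_moved=True))  # moving-objects...
```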
It should be noted that the above description of the process 300 is for illustration and description only and is not intended to limit the scope of the present disclosure. Various modifications and changes to flow 300 will be apparent to those skilled in the art in light of this description. However, such modifications and variations are intended to be within the scope of the present description.
Fig. 4 is an exemplary flow diagram of a method of identifying an identity in accordance with some embodiments of the present description.
Step 410, acquiring, based on the second video, at least one preset object feature among a human face feature, a gait feature of the preset object, and an accessory feature of the preset object, where the accessory feature comprises at least one of a clothing feature, a hair style feature, and a wearing-article feature. In some embodiments, step 410 may be performed by the identity recognition module.
Human face features refer to features of a person's face. They may include skin tone, skin texture, facial structure, cosmetic features, and the like.
Gait features refer to characteristics of a person's gait, such as the magnitude, direction, and point of application of force while walking; they reflect a person's walking habits across stepping down, lifting the foot, supporting, and swinging. Gait features may include a person's step size, stride length, stride frequency, pace, gait cycle, and the like.
Accessory features refer to features of what a person wears or carries, for example a work badge, a helmet, a delivery box, a drinking-water bucket, or a cart carried by a person, and also, for example, clothing, headwear, or hats on a person. In some embodiments, the accessory features include clothing features, hair style features, wearing-article features, carried-item features, and the like.
The identity recognition module may obtain the at least one preset object feature in several ways, for example acquiring face features and accessory features by image recognition, and acquiring gait features by gait analysis of the video images.
By combining gait features and accessory features, identification failures in situations where the face is occluded can be avoided, and the accuracy of identity recognition can be improved.
Step 420, determining the identity of the preset object based on the at least one preset object feature.
In some embodiments, the identity recognition module may determine the identity of the preset object based on the at least one preset object feature.
In some embodiments, first, the identity recognition module may determine at least one preset identity and the at least one preset object feature corresponding to each preset identity. The at least one preset identity may include the owner's family, strangers, the owner's neighbors and friends, couriers and takeout deliverers, enterprise employees, enterprise visitors, enterprise logistics personnel, and the like. The preset object features corresponding to each preset identity can be user-defined or obtained by feature extraction from historical data.
Second, the identity recognition module may determine, among the at least one preset identity, a preset identity matching the at least one preset object feature of the preset object, and determine the identity of the preset object based on the matched preset identity. For example, the courier identity matches features such as wearing a work badge, wearing a uniform, and carrying a delivery box; when a preset object exhibits these features, the identity recognition module can set the identity of the preset object to courier.
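A sketch of this feature-set matching (the feature vocabulary, the overlap scoring, and the "stranger" fallback are assumptions for illustration):

```python
# Preset identities and the object features each one matches
# (user-defined or extracted from historical data, per the text above).
PRESET_IDENTITIES = {
    "courier":         {"work badge", "uniform", "delivery box"},
    "family member":   {"known face", "known gait"},
    "property worker": {"work badge", "toolbox"},
}

def match_identity(observed: set, min_overlap: int = 2) -> str:
    """Pick the preset identity sharing the most features with the observed
    object; fall back to 'stranger' below a minimal overlap."""
    best, best_n = "stranger", 0
    for identity, feats in PRESET_IDENTITIES.items():
        n = len(feats & observed)
        if n > best_n:
            best, best_n = identity, n
    return best if best_n >= min_overlap else "stranger"

print(match_identity({"work badge", "uniform", "delivery box"}))  # courier
print(match_identity({"umbrella"}))                               # stranger
```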
In some embodiments, the identity recognition module may detect preset object activity around the door using a sensor that can detect the preset object, such as a passive infrared (PIR) sensor or a laser distance sensor. When the sensor detects that a person approaches (for example, the distance between the person and the door is smaller than a preset threshold), the identity recognition module can perform human-shape detection on the image, estimate the height of the preset object from the detected shape, then calculate the optimal face recognition position from that height (for example, 30 cm away from the door), and perform more accurate face recognition at the optimal position. Human-shape detection refers to detecting the shape, contour, etc. of the preset object. In some embodiments, the identity recognition module may further indicate the optimal face recognition position through a feedback screen interface to guide the preset object in front of the door to move to it. In some embodiments, the identity recognition module may further mark the optimal standing position on the ground with laser light, the optimal standing position being the position corresponding to the optimal face recognition position.
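The height-to-position step can be read as a small geometry exercise. A sketch under explicit assumptions, none of which come from the patent: the camera mounting height, its upward tilt, and the face-height ratio are all illustrative:

```python
import math

def optimal_face_distance(person_height_m: float,
                          camera_height_m: float = 1.5,   # assumed mount height
                          camera_tilt_deg: float = 10.0,  # assumed upward tilt of optical axis
                          face_ratio: float = 0.93) -> float:
    """Horizontal distance from the door at which the face (assumed to sit at
    face_ratio * body height) lies on the camera's optical axis."""
    face_height = face_ratio * person_height_m
    dh = face_height - camera_height_m  # negative => face below camera; use a downward tilt
    tilt = math.radians(camera_tilt_deg)
    if abs(math.tan(tilt)) < 1e-6:
        raise ValueError("horizontal axis: face height must equal camera height")
    return dh / math.tan(tilt)

# A 1.75 m person with the assumed geometry:
print(f"{optimal_face_distance(1.75):.2f} m")  # ~0.72 m
```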
In some embodiments, the accuracy of identity recognition can be improved by prompting the optimal face position and the optimal standing position.
FIG. 5 is another exemplary flow chart of a method of identifying an identity according to some embodiments of the present description.
In some embodiments, the identity recognition module may recognize the identity of the preset object through a machine learning model based on the second video. The machine learning model may include, but is not limited to, one or a combination of a neural network model, a support vector machine model, a k-nearest neighbor model, a decision tree model, and the like. The input of the machine learning model may include images related to the preset object, and the output of the machine learning model may include the identity of the preset object, e.g., family member, courier, etc.
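A minimal sketch of such a classifier is shown below, using the k-nearest-neighbor model mentioned above over placeholder feature vectors. In practice the inputs would be feature embeddings extracted from frames of the second video; the feature dimensionality, labels, and random training data here are placeholders for the example.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Illustrative identity classifier. Training data is random placeholder data;
# a real system would train on feature vectors extracted from video frames.

rng = np.random.default_rng(0)
X_train = rng.normal(size=(20, 8))                          # 20 placeholder feature vectors
y_train = rng.choice(["family", "courier", "stranger"], size=20)

model = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
print(model.predict(rng.normal(size=(1, 8))))               # e.g. ['family']
```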
As shown in fig. 5, the machine learning model may initially label the identified preset object with an identity and its features. On one hand, the push level determination module may determine the preset object push level based on the initially labeled identity and the activity scene identified in fig. 3. On the other hand, the identity recognition module may prompt the user to verify whether the initial label is correct, for example, by requesting verification in the information pushed to the user, or by prompting periodically (e.g., at 8 p.m. each day). If the user finds the initial label incorrect, the preset object can be relabeled.
In some embodiments, the face, gait, and accessory features, etc. of relabeled preset objects, together with the corresponding identity classes, may be stored in a sample library for training the machine learning model.
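For illustration, the sketch below appends relabeled samples to a simple file-based sample library from which the model could later be retrained; the file format and record fields are assumptions introduced for the example.

```python
import json
from pathlib import Path

# Illustrative sketch of a sample library for relabeled (features, identity)
# pairs. The JSON-lines layout and field names are assumptions for the example.

SAMPLE_LIB = Path("sample_library.jsonl")

def store_relabeled_sample(features: dict, identity: str) -> None:
    """Append one corrected (features, identity) pair to the sample library."""
    record = {"features": features, "identity": identity}
    with SAMPLE_LIB.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

store_relabeled_sample({"gait": [0.2, 0.7], "accessory": ["uniform"]}, "courier")
```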
Fig. 6 is a schematic diagram of a surveillance video distribution method according to some embodiments of the present description.
In some embodiments, different activity scenarios and different identities correspond to different push levels.
As shown in fig. 6, in a first type of activity scene (e.g., visiting, entering, going out, etc.), if the preset object identity is family member, neighbor, or courier, a lower preset object push level (e.g., level 2 or level 3) applies; if the preset object identity is a stranger, a higher push level (e.g., level 1) applies.
In some embodiments, the trajectory determination module may record the duration of the activity track of the same identity. For example, in a second type of activity scene (e.g., staying, etc.), if the duration is greater than or equal to a threshold (e.g., 5 minutes), a higher push level (e.g., level 1) applies; if the duration is less than the threshold, a lower push level (e.g., level 3) applies.
In some embodiments, the push level determination module may determine the push level based on the scene type, the duration, and the preset object identity. For example, when the duration in the second type of activity scene is greater than the threshold, if the preset object identity is property staff or cleaning staff, a lower push level (e.g., level 3) applies; if the preset object identity is a stranger, a higher push level (e.g., level 1) applies. As shown in fig. 6, if the duration in the first type of scene is greater than or equal to the threshold (e.g., 5 minutes) and the preset object identity is courier, a higher push level (e.g., level 1) applies.
In some embodiments, the push level determination module may further adjust the push level according to sounds and/or actions detected by the respective cameras. For example, in the second type of activity scene, if an action or sound such as pressing the doorbell or knocking on the door is detected, the push level is set to level 2; if an action or sound such as picking the lock is detected, the push level is set to level 1.
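Taken together, the rules in the preceding paragraphs can be summarized in a single decision sketch. The thresholds, identity names, and level assignments below mirror the examples given above, but the overall function is only one illustrative reading of those examples, not a definitive implementation.

```python
# Illustrative decision sketch combining activity scene, identity, dwell
# duration, and detected sounds/actions into a push level (1 = most urgent).
# Scene/identity names and thresholds are assumptions echoing the examples.

STAY_THRESHOLD_S = 5 * 60  # the 5-minute example threshold

def push_level(scene: str, identity: str, duration_s: float,
               detected_event: str | None = None) -> int:
    if detected_event == "lock_picking":
        return 1
    if detected_event in ("doorbell", "knock"):
        return 2
    if scene == "second_type":  # e.g. staying without passing the door
        if duration_s >= STAY_THRESHOLD_S:
            return 3 if identity in ("property_staff", "cleaning") else 1
        return 3
    # first type of scene: visiting, entering, going out
    if identity == "stranger":
        return 1
    if identity == "courier" and duration_s >= STAY_THRESHOLD_S:
        return 1
    return 2

print(push_level("second_type", "stranger", 400))  # 1
```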
In some embodiments, the user may set forecast event information through the terminal 120. The forecast event information may include a preset identity, a preset occurrence time, and the like. For example, a friend will visit at 10:00, a takeout will arrive at 11:50, etc.
In some embodiments, the push level determination module may receive the forecast event information set by the user, extract the preset identity, the preset occurrence time, and the like therefrom, and set a preset push level for the forecast event. In some embodiments, when the identity determination module identifies, at the preset occurrence time, a person corresponding to the preset identity, pushing is performed according to the preset push level. For example, the push level determination module may set the push level of the event "takeout will arrive at 11:50" to level 1; at 11:50, when the identity determination module identifies a delivery person at the door, the event is pushed to the user in the manner corresponding to the level-1 push level (for example, by calling the user's phone).
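A possible sketch of this forecast matching is shown below; the record fields and the ±15 minute matching window are assumptions introduced for the example.

```python
from datetime import datetime, timedelta

# Illustrative sketch of matching an identified person against user-set
# forecast events. Field names and the ±15 minute window are assumptions.

forecast_events = [
    {"identity": "courier", "expected": datetime(2022, 2, 15, 11, 50), "level": 1},
]

def forecast_level(identity: str, seen_at: datetime) -> int | None:
    """Return the preset push level if the sighting matches a forecast event."""
    window = timedelta(minutes=15)
    for event in forecast_events:
        if (event["identity"] == identity
                and abs(seen_at - event["expected"]) <= window):
            return event["level"]
    return None

print(forecast_level("courier", datetime(2022, 2, 15, 11, 52)))  # 1
```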
Some possible benefits of embodiments of the present application include, but are not limited to: (1) the identity can be recognized from gait, accessory, and similar features when the face is occluded or the face image is incomplete, improving the accuracy and efficiency of identity recognition; (2) personnel samples can be collected automatically, enabling adaptive learning, avoiding manual sample collection, and saving labor cost; (3) monitoring blind spots can be avoided by collecting videos from multiple directions, such as the first video and the second video; (4) the activity scene can be determined from the activity track, and the push level can be determined by jointly considering the activity scene and the identity, achieving reasonable and effective distribution of the surveillance video while avoiding both information bombardment and important information being drowned out. It should be noted that different embodiments may produce different advantages; in different embodiments, any one or a combination of the above advantages may be obtained, or any other advantages may be obtained.
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing detailed disclosure is to be regarded as illustrative only and not as limiting the present specification. Various modifications, improvements and adaptations to the present description may occur to those skilled in the art, although not explicitly described herein. Such modifications, improvements and adaptations are proposed in the present specification and thus fall within the spirit and scope of the exemplary embodiments of the present specification.
Also, this specification uses specific words to describe its embodiments. Reference to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with that embodiment is included in at least one embodiment of this specification. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment," "one embodiment," or "an alternative embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, particular features, structures, or characteristics of one or more embodiments of this specification may be combined as appropriate.
Additionally, the order in which the elements and sequences of the process are recited in the specification, the use of alphanumeric characters, or other designations, is not intended to limit the order in which the processes and methods of the specification occur, unless otherwise specified in the claims. While various presently contemplated embodiments of the invention have been discussed in the foregoing disclosure by way of example, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments herein. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing server or mobile device.
Similarly, it should be noted that in the preceding description of embodiments of the present specification, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the embodiments. This method of disclosure, however, is not intended to imply that the claimed subject matter requires more features than are expressly recited in each claim. Indeed, claimed embodiments may have less than all of the features of a single embodiment disclosed above.
Some embodiments use numbers to describe quantities of components, attributes, and the like; it should be understood that such numbers used in describing the embodiments are, in some instances, qualified by the modifiers "about", "approximately", or "substantially". Unless otherwise indicated, "about", "approximately", or "substantially" indicates that the stated number allows a variation of ±20%. Accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximations that may vary depending on the desired properties of the individual embodiment. In some embodiments, numerical parameters should take into account the specified significant digits and employ a general method of preserving digits. Although the numerical ranges and parameters used to define the broad scope of some embodiments are approximations, in specific examples such numerical values are set forth as precisely as practicable.
For each patent, patent application publication, and other material cited in this specification, such as articles, books, specifications, publications, documents, etc., the entire contents thereof are hereby incorporated by reference into this specification. Application history documents that are inconsistent with or conflict with the contents of this specification are excluded, as are documents (currently or later appended to this specification) that limit the broadest scope of the claims of this specification. It should be noted that if there is any inconsistency or conflict between the descriptions, definitions, and/or use of terms in the materials accompanying this specification and those set forth herein, the descriptions, definitions, and/or use of terms in this specification shall prevail.
Finally, it should be understood that the embodiments described herein are merely illustrative of the principles of the embodiments of the present disclosure. Other variations are also possible within the scope of the present description. Thus, by way of example, and not limitation, alternative configurations of the embodiments of the specification can be considered consistent with the teachings of the specification. Accordingly, the embodiments of the present description are not limited to only those embodiments explicitly described and depicted herein.

Claims (14)

1. A surveillance video distribution method, comprising:
acquiring a first video of a first preset area and a second video of a second preset area, wherein the security level of the first preset area is higher than that of the second preset area;
determining an activity track of a preset object based on the first video;
determining an activity scene of the preset object based on the activity track;
identifying the identity of the preset object based on the second video;
determining a push level of the first video and/or the second video based on the activity scene and the identity;
distributing the first video and/or the second video to one or more users based on the push level.
2. The method of claim 1, wherein the determining an activity track of a preset object based on the first video comprises:
performing optical flow analysis on the first video by applying an optical flow algorithm to determine the activity track of the preset object.
3. The method of claim 1, wherein the determining an activity scene of the preset object based on the activity track comprises:
determining at least one type of preset activity scene and an activity track corresponding to each preset activity scene, wherein the at least one type of preset activity scene at least comprises a first type of activity scene and a second type of activity scene; wherein the first type of activity scenario comprises a scenario of passing through the sensitive monitoring area; the second type of activity scene comprises scenes which do not pass through the sensitive monitoring area;
and determining the preset activity scene matched with the activity track of the preset object in the at least one type of preset activity scene, and determining the activity scene of the preset object based on the matched preset activity scene.
4. The method of claim 3, wherein:
the activity track corresponding to the first class of activity scenes comprises: at least one of entering the sensitive monitoring area trajectory and leaving the sensitive monitoring area trajectory;
the activity track corresponding to the second type of activity scene comprises a conventional track.
5. The method of claim 3, wherein the method further comprises:
determining a preset object moving state based on the first video, wherein the preset object moving state comprises preset object movement or no preset object movement;
determining the moving state of the preset object corresponding to each preset activity scene;
determining the preset activity scene matched with the activity track of the preset object and the movement state of the preset object in the at least one type of preset activity scene, and determining the activity scene of the preset object based on the matched preset activity scene;
and the preset object movement corresponds to the first type of activity scene, and the non-preset object movement corresponds to the second type of activity scene.
6. The method of claim 3, wherein the method further comprises:
determining a door lock state, wherein the door lock state comprises an opening state or a closing state;
determining the door lock state corresponding to each preset activity scene;
determining the preset activity scene matched with the activity track of the preset object and the door lock state in the at least one type of preset activity scene, and determining the activity scene of the preset object based on the matched preset activity scene; the opening state corresponds to the first type of activity scene, and the closing state corresponds to the second type of activity scene.
7. The method of claim 1, wherein the identifying the identity of the preset object based on the second video comprises:
acquiring at least one preset object feature from a human face feature, a preset object gait feature and a preset object accessory feature based on the second video, wherein the preset object accessory feature comprises at least one of a clothing feature, a hair style feature and a wearing article feature;
determining the identity of the preset object based on the at least one preset object feature.
8. The method of claim 7, wherein said determining the identity of the preset object based on the at least one preset object feature comprises:
determining at least one preset identity and at least one preset object characteristic corresponding to each preset identity;
determining the preset identity matched with the at least one preset object characteristic of the preset object in the at least one preset identity, and determining the identity of the preset object based on the matched preset identity.
9. The method of claim 1, wherein, for different said activity scenes, different said identities correspond to different preset object push levels.
10. The method of claim 1, wherein different preset object push levels correspond to different push methods.
11. The method of claim 1, wherein the identifying the identity of the preset object based on the second video comprises:
identifying the identity of the preset object through a machine learning model based on the second video.
12. A surveillance video distribution system, the system comprising:
the video acquisition module is used for acquiring a first video of a first preset area and a second video of a second preset area, and the security level of the first preset area is higher than that of the second preset area;
the track determining module is used for determining an activity track of a preset object based on the first video;
the scene determining module is used for determining an activity scene of the preset object based on the activity track;
the identity determination module is used for identifying the identity of the preset object based on the second video;
a push level determination module for determining a push level of the first video and/or the second video based on the activity scene and the identity; and
a distribution module to distribute the first video and/or the second video to one or more users based on the push level.
13. A surveillance video distribution apparatus, the apparatus comprising a processor and a memory; the memory is configured to store instructions that, when executed by the processor, cause the apparatus to implement the method of any of claims 1-11.
14. A computer-readable storage medium, wherein the storage medium stores computer instructions which, when executed by a processor, implement the method of any one of claims 1 to 11.
CN202210137781.3A 2021-07-08 2022-02-15 Monitoring video distribution method and system Active CN114205565B (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN202210807908.8A CN115243009A (en) 2022-02-15 2022-02-15 Monitoring video distribution method and system
CN202210137781.3A CN114205565B (en) 2022-02-15 2022-02-15 Monitoring video distribution method and system
PCT/CN2022/104406 WO2023280273A1 (en) 2021-07-08 2022-07-07 Control method and system
CN202280048533.XA CN117730524A (en) 2021-08-13 2022-07-07 Control method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210137781.3A CN114205565B (en) 2022-02-15 2022-02-15 Monitoring video distribution method and system

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202210807908.8A Division CN115243009A (en) 2022-02-15 2022-02-15 Monitoring video distribution method and system

Publications (2)

Publication Number Publication Date
CN114205565A true CN114205565A (en) 2022-03-18
CN114205565B CN114205565B (en) 2022-07-29

Family

ID=80658945

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202210137781.3A Active CN114205565B (en) 2021-07-08 2022-02-15 Monitoring video distribution method and system
CN202210807908.8A Pending CN115243009A (en) 2022-02-15 2022-02-15 Monitoring video distribution method and system

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202210807908.8A Pending CN115243009A (en) 2022-02-15 2022-02-15 Monitoring video distribution method and system

Country Status (1)

Country Link
CN (2) CN114205565B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105100689A (en) * 2014-05-13 2015-11-25 杭州海康威视数字技术股份有限公司 Automatic teller machine (ATM) video surveillance method and apparatus
CN106937086A (en) * 2015-12-30 2017-07-07 霍尼韦尔国际公司 Video monitoring system with selectable operation scenario and the system training for the perception of improved situation
CN109345680A (en) * 2018-08-29 2019-02-15 中国建设银行股份有限公司 A kind of whole scene interactive approach, control equipment and computer-readable medium
WO2020248386A1 (en) * 2019-06-14 2020-12-17 平安科技(深圳)有限公司 Video analysis method and apparatus, computer device and storage medium
CN111815675A (en) * 2020-06-30 2020-10-23 北京市商汤科技开发有限公司 Target object tracking method and device, electronic equipment and storage medium
CN112672115A (en) * 2020-12-22 2021-04-16 珠海格力电器股份有限公司 Monitoring equipment control method and device, monitoring equipment and storage medium
CN113449592A (en) * 2021-05-18 2021-09-28 浙江大华技术股份有限公司 Escort task detection method, escort task detection system, electronic device and storage medium
CN113971782A (en) * 2021-12-21 2022-01-25 云丁网络技术(北京)有限公司 Comprehensive monitoring information management method and system

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023280273A1 (en) * 2021-07-08 2023-01-12 云丁网络技术(北京)有限公司 Control method and system

Also Published As

Publication number Publication date
CN114205565B (en) 2022-07-29
CN115243009A (en) 2022-10-25

Similar Documents

Publication Publication Date Title
CN108062809B (en) A kind of house access control system for realizing personnel's big data collection analysis
US10796514B2 (en) System and method for optimizing a facial recognition-based system for controlling access to a building
CN110491004B (en) Resident community personnel safety management system and method
CN107958258B (en) Method and system for tracking objects in a defined area
WO2022121059A1 (en) Intelligent integrated access control management system based on 5g internet of things and ai
US20180247504A1 (en) Identification of suspicious persons using audio/video recording and communication devices
US11315400B1 (en) Appearance based access verification
US20210196169A1 (en) Methods and System for Monitoring and Assessing Employee Moods
WO2018180588A1 (en) Facial image matching system and facial image search system
CN110766895A (en) Intelligent community abnormity alarm system and method based on target trajectory analysis
JPWO2015136938A1 (en) Information processing method and information processing system
CN113053013B (en) Access control method and device based on face recognition, computer equipment and medium
CN110930577A (en) Method for analyzing unregistered but actually living in personnel based on entrance guard data
CN114205565B (en) Monitoring video distribution method and system
CN108615286B (en) Intelligent management method and system for cell
CN108320359A (en) A kind of city Internet of Things wisdom gate inhibition and safety-protection system
CN106600471A (en) Guest service analysis decision system based on smart hotel
CN105976469A (en) Intelligent doorkeeper management system with robot and application thereof
CN112699328A (en) Network point service data processing method, device, system, equipment and storage medium
KR102150642B1 (en) Smart door system using wireless charging
CN207817817U (en) A kind of Identification of Images gate inhibition equipment Internet-based
CN112002052A (en) Data sharing method for smart community
CN115409654A (en) Unmanned hotel management method and system
US11983963B2 (en) Anti-spoofing visual authentication
Girme et al. Internet of Things Based Intelligent Security using Android Application

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 100094 room 1101, 11 / F, building 1, zone 2, 81 Beiqing Road, Haidian District, Beijing

Patentee after: Yunding Network Technology (Beijing) Co.,Ltd.

Address before: 100094 room 1101, 11 / F, building 1, zone 2, 81 Beiqing Road, Haidian District, Beijing

Patentee before: YUN DING NETWORK TECHNOLOGY (BEIJING) Co.,Ltd.