CN113378616A - Video analysis method, video analysis management method and related equipment

Video analysis method, video analysis management method and related equipment

Info

Publication number
CN113378616A
Authority
CN
China
Prior art keywords
video
camera
analysis
subscription
management device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110224956.XA
Other languages
Chinese (zh)
Inventor
陈普 (Chen Pu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Cloud Computing Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to PCT/CN2021/079312 (published as WO2021180004A1)
Publication of CN113378616A

Classifications

    • G06F18/22 Pattern recognition: Analysing; Matching criteria, e.g. proximity measures
    • H04N23/60 Cameras or camera modules comprising electronic image sensors: Control of cameras or camera modules
    • H04N23/80 Cameras or camera modules comprising electronic image sensors: Camera processing pipelines; Components thereof
    • H04N7/18 Television systems: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181 Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Alarm Systems (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The present application provides a video analysis method applied to the field of video analysis. The method includes: obtaining a first video shot by a first camera, obtaining interaction data of a second video, and then performing a first video analysis task on the first video based on the interaction data of the second video to obtain an analysis result of the first video. Because interaction data produced by other video analysis tasks is used during the execution stage of a video analysis task, when an artificial intelligence (AI) application triggers a task such as target search or target tracking, the response can be produced from the data stored during the execution stage of the video analysis task, without searching and computing a large number of feature values. This avoids bursty resource consumption and improves response efficiency.

Description

Video analysis method, video analysis management method and related equipment
The present application claims priority to Chinese Patent Application No. 202010159116.5, entitled "Distributed Visual Analysis Method and System", filed with the China National Intellectual Property Administration on March 9, 2020, which is incorporated herein by reference in its entirety.
Technical Field
The present application relates to the field of video analysis, and in particular, to a video analysis method, a video analysis management method, and related apparatuses, devices, computer-readable storage media, and computer program products.
Background
Video surveillance is an important means of security protection. A traditional video surveillance system consists of front-end cameras, transmission cables, and a video surveillance platform; surveillance personnel in a monitoring room can view, through the video surveillance platform, the video of the monitored area shot by the front-end cameras, which removes the labor cost of keeping watch on site.
As the number of cameras grows and video surveillance tasks become more complex, the cost of manually viewing videos keeps rising. To this end, artificial intelligence (AI) technology has been introduced to assist video analysis. Specifically, when applications need to execute tasks such as target search and target tracking, feature extraction is performed in the background on the video streams shot by each camera to obtain the feature vectors corresponding to those video streams, and the feature vectors are then stored. When an application triggers such a task, the stored feature vectors are searched and computed according to the task, thereby implementing target search and target tracking.
However, when an application triggers a task, searching and computing the stored feature vectors according to the task causes bursty resource consumption and, in turn, bursty resource demand. Moreover, this scheme requires a large amount of computation and is inefficient.
Disclosure of Invention
The present application provides a video analysis method. The method analyzes a video in combination with interaction data produced by other video analysis tasks during the execution stage of a video analysis task, so that when an application triggers a task, a response can be made directly from the data stored during the execution stage of the video analysis task, without searching and computing a large number of feature values. This avoids bursty resource consumption and bursty resource demand, reduces the amount of computation, and improves response efficiency. The present application further provides a video analysis management method, and an apparatus, a device, a computer-readable storage medium, and a computer program product corresponding to the methods.
In a first aspect, the present application provides a video analysis method. When the camera is an intelligent camera having an analysis capability, the video analysis method may be performed by the intelligent camera. When the camera is a non-intelligent camera (also referred to as an ordinary camera) without an analysis capability, the video analysis method may instead be performed by a video monitoring platform in the background (for example, by a video analysis device in the video monitoring platform).
The camera or the video analysis device may obtain a video shot by the camera (for example, a first video) and obtain interaction data of a video shot by another camera (for example, a second video), where the interaction data is obtained by performing a video analysis task on that video. The camera or the video analysis device then performs a first video analysis task on the first video based on the interaction data of the second video to obtain an analysis result of the first video.
The method analyzes the video in combination with interaction data produced by other video analysis tasks during the execution stage of the video analysis task. In this way, when an application triggers a task such as target search or target tracking, analysis can be performed directly on the data stored during the execution stage of the video analysis task (for example, the data associated with the interaction data of the second video, obtained by performing the first video analysis task on the first video), without searching and computing a large number of feature values. On one hand, bursty resource consumption is avoided, and hence bursty resource demand is avoided: the requirements can be met without configuring a high-performance computing cluster and high-capacity resources, which improves resource utilization and reduces cost. On the other hand, analyzing the first video based on the interaction data of the second video achieves collaborative analysis, narrows the analysis scope, reduces the amount of computation, and improves analysis efficiency.
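For illustration only (the present application does not prescribe a concrete implementation), the following Python sketch shows one possible shape of this collaborative analysis step. The detector, the similarity comparator, and all field names are assumptions made for the example:

```python
from dataclasses import dataclass, field

@dataclass
class InteractionData:
    """Interaction data produced by the second video's analysis task.
    Field names are illustrative assumptions, not the application's schema."""
    track_id: str                 # identifies the target to be tracked
    target_type: str              # e.g. "person" or "vehicle"
    feature: list[float] = field(default_factory=list)

def perform_first_video_analysis_task(frames, interaction, detect, similarity,
                                      threshold=0.93):
    """Analyze the first video using the second video's interaction data.

    `frames` yields (timestamp, frame) pairs; `detect` and `similarity` are
    caller-supplied stand-ins for a real detector and feature comparator."""
    analysis_result = []
    for timestamp, frame in frames:
        for target in detect(frame):
            if target["type"] != interaction.target_type:
                continue  # the interaction data narrows the analysis scope
            if similarity(target["feature"], interaction.feature) >= threshold:
                # Keep data associated with the interaction data, so a later
                # target search or tracking request can be answered from
                # stored results instead of a bulk feature-value search.
                analysis_result.append({"track_id": interaction.track_id,
                                        "time": timestamp, "target": target})
    return analysis_result
```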
In some possible implementations, the camera or the video analysis device may also obtain action logic. The action logic includes one or more of the following parameters: a task type parameter for executing the first video analysis task, a time parameter for executing the first video analysis task, and a condition parameter for executing the first video analysis task.
The task type parameter describes the type of the video analysis task. Task types may include face monitoring (monitoring a designated face), human body monitoring (monitoring a designated human body), license plate monitoring (monitoring a designated license plate), vehicle feature monitoring (monitoring a designated vehicle by feature values), crowd counting statistics, vehicle counting statistics, or specific behavior detection. The specific behavior can be set as required; for example, it may be any one or more of behaviors such as not wearing a mask, fighting, soliciting business, or making a phone call while driving.
The time parameter describes the execution time of the video analysis task. The execution time may include an execution start time and may further include an execution duration or an execution end time. The execution start time may be determined according to the distance between the cameras, and further according to the speed of the target.
The condition parameter may include a similarity threshold, which may be used to determine whether a target in the first video and the target to be tracked in the second video are the same target. The similarity threshold may be set according to an empirical value, for example, 0.93.
The action logic may also indicate adjustment logic for the camera, such as adjusting the direction and/or the focal length of the camera. For example, in a specific behavior detection and alarm scenario, the action logic may include adjusting the direction and focal length of the camera so that the camera can focus on the targets for behavior detection.
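One way to picture the action logic is as a small configuration object. The sketch below is only an illustration consistent with the description above; the field names, units, and the helper deriving the start delay from the inter-camera distance and target speed are assumptions:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ActionLogic:
    """Illustrative container for the action logic parameters above."""
    task_type: str                            # e.g. "face_monitoring"
    start_delay_s: float                      # execution start time offset
    duration_s: Optional[float] = None        # or an explicit end time
    similarity_threshold: float = 0.93        # condition parameter (empirical)
    adjust_pan_deg: Optional[float] = None    # optional direction adjustment
    adjust_focal_mm: Optional[float] = None   # optional focal length adjustment

def estimated_start_delay(camera_distance_m: float, target_speed_mps: float) -> float:
    """Rough time until the target could reach this camera, based on the
    distance between the cameras and the target's speed, both of which the
    description says may determine the execution start time."""
    return camera_distance_m / max(target_speed_mps, 0.1)
```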
By executing the first video analysis task on the first video based on the action logic and the interaction data of the second video, the camera or the video analysis device can, on one hand, achieve collaborative analysis of videos from associated regions and, on the other hand, perform targeted analysis through the action logic, such as analyzing only human faces or only human bodies, which improves analysis efficiency and hence response efficiency.
In some possible implementations, the camera or the video analysis device may receive the action logic sent by a management device. The action logic indicates the analysis logic for the video and may be written by an administrator according to a programming interface provided by the management device. The administrator may be a professional capable of writing action logic.
In some possible implementations, the analysis result of the first video includes data associated with the interaction data of the second video. In this way, the camera or the video analysis device can perform analysis based on that associated data and quickly respond to tasks such as target detection and target tracking, without searching and computing a large number of feature values; bursty resource consumption and bursty resource demand are avoided, and response efficiency is improved.
In some possible implementations, the interaction data of the second video includes information of the target to be tracked that appears in the second video. The information of the target to be tracked may include any one or more of the attributes, the feature values, or a target image of the target to be tracked. Considering the transmission overhead of the interaction data, the target image may be stored, and its storage address may then replace the target image itself in the transmitted interaction data, which greatly reduces the amount of transmitted data and hence the transmission overhead.
Accordingly, the camera or the video analysis device may identify a target in the first video to obtain information of the target, and then obtain the analysis result of the first video according to the information of the target and the information of the target to be tracked in the second video. The analysis result includes the information of each target whose similarity to the target to be tracked in the second video satisfies a preset condition.
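A minimal sketch of this comparison step follows, assuming cosine similarity as the metric (the application does not fix one) and assuming the interaction data carries a feature vector plus the storage address of the target image rather than the image itself:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def match_against_tracked(detected_targets, tracked, threshold=0.93):
    """Return the info of every target in the first video whose similarity to
    the target to be tracked (from the second video's interaction data) meets
    the preset condition. The record layout is an assumption."""
    matches = []
    for target in detected_targets:
        score = cosine_similarity(target["feature"], tracked["feature"])
        if score >= threshold:
            matches.append({**target,
                            "similarity": score,
                            "tracked_image_addr": tracked["image_addr"]})
    return matches
```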
In this way, the target can be tracked through the videos of the associated regions during the video analysis stage. When a target detection or target tracking task is triggered, it can be answered directly from the data produced in the video analysis stage, without comparing and searching a large number of feature values, which avoids bursty resource consumption and bursty resource demand and improves response efficiency.
In some possible implementations, the camera or the video analysis device may identify the target in the first video according to a pre-trained artificial intelligence model, such as a human body detection model, a face detection model, a crowd counting model, or a vehicle counting model, to obtain the information of the target.
Identifying the target in the first video with an artificial intelligence model enables automated video monitoring, which on one hand reduces labor cost and on the other hand avoids errors caused by manual watching, improving the comprehensiveness and accuracy of video monitoring.
In some possible implementations, a subscription relationship exists between the first camera and the second camera. A subscription relationship between cameras corresponds to a subscription relationship for the interaction data of the videos shot by those cameras. For example, if the first camera subscribes to the second camera, the first video analysis task that analyzes the first video shot by the first camera subscribes to the interaction data of the second video shot by the second camera.
The subscription relationship is pre-constructed by the management device. Specifically, the management device may establish subscription relationships between cameras in different modes. During video analysis, the management device can then, based on the subscription relationship, send a camera (or the video analysis device) the interaction data of the videos shot by the other cameras that camera subscribes to, so that the camera or the video analysis device achieves collaborative analysis based on the interaction data and improves analysis efficiency.
In some possible implementations, the video analysis method described above may be performed by the first camera, which may be an intelligent camera. Similarly, the interaction data of the second video may be obtained by the second camera performing a second video analysis task on the second video, where the second camera may also be an intelligent camera.
In some possible implementations, the first camera acquires the interaction data of the second video from the second camera. For example, when the cameras are connected point-to-point, they can directly send and receive interaction data based on the subscription relationship, without relaying it through the management device. This improves the efficiency of exchanging interaction data and avoids the resource occupation caused by relaying through the management device.
In some possible implementations, the first camera may also obtain the interaction data of the second video from the management device. Specifically, the management device may be configured to forward interaction data based on the subscription relationship: the camera or the video analysis device performs the second video analysis task on the second video to obtain the interaction data of the second video and reports it to the management device, and the management device then delivers the interaction data of the second video to the first camera according to the subscription relationship. This simplifies the processing logic of the cameras and lowers the capability requirements on them.
In some possible implementations, the camera or the video analysis device may generate interaction data of the first video according to the analysis result of the first video, for example, by using part or all of the information in the analysis result as the interaction data of the first video, and then send the interaction data of the first video to the management device or to a third camera, where a subscription relationship exists between the third camera and the first camera. In this way, continuous analysis can be achieved, for example, continuous tracking of a target.
In some possible implementations, the method is performed by a video analysis device that establishes communication connections with the first camera and the second camera.
In some possible implementations, the video analysis method is applied to security monitoring, where the first video analysis task includes one or more of the following tasks: person detection and tracking, vehicle detection and tracking, crowd counting statistics, vehicle counting statistics, and specific behavior detection and alarm.
In a second aspect, the present application provides a video analysis management method. The method may be performed by a management device that is communicatively connected to a plurality of cameras. The management device may be a software device deployable on a general-purpose device such as a server, or a hardware device having a management function. For convenience of description, this application takes a software management device as an example.
Specifically, the management device receives interaction data of a second video, where the second video is shot by the second camera, and then sends the interaction data of the second video to the first camera or the video analysis device, so that the first camera or the video analysis device performs a first video analysis task on the first video based on the interaction data of the second video. A subscription relationship exists between the first camera and the second camera, and the first video is shot by the first camera.
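The forwarding role of the management device can be sketched as follows; the class layout and the `send` transport callback are assumptions made for illustration:

```python
from collections import defaultdict
from typing import Callable

class ManagementDevice:
    """Relays interaction data from publishing cameras to their subscribers."""

    def __init__(self, send: Callable[[str, dict], None]):
        self.send = send                       # send(camera_id, payload)
        self.subscribers = defaultdict(set)    # publisher id -> subscriber ids

    def add_subscription(self, subscriber_id: str, publisher_id: str) -> None:
        self.subscribers[publisher_id].add(subscriber_id)

    def on_interaction_data(self, publisher_id: str, data: dict) -> None:
        # Deliver the second video's interaction data to every first camera
        # (or video analysis device) that subscribes to the second camera.
        for subscriber_id in self.subscribers[publisher_id]:
            self.send(subscriber_id, data)
```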
The method analyzes the video in combination with interaction data produced by other video analysis tasks during the execution stage of the video analysis task. Therefore, when an application triggers a task such as target search or target tracking, analysis can be performed directly on the data stored during the execution stage of the video analysis task, without searching and computing a large number of feature values. This avoids bursty resource consumption and bursty resource demand, narrows the analysis scope, reduces the amount of computation, and improves analysis efficiency.
In some possible implementations, before receiving the interaction data of the second video, the management device may establish the subscription relationship between the first camera and the second camera among the at least one camera. The management device needs to perform the step of establishing the subscription relationship only once, and can respond to subsequent tasks directly based on the established subscription relationship, which avoids unnecessary resource waste and improves response efficiency.
In some possible implementations, the management device may establish the subscription relationship between the first camera and the second camera among the at least one camera in a preconfigured mode. Specifically, the management device receives a subscription instruction sent by the first camera, where the subscription instruction includes a subscription parameter (for example, a distance parameter), queries for the second camera that satisfies the subscription parameter, and establishes the subscription relationship between the first camera and the second camera.
In some possible implementations, the management device may establish the subscription relationship between the first camera and the second camera among the at least one camera in a passive execution mode. Specifically, the management device receives a subscription instruction sent by an administrator, where the subscription instruction may include the identifiers of the cameras whose subscription relationship is to be established, and then establishes the subscription relationship between the first camera and the second camera according to the subscription instruction.
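The two modes can be sketched as follows, under the assumption that the subscription parameter of the preconfigured mode is a distance and that the management device knows the camera positions:

```python
import math

def preconfigured_subscriptions(positions: dict, subscriber_id: str,
                                max_distance_m: float) -> set:
    """Preconfigured mode: the first camera's subscription instruction carries
    a distance parameter, and the management device queries the cameras that
    satisfy it. `positions` maps camera id -> (x, y) in metres (assumed)."""
    sx, sy = positions[subscriber_id]
    return {camera_id for camera_id, (x, y) in positions.items()
            if camera_id != subscriber_id
            and math.hypot(x - sx, y - sy) <= max_distance_m}

# Passive execution mode: the administrator names the cameras explicitly in
# the subscription instruction, so no query is needed, for example:
#   management_device.add_subscription("camera_1", "camera_2")
```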
In this way, the management device can establish subscription relationships at different stages through the preconfigured mode or the passive execution mode, for example, when a video analysis task is started or while it is being executed, and achieve collaborative analysis based on the subscription relationships, thereby satisfying personalized analysis requirements.
In some possible implementations, the management device may further send action logic to the first camera or the video analysis device, so that the first camera or the video analysis device performs the first video analysis task on the first video based on the interaction data of the second video and the action logic. In this way, video analysis can be performed in a targeted manner based on the action logic, improving analysis efficiency.
In some possible implementations, before sending the action logic to the first camera or the video analysis device, the management device may obtain the action logic written by an administrator through a programmable interface of the management device. In some embodiments, the management device may also obtain action logic configured by the user through the video analysis application.
In a third aspect, the present application provides a video analysis apparatus. The apparatus includes:
a communication unit, configured to acquire a first video shot by a first camera and acquire interaction data of a second video, where the second video is shot by a second camera;
an analysis unit, configured to perform a first video analysis task on the first video based on the interaction data of the second video to obtain an analysis result of the first video, where the interaction data of the second video is obtained by performing a second video analysis task on the second video.
In some possible implementations, the communication unit is further configured to:
obtain action logic, where the action logic includes one or more of the following parameters: a task type parameter for executing the first video analysis task, a time parameter for executing the first video analysis task, and a condition parameter for executing the first video analysis task;
the analysis unit is specifically configured to:
perform the first video analysis task on the first video based on the action logic and the interaction data of the second video.
In some possible implementations, the communication unit is specifically configured to:
receive the action logic sent by a management device, where the action logic is written by an administrator according to a programming interface provided by the management device.
In some possible implementations, the analysis result of the first video includes data associated with the interaction data of the second video.
In some possible implementations, the interaction data of the second video includes information of the target to be tracked that appears in the second video;
the analysis unit is specifically configured to:
identify a target in the first video to obtain information of the target;
obtain an analysis result of the first video according to the information of the target and the information of the target to be tracked in the second video, where the analysis result includes the information of each target whose similarity to the target to be tracked in the second video satisfies a preset condition.
In some possible implementations, the analysis unit is specifically configured to:
identify the target in the first video according to a pre-trained artificial intelligence model to obtain the information of the target.
In some possible implementations, a subscription relationship exists between the first camera and the second camera, and the subscription relationship is pre-constructed by a management device.
In some possible implementations, the analysis unit is further configured to:
generate interaction data of the first video according to the analysis result of the first video;
the communication unit is further configured to:
send the interaction data of the first video to a management device or a third camera, where a subscription relationship exists between the third camera and the first camera.
In a fourth aspect, the present application provides a management device. The management device is communicatively connected to a plurality of cameras and includes:
a communication unit, configured to receive interaction data of a second video, where the second video is shot by the second camera, and to send the interaction data of the second video to the first camera or the video analysis device, so that the first camera or the video analysis device performs a first video analysis task on the first video based on the interaction data of the second video, where a subscription relationship exists between the first camera and the second camera, and the first video is shot by the first camera.
In some possible implementations, the apparatus further includes:
a subscription unit, configured to establish a subscription relationship between the first camera and the second camera among the at least one camera.
In some possible implementations, the subscription unit is specifically configured to:
receive a subscription instruction sent by the first camera, where the subscription instruction includes a subscription parameter;
query for the second camera that satisfies the subscription parameter according to the subscription parameter, and establish a subscription relationship between the first camera and the second camera.
In some possible implementations, the subscription unit is specifically configured to:
receive a subscription instruction sent by an administrator;
establish a subscription relationship between the first camera and the second camera according to the subscription instruction.
In some possible implementations, the communication unit is further configured to:
send action logic to the first camera or the video analysis device, so that the first camera or the video analysis device performs the first video analysis task on the first video based on the interaction data of the second video and the action logic.
In some possible implementations, the communication unit is further configured to:
obtain action logic written by an administrator through a programmable interface of the management device, or obtain action logic configured by a user through a video analysis application.
In a fifth aspect, the present application provides a device including a processor and a memory that communicate with each other. The memory stores executable program code, and the processor reads the executable program code stored in the memory to implement the functions of the video analysis apparatus according to any implementation of the third aspect of the present application, or the functions of the management device according to any implementation of the fourth aspect of the present application.
In a sixth aspect, the present application provides a camera. The camera includes a processor, a memory, and an image sensor, where the image sensor is configured to acquire a first video, the memory stores executable program code, and the processor reads the executable program code to implement the functions of the video analysis apparatus according to the third aspect of the present application.
In a seventh aspect, the present application provides a computer-readable storage medium storing instructions that, when run on a device, cause the device to implement the functions of the video analysis apparatus according to any implementation of the third aspect of the present application, or the functions of the management device according to any implementation of the fourth aspect of the present application.
In an eighth aspect, the present application provides a computer program product containing instructions that, when executed by a device, implement the functions of the video analysis apparatus according to any implementation of the third aspect of the present application, or the functions of the management device according to any implementation of the fourth aspect of the present application.
On the basis of the implementations provided by the above aspects, the present application can be further combined to provide more implementations.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings used in the embodiments are briefly described below.
Fig. 1 is an application scene diagram of a video analysis method according to an embodiment of the present application;
fig. 2A is a schematic interface diagram of triggering target detection or track search in an application according to an embodiment of the present application;
fig. 2B is a schematic flowchart of a track search according to an embodiment of the present application;
fig. 3 is a schematic diagram of the track of a target according to an embodiment of the present application;
fig. 4 is an application scene diagram of a video analysis method according to an embodiment of the present application;
fig. 5 is an application scene diagram of a video analysis method according to an embodiment of the present application;
fig. 6 is a flowchart of a video analysis method according to an embodiment of the present application;
fig. 7 is a positional relationship diagram of a camera provided in the embodiment of the present application;
fig. 8 is a flowchart of a management method for video analysis according to an embodiment of the present application;
fig. 9 is an interaction flowchart of a video analysis method according to an embodiment of the present application;
fig. 10 is an interaction flowchart of a video analysis method according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a video analysis apparatus according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of a management device according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of an apparatus provided in an embodiment of the present application;
fig. 14 is a schematic structural diagram of an apparatus according to an embodiment of the present application;
fig. 15 is a schematic structural diagram of a camera provided in an embodiment of the present application.
Detailed Description
The terms "first" and "second" in the embodiments of the present application are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature.
Some technical terms referred to in the embodiments of the present application will be first described.
Video monitoring refers to monitoring a geographic area with video monitoring devices (also called cameras, which may be fixed at a certain position or mounted on a mobile carrier such as an unmanned aerial vehicle or a patrol vehicle) that record the field conditions of that geographic area. Video monitoring technically removes the labor cost of manual watching (inspection personnel going to the site) and is therefore widely applied. For example, it can be applied to community management, enterprise management, road traffic management, and other occasions, so as to safeguard the personal and property safety of community residents, the security of enterprise assets, and the smooth flow of traffic.
Although traditional video monitoring removes on-site watching, the videos still need to be viewed manually. As the number of deployed cameras increases, the workload of manually viewing videos grows, and the labor cost rises accordingly. For this reason, the industry has introduced artificial intelligence (AI) technology to assist in analyzing videos, so as to reduce the workload of manual viewing and reduce labor cost.
Artificial intelligence is the simulation of human mental processes and/or intelligent behaviors (e.g., learning, reasoning, thinking, and planning) by a computer program running on a computer. As a branch of artificial intelligence, deep learning (DL) has made great progress in tasks such as image recognition; therefore, the industry mainly adopts deep learning to assist in analyzing videos for video monitoring.
Deep learning extracts features of data to find deep feature representations of the data, and then performs classification, regression, prediction, and the like on the data. It is mainly applied to perception and decision scenarios in the field of artificial intelligence, such as image recognition, speech recognition, natural language translation, and computer gaming.
In an AI-based video monitoring scenario, the video monitoring platform decodes the videos shot by the cameras (for example, cameras installed in different geographic areas) to obtain the images corresponding to each video, extracts feature vectors from the images using a deep learning model, and stores the feature vectors. A feature vector can be represented as binary data, also referred to as a feature value. When an application triggers a task, for example, a target search and target tracking task, the video monitoring platform searches and computes the stored feature values according to the task: it searches the stored feature values according to the search condition and then computes the similarity between the found feature values and a preset feature value, thereby obtaining the movement track of the target and implementing target search and tracking.
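For contrast, the background approach just described behaves roughly like the sketch below (names and record layout are assumed): all of the comparison work lands at the moment the application triggers the task.

```python
def on_demand_track_search(stored_records, query_feature, threshold, similarity):
    """Background approach: scan every stored feature value only when the
    application triggers the task. `stored_records` is an iterable of
    (camera_id, timestamp, feature) tuples; `similarity` is the comparator."""
    hits = [(timestamp, camera_id)
            for camera_id, timestamp, feature in stored_records
            if similarity(feature, query_feature) >= threshold]
    return [camera_id for _, camera_id in sorted(hits)]  # track, in time order
```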
However, when the application triggers the task, the video monitoring platform searches and computes the stored feature values according to the task, which causes bursty resource consumption and hence bursty resource demand. For example, the video monitoring platform must configure a high-performance computing cluster to support feature value search and a large-capacity memory to support feature value comparison. These resources (e.g., computing resources, memory resources) may sit idle outside the periods of feature value search and comparison, resulting in low resource utilization. In addition, when an application triggers a task, the video monitoring platform needs to perform a large amount of computation to obtain the result required by the application (such as a target track), so the response efficiency of the application is low.
In view of this, the present application provides a video analysis method. When the cameras include an intelligent camera having an analysis capability, the video analysis method may be performed by the intelligent camera. When a camera is a non-intelligent camera (also referred to as an ordinary camera) without an analysis capability, the video analysis method may instead be performed by a video monitoring platform in the background (for example, a video analysis device in the video monitoring platform).
Specifically, the camera or the video analysis device may obtain a video shot by the camera (for convenience of description, referred to in the embodiments of the present application as the first video) and obtain interaction data of a video shot by another camera (referred to as the second video), where the interaction data is obtained by performing a video analysis task on that video. The camera or the video analysis device then performs a first video analysis task on the first video based on the interaction data of the second video to obtain an analysis result of the first video.
The method analyzes the video in combination with interaction data produced by other video analysis tasks during the execution stage of the video analysis task. In this way, when an application triggers a task such as target search or target tracking, analysis can be performed directly on the data stored during the execution stage of the video analysis task (for example, the data associated with the interaction data of the second video, obtained by performing the first video analysis task on the first video), without searching and computing a large number of feature values. On one hand, bursty resource consumption is avoided, and hence bursty resource demand is avoided: the requirements can be met without configuring a high-performance computing cluster and high-capacity resources, which improves resource utilization and reduces cost. On the other hand, analyzing the first video based on the interaction data of the second video achieves collaborative analysis, narrows the analysis scope, reduces the amount of computation, and improves analysis efficiency.
For example, in a target search and tracking task scenario, when the first video shot by a camera is analyzed, the interaction data of the second video shot by another camera is obtained; the interaction data may include the type, color, and feature values of the target to be tracked. When this interaction data is applied to the analysis of the first video, all targets captured in the first video can be detected and their features extracted and then compared against the type, color, and feature values of the tracked target in the interaction data, so as to determine the information of the targets in the first video whose similarity to the tracked target satisfies the preset condition, thereby obtaining the analysis result. The analysis result of the first video can then be reported to the management device, or used to generate the interaction data of the first video for the analysis tasks of other videos.
It should be noted that the method is not a video analysis method developed for a specific service and can be widely applied to various video monitoring scenarios. For example, it can be applied to tracking criminal suspects in criminal investigation scenarios, monitoring incidents that violate public security regulations (such as fighting) in security management scenarios, monitoring vehicles involved in accidents in traffic management scenarios, or identifying very important persons (VIPs) in shopping mall management scenarios and dispersing dense crowds to optimize people flow.
In order to make the technical solution of the present application clearer and easier to understand, an application scenario of the present application is described below with reference to the drawings.
Referring to the schematic diagram of an application scenario of the video analysis method shown in fig. 1, the scenario includes a plurality of cameras 100, a video monitoring platform 200, and an application 300. The plurality of cameras 100 each establish a communication connection with the video monitoring platform 200, and the video monitoring platform 200 establishes a communication connection with the application 300.
The application 300 may be a video analysis application that provides any one or more of functions such as track search, target detection, target tracking, target counting, and specific behavior detection and alarm. A target can be an entity such as a person, a vehicle, or an animal, and a specific behavior refers to a preset detectable behavior such as fighting or not wearing a mask. Based on the video monitoring platform 200, the video analysis application may provide functions such as person detection and tracking, vehicle detection and tracking, crowd counting statistics, vehicle counting statistics, and specific behavior detection and alarm. In this embodiment, the application 300 may be a dedicated client implementing functions such as track search and target detection, or a browser providing a track search service and a target detection service.
The application 300 may provide an interactive interface, which may be a graphical user interface (GUI) or a command user interface (CUI). For ease of understanding, the following description takes the GUI as an example.
As shown in fig. 2A, the application 300 presents, through the GUI, the videos shot by the plurality of cameras 100, such as videos 1 to N shot by cameras 1 to N. The GUI also carries a target detection control and a track search control, through which the user (specifically, a person using the application 300, such as security personnel or an investigator) may trigger a target detection operation or a track search operation. Taking the track search operation as an example, the user may trigger the track search control and input an image of the target to be searched, thereby triggering a track search operation on that target, and the application 300 may send a track search request to the video monitoring platform 200 in response.
The video monitoring platform 200 includes a management device 202 and a storage and search device 204. When the plurality of cameras 100 connected to the video monitoring platform 200 include an ordinary camera (i.e., a camera not capable of analyzing and processing video), the video monitoring platform 200 may further include a video analysis device 206, which performs video analysis tasks on the videos shot by the cameras 100. When the plurality of cameras 100 connected to the video monitoring platform 200 are all smart cameras (i.e., cameras capable of analyzing and processing video), the functions of the video analysis device 206 may instead be performed by the smart cameras.
The management device 202 is configured to establish subscription relationships among the plurality of cameras 100. For example, with the first camera and the second camera being cameras at different geographic positions, the management device 202 may receive a subscription instruction sent by the first camera, where the subscription instruction includes a subscription parameter, query for the second camera that satisfies the subscription parameter, and establish a subscription relationship between the first camera and the second camera.
The first camera shoots a first video, and the second camera shoots a second video. The second camera or the video analysis device 206 may perform a second video analysis task on the second video to obtain the interaction data of the second video, where the interaction data is used for interaction between video analysis tasks or between cameras. Specifically, the management device 202 is further configured to receive the interaction data of the second video and send it to the first camera or the video analysis device 206 according to the subscription relationship between the first camera and the second camera. In this way, the first camera or the video analysis device 206 may perform the first video analysis task on the first video based on the interaction data of the second video and obtain the analysis result of the first video, which includes data associated with the interaction data of the second video.
In some possible implementations, the management device 202 further includes a programming interface unit for acquiring the action logic written by an administrator through a programming interface, and the management device 202 sends the action logic to each camera or to the video analysis device 206. The action logic indicates the analysis logic for the video and may include any one or more of a task type parameter, a time parameter, and a condition parameter for executing the video analysis task. For example, the first camera or the video analysis device 206 may perform the first video analysis task on the first video based on the action logic of the first video analysis task and the interaction data of the second video, thereby obtaining the data associated with the interaction data of the second video.
The storage and search device 204 is configured to store the analysis results of the videos, such as the analysis result of the second video and the analysis result of the first video, and the analysis results of different videos can be associated with each other. For example, if the analysis result of the first video is associated with the interaction data of the second video, the analysis result of the first video and the analysis result of the second video are stored in the storage and search device 204 with an association relationship, which may be embodied by a shared association identifier or by a relationship table; this is not limited in this application. In addition, the analysis result of the first video may be used to generate the interaction data of the first video, which may be sent to the management device 202 or to another camera (for example, a third camera) having a subscription relationship with the first camera, so as to analyze a third video shot by the third camera. Correspondingly, the analysis result of the third video is associated with the interaction data of the first video, and hence also with the analysis result of the first video.
The storage and search device 204 may also provide a search interface through which search conditions can be received and the data satisfying them searched out. For example, the storage and search device 204 supports searching by time, geographic location (e.g., where a camera is located), camera identifier, attribute, feature value, image, and the like. Note that when the storage and search device 204 performs a search based on an image, the image may first be converted into a feature value.
The storage and search device 204 is further configured to generate the track of the target in response to the track search request, based on the interaction data of the second video and the data associated with that interaction data. For example, referring to fig. 2B, suppose the first, second, and third cameras subscribe to one another and a user needs to obtain the track of a tracked target. When the second camera finds the tracked target, it sends the interaction data of the second video to the first camera; the first camera analyzes the first video based on that interaction data, also finds the tracked target, and sends the interaction data of the first video to the third camera; the third camera analyzes the third video based on the interaction data of the first video and likewise finds the tracked target. The analysis results of the second, first, and third videos may be stored in the storage and search device 204, for example through a relationship table, and are associated with one another. Based on these associated analysis results, the track of the tracked target can be obtained conveniently when needed, without a large number of feature value search and comparison operations; this avoids bursty resource consumption and bursty resource demand, improves resource utilization, and improves analysis efficiency without heavy computation.
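The effect of storing associated results can be sketched as follows; the shared association identifier and the record layout are assumptions consistent with the relationship-table description above:

```python
from collections import defaultdict

class StorageAndSearch:
    """Stores analysis results keyed by a shared association identifier, so a
    track search is a lookup plus a time sort, not a feature-value scan."""

    def __init__(self):
        self.records = defaultdict(list)   # association id -> [(time, camera)]

    def store(self, association_id: str, camera_id: str, timestamp: float) -> None:
        self.records[association_id].append((timestamp, camera_id))

    def track(self, association_id: str) -> list:
        # The target's track: camera ids ordered by detection time.
        return [camera_id for _, camera_id in sorted(self.records[association_id])]
```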
Further, the storage and search device 204 may return the track of the target to the application 300, and the application 300 presents it to the user through a user interface, such as a GUI. Specifically, referring to the schematic diagram of the track of a target shown in fig. 3, the application 300 may obtain the geographic positions of the plurality of cameras 100, generate a distribution diagram of the cameras 100 based on those positions, and then display the track of the target on the distribution diagram. As shown in fig. 3, each camera 100 is represented by a circle; when a camera detects the target, a portrait is displayed at the corresponding position in the distribution diagram, indicating that the target appeared at that position. The application 300 may connect the detections in chronological order with directional arrows on the distribution diagram and present the track of the target as the path formed by those arrows.
In other possible implementations, the functions of the management device 202 in the video monitoring platform 200 may also be implemented by the application 300. Referring to the application scenario of the video analysis method shown in fig. 4, the scenario includes a plurality of cameras 100, a video monitoring platform 200, and an application 300. The video monitoring platform 200 includes a storage and search device 204 and may further include a video analysis device 206; their functions are as described in the embodiment shown in fig. 1. The application 300 additionally implements the functions of the management device 202 in the embodiment shown in fig. 1.
Specifically, the application 300 includes a management device 302, which may be a functional module, a plug-in, an applet, or the like of the application 300. The management device 302 is configured to receive a subscription instruction sent by an administrator and establish a subscription relationship between the first camera and the second camera according to the subscription instruction. In this way, the management device 302 can send the interaction data of the second video to the first camera or the video analysis device 206 according to the subscription relationship. When the management device is implemented by the application 300, the user of the application 300 may act as the administrator, or a professional administrator may configure the application 300 before the user uses its functions. When the management device is independent of the application 300, the administrator is usually not a user of the application 300, but a professional responsible for establishing subscription relationships and configuring action logic.
In some possible implementations, the management device 302 is further configured to obtain the action logic configured by the administrator or the user through the application 300 and send it to the first camera or the video analysis device 206. Accordingly, the first camera or the video analysis device 206 may perform the first video analysis task on the first video according to the action logic and the interaction data of the second video, so as to obtain the analysis result of the first video.
In the scenario shown in fig. 1, the camera 100 may be a bullet camera, a dome camera, or a camera on a gate or access control system. Further, the camera 100 may be installed at a fixed position or attached to a mobile carrier, such as a camera on an unmanned aerial vehicle or on a vehicle-mounted device.
The application 300 may be deployed in a terminal. The terminal includes but is not limited to a desktop, a notebook computer, a tablet computer, a smart phone or a smart wearable device. Wherein, intelligence wearing equipment can include intelligent wrist-watch, intelligent bracelet, intelligent glasses etc..
The video surveillance platform 200 may be deployed in a cloud environment or may be deployed in a local data center. Wherein the cloud environment indicates a cloud computing cluster owned by a cloud service provider for providing computing, storage, and communication resources. A cloud computing cluster may include a central cloud comprising at least one central computing device (e.g., a central server) and an edge cloud comprising at least one edge computing device (e.g., an edge server). The local data center refers to a data center to which the user belongs. The local data center includes at least one local computing device, for example, at least one local server.
The parts of the video monitoring platform 200 may be deployed in a centralized manner in a cloud environment or a local data center, or may be deployed in a distributed manner in a cloud environment or a local data center. The embodiment of the present application is illustrated by the video surveillance platform 200 being distributively deployed in a cloud environment.
Referring to the system architecture diagram of the video analysis method shown in fig. 5, the video analysis device 206 of the video monitoring platform 200 is deployed in an edge cloud, and the management device 202 and the storage and search device 204 of the video monitoring platform 200 are deployed in a central cloud. The video monitoring platform 200 in the cloud environment can perform collaborative analysis on the video shot by the camera 100. The application 300 is deployed at a terminal. The terminal can run the application 300 to interact with the video monitoring platform 200 in the cloud environment, so that video monitoring is achieved.
When the video monitoring platform 200 is deployed in a cloud environment, the video analysis method can be provided to users as a cloud service. Specifically, an instance of the video surveillance platform 200 runs in the cloud environment, and the application 300 may respond to a user-triggered operation, such as a track search operation, by interacting with that instance, thereby implementing the function of the application.
When the video monitoring platform 200 is deployed in a local data center, a user may install a software package of the video monitoring platform 200 on a local server and then run the installed video monitoring platform 200, thereby implementing the video analysis method according to the embodiment of the present application. In some embodiments, the software package may be an installation-free package, which the user may run directly on a server to the same effect.
Next, the video analysis method provided in the embodiment of the present application is described in detail, taking as an example a video monitoring scene in which the cameras associated with the video monitoring platform 200 are common cameras, so that the video analysis device 206 of the video monitoring platform 200 executes the video analysis tasks on the videos shot by the cameras.
Referring to fig. 6, a flow chart of a video analysis method is shown, the method comprising:
S602: The video analysis device 206 acquires a first video shot by the first camera.
The first camera is specifically any one or more of a plurality of cameras deployed in a monitored area. The monitored area is the geographical area covered by the cameras. In some embodiments, the monitored area may include any one or more geographic areas such as a mall, a residential community, a company campus, or a road. The video taken by the first camera is referred to as the first video.
The video analysis device 206 may obtain the first video shot by the first camera in real time, receive the first video actively reported by the first camera, or periodically obtain the first video from the first camera. In addition, in consideration of transmission efficiency, the first camera may compress the first video to obtain a first compressed video, and the video analysis device 206 may obtain the first compressed video and then decompress the first compressed video to obtain the first video.
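For illustration only (this sketch is not part of the original publication), the following shows how a video analysis device might pull and decode a camera's compressed stream; the RTSP URL and the use of OpenCV are assumptions.

```python
# Minimal sketch: pull and decode frames from the first camera's stream.
# Assumptions: the camera exposes an RTSP endpoint and OpenCV (with FFmpeg) is available.
import cv2

def read_first_video(rtsp_url: str, max_frames: int = 300):
    capture = cv2.VideoCapture(rtsp_url)  # decompression of the stream is handled internally
    frames = []
    while capture.isOpened() and len(frames) < max_frames:
        ok, frame = capture.read()
        if not ok:
            break
        frames.append(frame)
    capture.release()
    return frames

# Hypothetical usage:
# frames = read_first_video("rtsp://first-camera.example/stream1")
```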
S604: The video analysis device 206 obtains interaction data of the second video.
The second video is shot by the second camera. The second camera is one or more of the plurality of cameras deployed in the monitored area; the second camera and the first camera are cameras at different geographic locations. The interactive data is information for describing video contents, specifically information describing targets appearing in the video. A target is an object that exists tangibly in the physical world, for example, a movable living body such as a person or an animal, or a movable non-living body such as a vehicle.
The interactive data of the second video may be obtained by performing a second video analysis task on the second video. The interactive data type may be different depending on the video analysis task. The following illustrates the interactive data in connection with different video analysis tasks.
For example, when the video analysis task is target detection and target tracking, the interaction data may be any one or more of attributes, feature values, or target images of the target. In consideration of the transmission overhead of the interactive data, the video analysis device 206 may store the target image and carry the storage address of the target image in the interactive data instead of the image itself, which can greatly reduce the amount of transmitted data and thus the transmission overhead. The storage address can be characterized by a binary value, an encoded value (e.g., a base64 encoded value), or a Uniform Resource Locator (URL).
The attributes of the target describe the target at an abstract level. For example, when the target is a person, the attributes may include any one or more of the type of the target being a person, the gender of the person, the height of the person, and the clothing of the person (including, for example, the type and color of clothing, shoes, hats, and/or glasses). For another example, when the target is a vehicle, the attributes may include the type of the target being a vehicle, the license plate number, the vehicle type, the vehicle color, and the like.
The feature value refers to a value expressing a feature of interest. The feature value may be a feature vector obtained by extracting features of the image, or a tensor, or a binary value obtained based on the feature vector. In order to improve the accuracy and reduce the amount of calculation, the video analysis device 206 may extract features of a specific portion of the target in the image to obtain feature values. For example, when the type of the target is a human, the video analysis device 206 may perform feature extraction on the human face to obtain a feature value.
The target image refers to an image frame including a target in the video stream. In some possible implementations, the target image may also be an image of a particular part of the target in an image frame of the target. For example, when the target is a human, the target image may include a human face image, a human body image, and the like.
For another example, when the video analysis task is target counting statistics (e.g., crowd counting statistics or vehicle counting statistics), the interaction data may further include any one or more of the number of targets, the density of targets, and the like. Density measures how concentrated the targets are and can generally be characterized by the number of targets per unit area. When the video analysis task is detection of and alarming on a specific behavior, the interactive data may also include the behavior of a target, and the like.
In geographic areas such as railway stations, subway stations, and squares, the video analysis device 206 can also detect the number of people in the area through an AI model and, when the number of people exceeds a set first threshold, issue a risk prompt, for example, a reminder that the risk of a stampede is high. Likewise, in geographic areas such as roads or parking lots, the video analysis device 206 may detect the number of vehicles in the area through the AI model and prompt the user to detour or park in another area when the number of vehicles exceeds a set second threshold. The first threshold and the second threshold may be set according to empirical values, which is not limited in the embodiment of the present application.
For ease of understanding, the embodiments of the present application also provide an example of interactive data; in the original publication, the example is reproduced only as an image.
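Since that example is unavailable here, the sketch below reconstructs it from the description in the next paragraph; all field names and values are assumptions.

```python
# Reconstructed sketch of interactive data; field names and values are assumed.
interaction_data = {
    "data_id": "example-0001",            # data identifier: uniquely identifies this interactive data
    "attributes": {                       # attributes of the target
        "type": "person",
        "gender": "female",
        "jacket_color": "red",
    },
    "face_feature": [0.12, -0.07, 0.31],  # face feature value (truncated for illustration)
    "face_image": "base64:<encoded image or storage URL>",  # coded value of the face image
}
```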
In the above example, the interaction data includes attributes of the target (for example, the type being a person, the gender of the person, and the jacket color), a face feature value, and a coded value of the face image. The interactive data also includes a data identifier, which uniquely identifies the interactive data.
In the target tracking scenario, the interactive data of the second video includes information of the target to be tracked appearing in the second video. In some embodiments, the video analysis device 206 may determine that a target appearing in the second video is the target to be tracked when that target matches a preset monitoring target: the video analysis device 206 may perform feature extraction on the target appearing in the second video and on the image of the monitoring target, calculate the distance between the feature values, and determine whether they match based on that distance. In other embodiments, the video analysis device 206 may determine the target appearing in the second video as the target to be tracked in response to a user's specifying operation on the target. For example, the application 300 may present the second video to the user, the user may specify a target appearing in the second video, and the application 300 may report the user-triggered specifying operation to the video analysis device 206, which then determines the specified target as the target to be tracked.
In some possible implementations, a subscription relationship exists between the first camera and the second camera. Correspondingly, the interactive data of the first video shot by the first camera and the interactive data of the second video shot by the second camera have a subscription relationship. The video analysis device 206 may receive the interaction data of the second video sent by the management device 202 based on the subscription relationship.
The subscription relationship may be pre-constructed by the management apparatus 202. The management apparatus 202 may establish a subscription relationship through multiple modes. The different modes are described in detail below.
One mode is a pre-configured mode. Specifically, the management device 202 receives a subscription instruction sent by the first camera, where the subscription instruction includes a subscription parameter, and then the management device 202 queries the second camera meeting the subscription parameter according to the subscription parameter, and establishes a subscription relationship between the first camera and the second camera.
The subscription parameter may include a distance parameter. When the video analysis device 206 starts the first video analysis task, the first camera may send a subscription instruction to the management device 202, where the subscription instruction carries the following subscription parameter: the peripheral distance is not greater than n kilometers (km), where n is greater than 0. n may be set according to an empirical value, for example, to 2; correspondingly, the subscription parameter may be expressed as receiver. In this way, the management device 202 may determine the second camera meeting the subscription parameter according to the positional relationship between the cameras 100 and the distance parameter, thereby establishing the subscription relationship between the first camera and the second camera.
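As a concrete, non-normative illustration of this pre-configured mode, the sketch below matches cameras against a distance parameter; the camera coordinates, data layout, and function names are assumptions.

```python
# Sketch: find second cameras satisfying a "peripheral distance <= n km" subscription parameter.
import math

CAMERA_POSITIONS = {  # camera id -> (latitude, longitude); values are invented
    "first_camera": (31.2304, 121.4737),
    "camera_b": (31.2400, 121.4800),
    "camera_c": (31.3200, 121.5500),
}

def distance_km(p, q):
    """Approximate great-circle (haversine) distance between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def match_subscription(first_camera: str, n_km: float):
    origin = CAMERA_POSITIONS[first_camera]
    return [cam for cam, pos in CAMERA_POSITIONS.items()
            if cam != first_camera and distance_km(origin, pos) <= n_km]

# e.g. match_subscription("first_camera", 2.0) -> ["camera_b"]
```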
Further, the management device 202 may determine the second camera having a subscription relationship with the first camera more accurately by combining distance with the historical average vehicle speed and/or walking time. For example, in a hit-and-run vehicle tracking scenario, the cameras that the vehicle may reach within the next hour can be found based on distance and the historical average vehicle speed, and subscription relationships can be established between those cameras and the first camera. For another example, in a person tracking scenario, the cameras that the pedestrian may reach within the next hour can be found based on distance and walking time, and subscription relationships can likewise be established with the first camera.
The distance between cameras 100, the historical average vehicle speed, the walking time, and the like may be stored in a map, specifically in a map database. Fig. 7 provides an example of a positional relationship graph of the cameras 100. As shown in fig. 7, each camera 100 serves as a node of the graph, and the distance and the historical average vehicle speed between two cameras 100 may be stored as an edge of the graph.
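A minimal sketch of such a graph, with invented values, might look as follows (the edge attributes follow the description of fig. 7; the representation is an assumption):

```python
# Sketch: cameras as graph nodes, with distance and historical average vehicle
# speed stored on the edges (all values invented for illustration).
camera_graph = {
    ("camera_a", "camera_b"): {"distance_km": 2.0, "avg_speed_kmh": 40.0},
    ("camera_b", "camera_c"): {"distance_km": 1.5, "avg_speed_kmh": 30.0},
}

def expected_travel_seconds(edge):
    """Estimated travel time along an edge; a camera qualifies for the
    subscription if the target may reach it within the window (e.g. 1 hour)."""
    return edge["distance_km"] / edge["avg_speed_kmh"] * 3600
```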
The other mode is a passive execution mode. Specifically, the management device 202 receives a subscription instruction sent by a manager, and the management device 202 establishes a subscription relationship between the first camera and the second camera according to the subscription instruction. In some scenarios, the administrator may generate a subscription instruction through the programmable interface, where the subscription instruction may include an identifier of the camera 100 that is to establish the subscription relationship, and then send the subscription instruction to the management apparatus 202, so that the management apparatus 202 may directly establish the subscription relationship between the first camera and the second camera based on the subscription instruction. The video analysis apparatus 206 may passively receive interaction data of other video analysis tasks, such as interaction data of a second video, when performing video analysis. Further, the video analysis device 206 may also receive action logic according to which the first video analysis task is performed on the first video. For example, the video analysis device 206 may perform face monitoring, body monitoring, and the like on the first video according to the action logic.
In some possible implementations, the subscription relationship may also be established by the camera 100 through a communication mode. Specifically, the cameras 100 may communicate with each other, and thus, the cameras 100 may establish a subscription relationship based on information such as distance. For example, a first camera may broadcast a subscription request message to other cameras, where the subscription request message is used to request a camera within a distance of 2km from the first camera to establish a subscription relationship, and a camera receiving the subscription request message may determine whether the distance is within 2km, and if so, return a subscription response message. Thus, the first camera establishes a subscription relationship with the second camera. The first camera may then also report the established subscription relationship to the management device 202.
For security, the management device 202 may also authenticate the subscription rights before establishing the subscription relationship. When the subscription authority authentication passes, the management device 202 allows the subscription relationship to be established, and when the subscription authority authentication fails, the management device 202 refuses to establish the subscription relationship. When the subscription relationship is established by the camera 100 through the communication mode, the camera 100 may perform subscription authority authentication. The subscription authority authentication can be realized through a signature verification mechanism based on a public and private key pair.
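The patent does not prescribe a concrete mechanism; a minimal sketch of signature-based subscription authority authentication, assuming Ed25519 keys and the third-party Python cryptography library, might look like this:

```python
# Sketch: verify a signed subscription instruction before establishing the relationship.
# Assumptions: Ed25519 keys and the 'cryptography' library; message layout is invented.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

subscriber_key = ed25519.Ed25519PrivateKey.generate()
public_key = subscriber_key.public_key()   # registered with the verifier beforehand

instruction = b'{"subscriber": "first_camera", "max_distance_km": 2}'
signature = subscriber_key.sign(instruction)

# The management device (or a camera, in the communication mode) verifies:
try:
    public_key.verify(signature, instruction)
    subscription_allowed = True
except InvalidSignature:
    subscription_allowed = False
```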
It should be noted that S602 and S604 may be executed in parallel or sequentially in a set order; for example, the video analysis device 206 may execute S602 first and then S604, or execute S604 first and then S602. This embodiment of the present application is described with S602 executed before S604 merely as an example.
S606: the video analysis device 206 performs a first video analysis task on the first video based on the interactive data of the second video, and obtains an analysis result of the first video.
The interaction data of the second video includes related data of the objects appearing in the second video, including, for example, attributes of the objects, feature values of the objects, images of the objects, the number of the objects, or the density of the objects, etc. The video analysis device 206 may perform a first video analysis task on the first video through the AI model based on the interactive data to obtain an analysis result of the first video, where the analysis result of the first video includes data associated with the interactive data of the second video.
Specifically, the video analysis device 206 may identify the target in the first video, for example, by a pre-trained AI model, to obtain information of the target. The pre-trained AI models may include a face detection model, a human body detection model, a crowd counting model, a vehicle counting model, and the like. The video analysis device 206 then obtains the analysis result of the first video according to the information of the target and the information of the target to be tracked in the second video. The analysis result includes information of targets whose similarity with the target to be tracked in the second video meets a preset condition; the similarity meeting the preset condition may mean that the similarity reaches a preset similarity threshold, that the similarity is the maximum among candidates, or the like.

The following takes a person detection and tracking scenario as an example. The video analysis device 206 may identify people in the first video and obtain information of at least one person, which may include at least one of a gender, a jacket color, a face feature value, and a face image. The video analysis device 206 compares the information of at least one person in the first video with the information of the person to be tracked in the second video; for example, the similarity of the face feature values may be calculated, and when the similarity reaches a preset similarity threshold (for example, 0.95), the person in the first video is determined to be the person to be tracked, and the video analysis device 206 obtains the analysis result of the first video from the information of that person.
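As an illustration of the comparison step, the sketch below computes the similarity of two face feature values using cosine similarity (one common choice; the patent does not fix the metric) and applies the 0.95 threshold from the example above:

```python
# Sketch: compare face feature values from the first and second videos.
import numpy as np

SIMILARITY_THRESHOLD = 0.95  # preset threshold from the example above

def cosine_similarity(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(feature_first_video, feature_to_track):
    """True when the person in the first video matches the person to be tracked."""
    return cosine_similarity(feature_first_video, feature_to_track) >= SIMILARITY_THRESHOLD
```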
In some possible implementations, the video analysis device 206 may further obtain action logic, for example, action logic issued by the management device 202, which may be written by a manager through a programming interface provided by the management device 202. The video analysis device 206 then performs the first video analysis task on the first video based on the action logic and the interaction data of the second video, so as to obtain the analysis result of the first video.
The action logic indicates the analysis logic for the video and may include one or more of the following parameters: a task type parameter for executing the video analysis task, a time parameter for executing the video analysis task, and a condition parameter for executing the video analysis task.
The task type parameter describes the type of the video analysis task. Task types may include face monitoring (monitoring a designated face), human body monitoring (monitoring a designated human body), license plate monitoring (monitoring a designated license plate), vehicle feature monitoring (monitoring a designated vehicle by feature values), crowd counting statistics, vehicle counting statistics, or specific behavior detection. The specific behavior can be set according to requirements and may be, for example, any one or more of behaviors such as not wearing a mask, fighting, unauthorized street vending, and making a phone call while driving.
The time parameter describes the execution time of the video analysis task. The execution time may include an execution start time and, further, an execution duration or an execution end time. The execution start time may be determined according to the distance between the cameras 100; for example, when the distance between the cameras is 2 km, execution of the video analysis task may be started after 100 seconds (s), as determined from the distance and the speed.
The condition parameter may include a similarity threshold. The similarity threshold may be used to determine whether the target in the first video and the target to be tracked in the second video are the same target. For example, when the similarity between the target and the target to be tracked is greater than the similarity threshold, the target is determined to be the same target, otherwise, the target is determined to be a different target. The similarity threshold may be set according to an empirical value, and may be set to 0.93, for example.
In other possible implementations, the action logic may also indicate adjustment logic for the camera 100, including adjusting the orientation and/or focus of the camera 100. For example, in a specific behavior detection and alarm scenario, the action logic may include adjusting the orientation and focus of the camera 100 so as to focus on the target for behavior detection.
The following illustrates, in a target tracking scenario, how the video analysis device 206 performs the first video analysis task on the first video based on the action logic and the interaction data of the second video.
Specifically, the action logic issued by the management device 202 and received by the video analysis device 206 is as follows; in the original publication, the example is reproduced only as an image.
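Since that example is unavailable here, the sketch below reconstructs plausible action logic from the surrounding description; the field names are assumptions, and the 100 s start time and 0.93 threshold are taken from the text.

```python
# Reconstructed sketch of action logic; field names are assumptions.
action_logic = {
    "task_type": "face_monitoring",  # task type parameter
    "start_after_seconds": 100,      # time parameter: begin analysis 100 s after receipt
    "similarity_threshold": 0.93,    # condition parameter for same-target decisions
}
```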
According to the above action logic, the video analysis device 206 may detect the target in the first video after 100 s and obtain information such as the attributes of the target, the feature value of the target, and the target image. The video analysis device 206 determines the similarity between the target in the first video and the target to be tracked in the second video by comparing their information (attributes, feature values, or target images) and compares the similarity with the similarity threshold, thereby determining whether they are the same target. When the target appearing in the second video and the target appearing in the first video are the same target, the video analysis device 206 may generate a trajectory of the target according to the video source positions (specifically, the positions of the cameras 100), thereby implementing target detection and tracking.
In some possible implementations, a subscription relationship exists between a third camera and the first camera; the process of establishing this subscription relationship is as described above. The video analysis device 206 may also generate interaction data of the first video according to the analysis result of the first video. The interaction data of the first video may include information of targets appearing in the first video, such as any one or more of attributes of the target, feature values of the target, or an image of the target. The video analysis device 206 may then send the interaction data of the first video to the management device 202. In this way, when the third video captured by the third camera is analyzed, the video analysis device 206 (or the third camera) may perform a third video analysis task on the third video in combination with the interaction data of the first video.
Based on the above description, in the video analysis method provided in the embodiment of the present application, the execution stage of a video analysis task already combines the interactive data obtained from other video analysis tasks, so that data associated with that interactive data is obtained in advance. Hence, when the application triggers tasks such as target search and target tracking, the video analysis can proceed directly from the associated data stored during the execution stage, without searching over and computing a massive number of feature values. On the one hand, explosive resource consumption, and thus explosive resource requirements, are avoided: the requirements can be met without configuring a high-performance computing cluster and high-capacity resources, which improves resource utilization and reduces cost. On the other hand, analyzing the first video based on the interactive data of the second video realizes collaborative analysis, narrows the analysis range, reduces the amount of calculation, and improves analysis efficiency.
The embodiment shown in fig. 6 describes a video analysis method from the perspective of the video analysis apparatus 206, and when the cameras are smart cameras, the video analysis method can also be executed by each camera itself (for example, the first camera). Specifically, a first camera acquires a first video, then acquires interactive data of a second video, and then executes a first video analysis task on the first video based on the interactive data of the second video. In some embodiments, the video analysis method may also be performed by the camera in cooperation with the video analysis device 206.
The first camera may obtain the interaction data of the second video from the management apparatus 202, for example, by receiving the interaction data of the second video sent by the management apparatus 202 based on the subscription relationship between the first camera and the second camera. In some embodiments, the second camera may be an intelligent camera that performs the second video analysis task on the second video to obtain the interactive data of the second video; correspondingly, the first camera may also acquire the interactive data of the second video directly from the second camera.
In some possible implementations, the first camera may further obtain action logic, for example, by receiving the action logic sent by the management apparatus 202, and then perform the first video analysis task on the first video based on the action logic and the interaction data of the second video, so as to obtain the analysis result of the first video.
When receiving the action logic sent by the management device 202, the first camera may further detect whether it has the function corresponding to the action logic. If so, the first camera may execute the first video analysis task on the first video based on the action logic and the interaction data of the second video; if not, the first camera may forward the action logic to the video analysis apparatus 206, and the video analysis apparatus 206 performs the first video analysis task on the first video based on the action logic and the interaction data of the second video.
Further, a subscription relationship exists between the third camera and the first camera. The first camera can also generate interactive data of the first video according to the analysis result of the first video, and then sends the interactive data of the first video to the third camera. Therefore, when the third camera analyzes the third video shot by the third camera, the third camera can perform collaborative analysis by combining the interactive data of the first video. Thus, continuous tracking of the target can be realized.
It should be further noted that the establishment of the subscription relationship and the issuing of the action logic in the embodiment shown in fig. 6 are exemplified by the implementation of the management apparatus 202. When the cameras 100 are connected in a peer-to-peer (P2P) manner, the cameras 100 may directly subscribe to interactive data and issue action logic.
In some possible implementations, the establishment of the subscription relationship and the issuing of the action logic may also be implemented by a video analytics application (e.g., application 300 in fig. 1). The video analytics application may provide an action logic configuration interface, which may be a GUI or CUI, through which a user may configure action logic. In this way, the video analysis application may obtain the action logic and then send it to the corresponding camera, or send it to the video analysis apparatus 206. Similarly, the video analytics application may establish a subscription relationship between the cameras 100 in a pre-configured mode or a passive execution mode.
The video analysis method is described in detail above, and the management method of video analysis is described next from the perspective of the management apparatus (e.g., the management apparatus 202).
Referring to fig. 8, a flow chart of a method for managing video analytics is shown, the method comprising:
S802: The management device 202 establishes a subscription relationship between a first camera and a second camera of the plurality of cameras.
The management apparatus 202 may establish a subscription relationship between the first camera and the second camera through multiple modes, for example, the subscription relationship between the first camera and the second camera may be established through a preconfigured mode and/or a passive execution mode. The pre-configuration mode and the passive execution mode are described in detail below.
In a pre-configuration mode, the management device 202 receives a subscription instruction sent by the first camera, where the subscription instruction includes a subscription parameter, and then the management device 202 queries the second camera meeting the subscription parameter according to the subscription parameter, and establishes a subscription relationship between the first camera and the second camera. The subscription parameter may include a distance parameter.
Specifically, when the first camera or the video analysis device 206 starts the first video analysis task, the first camera may send a subscription instruction to the management device 202. The subscription instruction comprises subscription parameters, for example, distance parameters indicating that the distance from the first camera is less than the set distance. In this way, the management apparatus 202 may determine the second camera meeting the subscription parameter according to the position relationship between the cameras 100 and the distance parameter, so as to establish the subscription relationship between the first camera and the second camera.
Further, the management device 202 may further determine a second camera having a subscription relationship with the first camera more accurately on a distance basis in combination with the historical average vehicle speed and/or walking time.
In the passive execution mode, the management device 202 receives a subscription instruction sent by a manager, and establishes a subscription relationship between the first camera and the second camera according to the subscription instruction. In some scenarios, the administrator may generate a subscription instruction through the programmable interface, where the subscription instruction may include an identifier of the camera 100 that is to establish the subscription relationship, and then send the subscription instruction to the management apparatus 202, so that the management apparatus 202 may directly establish the subscription relationship between the first camera and the second camera based on the subscription instruction.
It should be noted that the management method of video analysis according to the embodiment of the present application does not necessarily execute the above S802 each time. For example, after the subscription relationship between the first camera and the second camera has been established, the management apparatus 202 may execute subsequent tasks based on that subscription relationship without re-establishing it for each task.
S804: the management apparatus 202 receives the interactive data of the second video.
The second video is a video shot by the second camera. The interactive data of the second video may be obtained by the second camera or the video analysis device 206 performing a second video analysis task on the second video. The management apparatus 202 may obtain the interaction data of the second video from the second camera or the video analysis apparatus 206.
The interaction data of the second video may include information of the target to be tracked appearing in the second video. The target to be tracked may be a designated target, or may also be a target determined by matching the target appearing in the second video with the monitoring target by the second camera or the video analysis device 206.
The interactive data type may be different depending on the video analysis task. The following illustrates the interactive data in connection with different video analysis tasks.
For example, when the second video analysis task is target detection and target tracking (person detection and tracking, vehicle detection and tracking, etc.), the interaction data of the second video may be any one or more of information such as an attribute, a feature value, and an image of the target to be tracked. For another example, when the second video analysis task is target counting statistics (crowd counting statistics, vehicle counting statistics, etc.), the interaction data of the second video may be one or more of information such as the number of targets or the density of the targets.
The above S802 and S804 may be executed in parallel or sequentially in a set order; for example, the management apparatus 202 may execute S802 first and then S804, or execute S804 first and then S802. This embodiment of the present application is described with S802 executed before S804 merely as an example.
S806: the management device 202 sends the interaction data of the second video to the first camera or the video analysis device 206 according to the subscription relationship.
When the first camera is a normal camera (without a video analysis function), the management apparatus 202 sends the interactive data of the second video to the video analysis apparatus 206 based on the subscription relationship between the first camera and the second camera. As such, the video analytics device 206 may perform a first video analytics task on a first video based on interaction data of a second video.
When the first camera is an intelligent camera (having a video analysis function), the management device 202 sends the interactive data of the second video to the first camera based on the subscription relationship between the first camera and the second camera. In some embodiments, the management apparatus 202 may also transmit the interaction data of the second video to the video analysis apparatus 206. As such, the first camera or video analytics device 206 may perform a first video analytics task on the first video based on the interaction data of the second video.
In some possible implementations, the management apparatus 202 may further send an action logic to the first camera or the video analysis apparatus, so that the first camera or the video analysis apparatus performs the first video analysis task on the first video based on the interaction data of the second video and the action logic.
Before the management device 202 sends the action logic to the first camera or the video analysis device, the management device 202 may obtain the action logic written by the manager through the programmable interface. In some embodiments, the action logic may also be configured by the video analytics application.
In order to make the technical solution of the present application clearer and easier to understand, the video analysis method according to the embodiment of the present application will be introduced from the perspective of interaction with a specific application scenario.
Referring to fig. 9, a flow chart of a video analysis method applied to a face-trajectory-based search scenario is shown. The method includes the following steps:
S902: The camera A transmits a subscription instruction to the management apparatus 202.
The subscription instruction includes a subscription parameter. The subscription parameter may be a distance parameter. For example, the distance parameter may be within 2km of the periphery, and the subscription instruction is used to instruct the management device 202 to establish a subscription relationship between the camera a and cameras within 2km of the periphery of the camera a.
S904: the management apparatus 202 queries for a camera satisfying the subscription parameter.
The management apparatus 202 may determine the cameras satisfying the subscription parameter according to the geographical position relationship of the cameras. When the management apparatus 202 locally stores the geographical position relationship of the cameras, the management apparatus 202 may locally search for the cameras that satisfy the subscription parameters. When the management apparatus 202 does not locally store the geographical position relationship of the camera, the management apparatus 202 may send an inquiry request to the location apparatus, and the location apparatus may return an inquiry response to the inquiry request to the management apparatus 202. The inquiry response carries the cameras meeting the subscription parameters.
S906: the management apparatus 202 completes the subscription relationship recording.
In this embodiment, it is assumed that camera B is a camera that satisfies the subscription parameters, for example, a camera within 2 km of camera A. Correspondingly, the management apparatus 202 may record the correspondence between camera A and camera B to establish the subscription relationship between camera A and camera B.
It should be noted that S902 to S906 are only one implementation manner of establishing the subscription relationship, and specifically, the subscription relationship is established through a pre-configuration mode. In other possible implementations of the embodiment of the present application, the management apparatus 202 may also establish the subscription relationship in other manners, for example, through a passive execution mode.
S908: and the camera B analyzes the video shot by the camera B to obtain an analysis result of the video.
The camera B is an intelligent camera and can execute a video analysis task on a video shot by the camera B, so that the video is analyzed, and an analysis result of the video shot by the camera B is obtained.
S909: the camera B reports the analysis result of the video shot by the camera B to the management device 202.
The analysis result includes information of the face F, for example, the feature value of the face F and the image of the face F. In some possible implementations, the analysis result may further include information of the human body F, such as gender and wearing information (for example, jacket color).
S910: the management apparatus 202 transmits the analysis result of the video shot by the camera B to the storage and search apparatus 204 to store the analysis result.
S911: the management apparatus 202 issues an operation logic to the camera a.
Specifically, the management device 202 may receive action logic written by a manager through a programmable interface provided by the management device 202. When the management apparatus 202 detects that the similarity between the face F and a monitored face reaches a preset threshold, it may send the action logic to a camera that has a subscription relationship with camera B, such as camera A. In some embodiments, the management device 202 may also issue the action logic to camera A based on an indication from camera B.
It should be noted that the management device 202 may issue corresponding action logic according to actual requirements. For example, when camera A has a good face capture capability, the management device 202 may issue action logic for instructing face monitoring; when camera A has a good human body capture capability, the management device 202 may issue action logic for instructing human body monitoring.
S912: the management apparatus 202 issues interactive data including information of the face F to the camera a.
Specifically, the management device 202 issues the interactive data of the video shot by camera B to camera A based on the subscription relationship between camera A and camera B. The management device 202 may derive this interactive data from part or all of the information in the analysis result of the video shot by camera B. In some embodiments, the interaction data includes information of the face F.
S911 and S912 may be executed in parallel, or may be executed in sequence according to a set sequence. For example: the management device 202 may issue the action logic first and then issue the interactive data; or the management device 202 may issue the interactive data first and then issue the action logic.
S914: the camera A caches action logic and interaction data including face F information.
Camera A caches the action logic and the interaction data for use in subsequent comparisons. Camera A may use the cached action logic and interaction data based on a certain policy, such as a timeliness policy. In some possible implementations, camera A may use the cached action logic and interaction data after a set startup time and delete them when a set deletion time arrives. For example, if the startup time is 10 minutes and the deletion time is 10 hours, camera A may use the action logic and the interaction data once they have been cached for 10 minutes, and delete them once they have been cached for 10 hours.
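A sketch of such a timeliness policy, using the 10-minute startup time and 10-hour deletion time from the example (the class layout is an assumption):

```python
# Sketch: timeliness policy for cached action logic and interaction data.
import time

class CachedEntry:
    def __init__(self, payload, start_after_s=10 * 60, delete_after_s=10 * 3600):
        self.payload = payload
        self.cached_at = time.time()
        self.start_after_s = start_after_s    # usable once cached for 10 minutes
        self.delete_after_s = delete_after_s  # deleted once cached for 10 hours

    def usable(self) -> bool:
        age = time.time() - self.cached_at
        return self.start_after_s <= age < self.delete_after_s

    def expired(self) -> bool:
        return time.time() - self.cached_at >= self.delete_after_s
```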
S916: Camera A analyzes the video shot by it according to the action logic to obtain information of the persons in the video.
Camera A is an intelligent camera and can execute a video analysis task on the video shot by it based on the action logic issued by the management device 202, thereby analyzing the video and obtaining information of the persons appearing in it. The information of a person may include face information; for example, the information of a person in camera A's video may include the feature value of the face G and the image of the face G. In some possible implementations, the information may further include information of the human body G, such as gender and jacket color.
Specifically, the action logic includes at least one of a task type parameter, a time parameter, and a condition parameter for performing the video analysis task. In this example, it is assumed that the task type is face monitoring, the time for starting the analysis is 100 s after receipt (time parameter), and the similarity threshold is 0.95 (condition parameter). Based on the task type and the start time, camera A calls a face detection model to identify faces in the video it shoots and obtain face information. The face information includes information of the face G, such as the feature value of the face G and the image of the face G.
S918: Camera A determines that the face F and the face G correspond to the same person according to the similarity between the face F and the face G.
Specifically, camera A may calculate the distance between the feature value of the face F and the feature value of the face G, thereby determining the similarity between the face F and the face G, compare the similarity with the similarity threshold, and determine that the face F and the face G belong to the same person when the similarity is greater than or equal to the similarity threshold.
S920: Camera A reports the analysis result of the video shot by camera A to the management apparatus 202.
The analysis result of the video shot by camera A includes information of the face G, for example, the feature value of the face G and the image of the face G. The analysis result also includes the association relationship between the face F and the face G.
S922: the management apparatus 202 transmits the analysis result of the video shot by the camera a to the storage and search apparatus 204 to cause the storage and search apparatus 204 to store the analysis result.
The analysis result carries the association relationship between the face F and the face G. In the embodiment of the present application, it is assumed that the face F and the face G correspond to the same person.
S924: the application 300 sends a track search request to the storage and search apparatus 204.
Specifically, the application 300 may generate a track search request in response to a track search operation triggered by a user, where the track search request may carry a face image of a monitoring target, such as a suspect, and then the application 300 sends the track search request to the storage and search apparatus 204.
S926: the storage and search means 204 generates a trajectory based on the analysis result of the video captured by the camera a and the analysis result of the video captured by the camera B.
Specifically, the storage and search device 204 may perform a search based on the face image carried in the trajectory search request. When the face image matches the image of the face F, the storage and search device 204 may connect the geographic positions of camera A and camera B into a track according to the association relationship between the face F and the face G.
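A minimal sketch of this track-forming step follows; the data layout is an assumption, with detections linked by the face F / face G association and ordered in time.

```python
# Sketch: connect the geographic positions of the matched cameras into a track.
def build_track(matched_detections):
    ordered = sorted(matched_detections, key=lambda d: d["timestamp"])
    return [d["camera_position"] for d in ordered]

# Hypothetical usage:
# build_track([
#     {"camera_position": (31.2304, 121.4737), "timestamp": 1000},  # camera B sees face F
#     {"camera_position": (31.2400, 121.4800), "timestamp": 1400},  # camera A sees face G
# ])  # -> [(31.2304, 121.4737), (31.2400, 121.4800)]
```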
S928: the store and search means 204 sends the tracks to the application 300.
S930: the application 300 presents the track to the user.
In some possible implementations, a camera may have built-in action logic. For example, camera A may have built-in action logic, in which case the video analysis method according to the embodiment of the present application may omit the above S911; camera A then performs the video analysis task on the video it shoots based on the built-in action logic.
In some possible implementations, a camera may also send action logic to other cameras while executing its own processing logic, so that the cameras cooperate on certain tasks. For example, after spotting a monitored face, camera B acquires the face and body images of the person at that moment and notifies peripheral cameras, such as camera A, to track the person collaboratively; specifically, camera B sends the interactive data and the action logic to camera A. When relatively few cameras are positioned for face capture (or when a usable face image cannot be captured under some conditions), more cameras can still monitor the human body, and collaborative analysis using those cameras allows the person to be tracked more effectively.
Based on the above description, it can be seen that the embodiment of the present application customizes the associated area by establishing subscription relationships and constructs networked trajectory tracking through collaborative analysis of the videos of the associated area. On the one hand, massive feature-value search and comparison operations are avoided, which avoids explosive resource consumption and thus explosive resource requirements; on the other hand, the reduced amount of calculation improves analysis efficiency.
Next, referring to the flow chart of a video analysis method shown in fig. 10, the method is applied to a specific behavior detection and alarm scenario, where the specific behavior may be not wearing a mask. The method specifically includes the following steps:
S1002: The management device 202 receives action logic written by a manager through a programming interface.
The management device 202 provides a programming interface, and the manager may write the action logic in a programming language based on this interface. The action logic here is specifically logic for detecting a specific behavior and raising an alarm; the specific behavior may be not wearing a mask. The management device 202 may receive the action logic written by the manager through the programming interface. In some embodiments, the management device 202 may also receive action logic configured by the manager in a configuration manner.
S1004: the management apparatus 202 specifies the camera corresponding to the camera a based on the camera position information.
Specifically, the management apparatus 202 may determine, according to the camera position information, a camera for crowd counting statistics and a camera for detecting mask-wearing behavior, so as to establish a subscription relationship between the cameras. The camera used for crowd counting statistics can be a high-point camera, whose observation area is wide but whose picture cannot resolve details, whereas the camera used for detecting mask-wearing behavior can clearly show details in the picture.
In the embodiment of the present application, it is assumed that the cameras in the monitoring area include camera A and camera B. Camera A is a regional high-point camera used for crowd counting statistics, and camera B is used for detecting mask-wearing behavior.
S1006: the management apparatus 202 completes the subscription relationship recording.
Specifically, the management apparatus 202 may record a correspondence relationship between the camera a and the camera B to establish a subscription relationship between the camera a and the camera B. In this way, the management apparatus 202 may forward the interactive data based on the subscription relationship, thereby implementing data interaction between the camera a and the camera B.
S1008: the camera A analyzes the video shot by the camera A to obtain an analysis result of the video.
The camera A is an intelligent camera and can execute a video analysis task on a video shot by the camera A, so that the video is analyzed, and a video analysis result is obtained. The analysis results include the number of people in the video.
S1010: the camera a reports the analysis result of the video shot by the camera a to the management apparatus 202.
S1011: the management apparatus 202 issues an operation logic to the camera B.
Specifically, the management device 202 may receive action logic written by a manager through a programmable interface provided by the management device 202. Wherein the action logic comprises at least one of a task type parameter, a time parameter and a condition parameter for executing the video analysis task. In this example, it is assumed that the task type is specific behavior detection, such as mask wearing behavior detection. The condition parameters include trigger conditions of the task. Assume in this example that the triggering condition is that the number of people in the video is greater than a first threshold.
S1012: The management device 202 sends the interactive data of the video shot by camera A to camera B.
The interactive data of the video shot by camera A is derived from the analysis result of that video and may be part or all of the information in the analysis result. In some embodiments, the interaction data of the video shot by camera A may include the number of people in the video.
S1014: the camera B judges whether the number of people in the video is larger than a first threshold value or not based on the action logic. If yes, S1016 is performed.
S1016: Camera B analyzes the video shot by camera B based on the action logic so as to detect persons not wearing a mask.
The camera B is an intelligent camera and can execute a video analysis task on a video shot by the camera B, so that the video is analyzed, and an analysis result of the video shot by the camera B is obtained. The analysis result includes information of the person who does not wear the mask. The information of the person not wearing the mask may include any one or more of attributes, feature values and face images of the person not wearing the mask.
S1018: the camera B reports the analysis result including the information of the person who does not wear the mask to the management device 202.
S1020: the management apparatus 202 transmits the analysis result including the information of the person who does not wear the mask to the application 300.
S1022: the application 300 gives an alarm according to information of persons who do not wear the mask.
Specifically, the application 300 may generate an alarm message based on the information of the person not wearing the mask and typically presents the alarm information as a prompt. In some embodiments, the application 300 may also broadcast the information of the person not wearing the mask to implement the alarm prompt.
This embodiment is described with the management apparatus 202 forwarding the interactive data and issuing the action logic. In other possible implementations of the embodiment of the application, camera A may also directly send the action logic and the interaction data including the number of people in the video to camera B, and camera B performs the step of judging whether the number of people in the video is greater than the first threshold based on the action logic, so as to determine whether to trigger the mask-wearing behavior detection.
In some possible implementations, after camera A reports the analysis result of the video to the management device 202, the management device 202 may first perform the step of determining whether the number of people in the video is greater than the first threshold; if so, the management device 202 issues the action logic to camera B, and camera B detects mask-wearing behavior in the video it shoots based on the action logic.
It should be noted that if the number of people in the video stays below the first threshold for a certain time, camera B may stop the video analysis, for example, stop the detection of mask-wearing behavior, so as to release resources. Specifically, camera A or the management device 202 may instruct camera B to stop detecting mask-wearing behavior by sending action logic.
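The following sketch illustrates such a trigger/stop gate; the threshold value and the quiet period are assumptions (the text only requires the count to exceed a first threshold to start, and to stay below it for a certain time to stop):

```python
# Sketch: gate the mask-wearing detection task on the crowd count from camera A.
FIRST_THRESHOLD = 50   # assumed value of the first threshold
QUIET_PERIOD_S = 300   # assumed time the count must stay below the threshold before stopping

def update_mask_detection(running: bool, person_count: int, seconds_below_threshold: float) -> bool:
    """Return the new running state of the mask-wearing detection task."""
    if person_count > FIRST_THRESHOLD:
        return True                                   # trigger condition met: start (or keep) detection
    if running and seconds_below_threshold >= QUIET_PERIOD_S:
        return False                                  # stop detection to release resources
    return running
```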
It should be noted that, in the embodiments shown in fig. 9 and fig. 10, camera A and camera B are both described as intelligent cameras. In some possible implementations, camera A and camera B may also be common cameras, in which case the video analysis device 206 performs the video analysis on the videos shot by camera A and camera B, executing the video analysis tasks on the corresponding videos based on the action logic.
The video analysis method and the management method of video analysis provided by the embodiment of the present application are described in detail with reference to fig. 1 to 10, and the apparatuses and devices provided by the embodiment of the present application are described with reference to the drawings.
Referring to the schematic structural diagram of the video analysis apparatus 206 shown in fig. 11, the apparatus 206 includes:
a communication unit 2062, configured to obtain a first video captured by a first camera;
the communication unit 2062 is further configured to obtain interaction data of a second video, where the second video is obtained by shooting with a second camera;
an analyzing unit 2064, configured to perform a first video analysis task on the first video based on the interactive data of the second video, and obtain an analysis result of the first video, where the interactive data of the second video is obtained by performing a second video analysis task on the second video.
In some possible implementations, the communication unit 2062 is further configured to:
obtaining action logic, wherein the action logic comprises one or more of the following parameters: a task type parameter for executing the first video analysis task, a time parameter for executing the first video analysis task, and a condition parameter for executing the first video analysis task;
the analysis unit 2064 is specifically configured to:
performing the first video analysis task on the first video based on the action logic and the interaction data of the second video.
In some possible implementations, the communication unit 2062 is specifically configured to:
receiving the action logic sent by the management apparatus 202, wherein the action logic is written by a manager using a programming interface provided by the management apparatus 202.
In some possible implementations, the analysis result of the first video includes data associated with the interaction data of the second video.
In some possible implementations, the interaction data of the second video includes information of an object to be tracked appearing in the second video;
the analysis unit 2064 is specifically configured to:
identifying a target in the first video to obtain information of the target;
obtaining an analysis result of the first video based on the information of the target and the information of the object to be tracked in the second video, wherein the analysis result includes information of any target whose similarity to the object to be tracked in the second video meets a preset condition.
In some possible implementations, the analysis unit 2064 is specifically configured to:
identifying the target in the first video using a pre-trained AI model to obtain the information of the target.
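A minimal sketch of this matching step follows, assuming each identified target is represented by a feature vector produced by the pre-trained AI model; cosine similarity and the 0.8 threshold are illustrative choices for the preset condition, not values fixed by this application:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def match_tracked_object(first_video_targets, tracked, threshold=0.8):
    """Keep targets whose similarity to the object to be tracked meets the
    preset condition (here: cosine similarity >= threshold)."""
    return [t for t in first_video_targets
            if cosine_similarity(t["features"], tracked["features"]) >= threshold]

# Feature vectors would in practice come from the pre-trained AI model.
tracked = {"id": "object-to-track", "features": [0.12, 0.88, 0.30]}
targets = [{"id": "t1", "features": [0.10, 0.90, 0.31]},
           {"id": "t2", "features": [0.95, 0.05, 0.02]}]
print([t["id"] for t in match_tracked_object(targets, tracked)])  # -> ['t1']
```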
In some possible implementations, there is a subscription relationship between the first camera and the second camera, and the subscription relationship is pre-constructed by the management apparatus 202.
In some possible implementations, the analysis unit 2064 is further configured to:
generating interaction data of the first video based on the analysis result of the first video;
the communication unit 2062 is further configured to:
sending the interaction data of the first video to the management apparatus 202 or to a third camera, where a subscription relationship exists between the third camera and the first camera.
In some possible implementations, the video analysis apparatus 206 establishes a communication connection with the first camera and the second camera.
In some possible implementations, the video analysis apparatus 206 is applied to security monitoring, and the first video analysis task includes one or more of the following tasks: person detection and tracking, vehicle detection and tracking, people counting, vehicle counting, and specific behavior detection and alarm.
The video analysis apparatus 206 according to this embodiment of the present application may correspondingly perform the methods described in the embodiments of the present application. The foregoing and other operations and/or functions of the modules/units of the video analysis apparatus 206 implement the corresponding flows of the methods in the embodiment shown in fig. 6; for brevity, details are not repeated here.
Referring next to the schematic structural diagram of the management apparatus 202 shown in fig. 12, the management apparatus 202 is communicatively connected to a plurality of cameras, and the apparatus 202 includes:
the communication unit 2024 is configured to receive interaction data of a second video, where the second video is captured by a second camera;
the communication unit 2024 is further configured to send the interaction data of the second video to a first camera or to the video analysis apparatus, so that the first camera or the video analysis apparatus performs a first video analysis task on a first video based on the interaction data of the second video, where a subscription relationship exists between the first camera and the second camera, and the first video is captured by the first camera.
In some possible implementations, the apparatus 202 further includes:
a subscription unit 2022, configured to establish a subscription relationship between the first camera and the second camera among the plurality of cameras.
In some possible implementations, the subscription unit 2022 is specifically configured to:
receiving a subscription instruction sent by the first camera, wherein the subscription instruction comprises subscription parameters;
querying for the second camera that meets the subscription parameters, and establishing a subscription relationship between the first camera and the second camera.
In some possible implementations, the subscription unit 2022 is specifically configured to:
receiving a subscription instruction sent by a manager;
establishing a subscription relationship between the first camera and the second camera according to the subscription instruction.
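To make the two subscription paths concrete, the following sketch shows what the subscription unit's logic could look like; the camera attributes (such as location) and the registry structure are assumptions for the example:

```python
class SubscriptionUnit:
    """Illustrative sketch of subscription unit 2022's two code paths."""

    def __init__(self, cameras):
        self.cameras = cameras        # camera_id -> attribute dict (assumed)
        self.subscriptions = {}       # subscriber_id -> set of publisher ids

    def subscribe_by_parameters(self, subscriber_id, params):
        """Path 1: a camera sends a subscription instruction carrying
        subscription parameters; cameras matching them are looked up."""
        matches = [cid for cid, attrs in self.cameras.items()
                   if cid != subscriber_id
                   and all(attrs.get(k) == v for k, v in params.items())]
        self.subscriptions.setdefault(subscriber_id, set()).update(matches)
        return matches

    def subscribe_by_instruction(self, subscriber_id, publisher_id):
        """Path 2: a manager directly names the two cameras."""
        self.subscriptions.setdefault(subscriber_id, set()).add(publisher_id)

unit = SubscriptionUnit({"camA": {"location": "gate"},
                         "camB": {"location": "hall"}})
print(unit.subscribe_by_parameters("camB", {"location": "gate"}))  # ['camA']
unit.subscribe_by_instruction("camA", "camB")
```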
In some possible implementations, the communication unit 2024 is further configured to:
sending action logic to the first camera or the video analysis apparatus 206, so that the first camera or the video analysis apparatus 206 performs the first video analysis task on the first video based on the interaction data of the second video and the action logic.
In some possible implementations, the communication unit 2024 is further configured to:
before sending the action logic to the first camera or the video analysis apparatus, obtaining the action logic written by a manager through a programming interface, or obtaining the action logic configured by a user through a video analysis application.
The management apparatus 202 according to this embodiment of the present application may correspondingly perform the methods described in the embodiments of the present application. The foregoing and other operations and/or functions of the modules/units of the management apparatus 202 implement the corresponding flows of the methods in the embodiment shown in fig. 8; for brevity, details are not repeated here.
An embodiment of the present application further provides a device 1300. The device 1300 may be a server or a server cluster in a cloud environment, or a server or a server cluster in a local data center. The device 1300 is specifically configured to implement the functions of the video analysis apparatus 206 in the embodiment shown in fig. 11.
Fig. 13 provides a schematic structural diagram of the device 1300. As shown in fig. 13, the device 1300 includes a bus 1301, a processor 1302, a communication interface 1303, and a memory 1304. The processor 1302, the memory 1304, and the communication interface 1303 communicate via the bus 1301.
The bus 1301 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 13, but this does not mean that there is only one bus or one type of bus.
The processor 1302 may be any one or more of a central processing unit (CPU), a graphics processing unit (GPU), a microprocessor (MP), a digital signal processor (DSP), and the like.
The communication interface 1303 is used for communication with external devices. For example, the communication interface 1303 obtains the first video shot by the first camera, obtains the interaction data of the second video, receives action logic sent by the management apparatus 202, or sends the interaction data of the first video to the management apparatus 202.
The memory 1304 may include a volatile memory, such as a random access memory (RAM). The memory 1304 may also include a non-volatile memory, such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid state drive (SSD).
The memory 1304 stores executable code, and the processor 1302 executes the executable code to perform the video analysis method described above.
Specifically, when the embodiment shown in fig. 11 is implemented and the units of the video analysis apparatus 206 described in that embodiment are implemented in software, the software or program code required to perform the functions of the communication unit 2062 and the analysis unit 2064 in fig. 11 may be partially or entirely stored in the memory 1304. The processor 1302 executes the program code corresponding to each unit stored in the memory 1304 to perform the video analysis method.
An embodiment of the present application further provides a device 1400. The device 1400 may be a server or a server cluster in a cloud environment, or a server or a server cluster in a local data center. The device 1400 is specifically configured to implement the functions of the management apparatus 202 in the embodiment shown in fig. 12.
Fig. 14 provides a schematic structural diagram of the device 1400. As shown in fig. 14, the device 1400 includes a bus 1401, a processor 1402, a communication interface 1403, and a memory 1404. The processor 1402, the memory 1404, and the communication interface 1403 communicate via the bus 1401.
For the specific implementations of the bus 1401, the processor 1402, the communication interface 1403, and the memory 1404, refer to the description of the embodiment shown in fig. 13. Specifically, when the embodiment shown in fig. 12 is implemented and the modules or units of the management apparatus 202 described in that embodiment are implemented in software, the software or program code required to perform the functions of the subscription unit 2022 and the communication unit 2024 in fig. 12 may be partially or entirely stored in the memory 1404. The processor 1402 executes the program code corresponding to each unit stored in the memory 1404 to perform the video analysis management method.
An embodiment of the present application further provides a camera 100, which may be an intelligent camera. The camera 100 is specifically configured to implement the video analysis method provided in the embodiments of the present application.
Fig. 15 provides a schematic structural diagram of the camera 100. As shown in fig. 15, the camera 100 includes a bus 1501, a processor 1502, a communication interface 1503, a memory 1504, and an image sensor 1505. The processor 1502, the memory 1504, the communication interface 1503, and the image sensor 1505 communicate with each other via the bus 1501.
For the specific implementations of the bus 1501, the processor 1502, the communication interface 1503, and the memory 1504, refer to the description of the embodiment shown in fig. 13. The image sensor 1505 is a photosensitive element that converts the optical image on its photosensitive surface into a proportional electrical signal, thereby enabling video capture. The image sensor 1505 may be one of different types of sensors, such as a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) sensor.
Specifically, the image sensor 1505 captures the first video, and the processor 1502 obtains the first video captured by the image sensor 1505 through the bus 1501. The communication interface 1503 obtains the interaction data of the second video, where the second video is captured by a second camera, and transmits the interaction data to the processor 1502 through the bus 1501. The processor 1502 executes the program code in the memory 1504 to perform the first video analysis task on the first video based on the interaction data of the second video and obtain the analysis result of the first video.
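For illustration only, the camera-side flow just described might be organized as in the sketch below; the class, its method names, and the stub frame are assumptions for the example, not the internal structure of the camera 100:

```python
from collections import deque

class SmartCamera:
    """Illustrative camera-side flow: frames from the image sensor are
    combined with interaction data received over the network."""

    def __init__(self):
        self.inbox = deque()  # interaction data delivered by the comm interface

    def receive_interaction_data(self, data):
        self.inbox.append(data)

    def on_new_frame(self, frame):
        """Called for each frame captured by the image sensor."""
        if self.inbox:
            interaction = self.inbox.popleft()
            return self.analyze(frame, interaction)  # first video analysis task
        return None

    def analyze(self, frame, interaction):
        # Placeholder: in practice the processor would run a pre-trained
        # AI model on `frame` here.
        return {"analyzed": True, "based_on": interaction}

cam = SmartCamera()
cam.receive_interaction_data({"person_count": 12})
print(cam.on_new_frame(frame=b"raw-pixels"))
```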
An embodiment of the present application further provides a computer-readable storage medium. The computer-readable storage medium may be any usable medium that a computing device can store, or a data storage device, such as a data center, containing one or more usable media. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state drive). The computer-readable storage medium includes instructions that instruct the device 1300 to perform the video analysis method applied to the video analysis apparatus 206, instruct the device 1400 to perform the video analysis management method applied to the management apparatus 202, or instruct the camera 100 to perform the video analysis method.
An embodiment of the present application further provides a computer program product including one or more computer instructions. When the computer instructions are loaded and executed on a computing device, the processes or functions described in the embodiments of the present application are produced in whole or in part.
The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another. For example, the computer instructions may be transmitted from one website, computer, or data center to another website, computer, or data center in a wired manner (e.g., coaxial cable, optical fiber, or digital subscriber line (DSL)) or a wireless manner (e.g., infrared, radio, or microwave).
When the computer program product is executed by a computer, the computer performs any one of the foregoing video analysis methods or video analysis management methods. The computer program product may be a software installation package; if any of the foregoing methods needs to be used, the software installation package may be downloaded and executed on a computer.
The descriptions of the flows or structures corresponding to the above drawings each have their own emphasis; for a part not described in detail in one flow or structure, refer to the related descriptions of other flows or structures.

Claims (36)

1. A method of video analysis, the method comprising:
acquiring a first video shot by a first camera;
acquiring interaction data of a second video, wherein the second video is captured by a second camera;
executing a first video analysis task on the first video based on the interaction data of the second video to obtain an analysis result of the first video, wherein the interaction data of the second video is obtained by executing a second video analysis task on the second video.
2. The method of claim 1, further comprising:
obtaining action logic, wherein the action logic comprises one or more of the following parameters: a task type parameter for executing the first video analysis task, a time parameter for executing the first video analysis task, and a condition parameter for executing the first video analysis task;
the executing a first video analysis task on the first video based on the interaction data of the second video comprises:
executing the first video analysis task on the first video based on the action logic and the interaction data of the second video.
3. The method of claim 2, wherein the obtaining action logic comprises: receiving the action logic sent by a management device, wherein the action logic is written by a manager using a programming interface provided by the management device.
4. The method of any one of claims 1-3, wherein the analysis result of the first video includes data associated with the interaction data of the second video.
5. The method according to any one of claims 1-4, wherein the interaction data of the second video comprises information of an object to be tracked appearing in the second video;
the executing a first video analysis task on the first video based on the interaction data of the second video to obtain an analysis result of the first video comprises:
identifying a target in the first video to obtain information of the target;
obtaining an analysis result of the first video based on the information of the target and the information of the object to be tracked in the second video, wherein the analysis result comprises information of any target whose similarity to the object to be tracked in the second video meets a preset condition.
6. The method of claim 5, wherein the identifying a target in the first video to obtain information of the target comprises:
identifying the target in the first video using a pre-trained artificial intelligence (AI) model to obtain the information of the target.
7. The method according to any one of claims 1-6, wherein there is a subscription relationship between the first camera and the second camera, the subscription relationship being pre-constructed by a management device.
8. The method of any one of claims 1-7, wherein the method is performed by the first camera, and the interaction data of the second video is obtained by the second camera executing a second video analysis task on the second video.
9. The method of claim 8, wherein the acquiring interaction data of the second video comprises:
the first camera acquires the interaction data of the second video from the second camera; or
the first camera acquires the interaction data of the second video from a management device.
10. The method according to any one of claims 1-9, further comprising:
generating interaction data of the first video based on the analysis result of the first video;
sending the interaction data of the first video to a management device or a third camera, wherein a subscription relationship exists between the third camera and the first camera.
11. The method according to any one of claims 1-10, wherein the method is performed by a video analysis device that establishes a communication connection with the first camera and the second camera.
12. The method according to any one of claims 1-11, wherein the method is applied to security monitoring, and the first video analysis task comprises one or more of the following tasks: person detection and tracking, vehicle detection and tracking, people counting, vehicle counting, and specific behavior detection and alarm.
13. A video analysis management method, applied to a management device, wherein the management device is communicatively connected to a plurality of cameras, and the method comprises:
the management device receives interaction data of a second video, wherein the second video is captured by a second camera;
the management device sends the interaction data of the second video to a first camera or a video analysis device, so that the first camera or the video analysis device executes a first video analysis task on a first video based on the interaction data of the second video, wherein a subscription relationship exists between the first camera and the second camera, and the first video is captured by the first camera.
14. The method of claim 13, wherein before the management device receives the interaction data of the second video, the method further comprises:
the management device establishes a subscription relationship between the first camera and the second camera among the plurality of cameras.
15. The method of claim 14, wherein the establishing, by the management device, a subscription relationship between the first camera and the second camera among the plurality of cameras comprises:
the management device receives a subscription instruction sent by the first camera, wherein the subscription instruction comprises subscription parameters;
the management device queries for the second camera that meets the subscription parameters, and establishes the subscription relationship between the first camera and the second camera.
16. The method of claim 14, wherein the establishing, by the management device, a subscription relationship between the first camera and the second camera among the plurality of cameras comprises:
the management device receives a subscription instruction sent by a manager;
the management device establishes the subscription relationship between the first camera and the second camera according to the subscription instruction.
17. The method according to any one of claims 13-16, further comprising:
the management device sends action logic to the first camera or the video analysis device so that the first camera or the video analysis device executes the first video analysis task on the first video based on the interaction data of the second video and the action logic.
18. The method of claim 17, wherein before the management device sends the action logic to the first camera or the video analysis device, the method further comprises:
the management device obtains the action logic written by a manager through a programming interface of the management device, or the management device obtains the action logic configured by a user through a video analysis application.
19. A video analysis apparatus, comprising:
a communication unit, configured to acquire a first video captured by a first camera and acquire interaction data of a second video, wherein the second video is captured by a second camera;
an analysis unit, configured to execute a first video analysis task on the first video based on the interaction data of the second video to obtain an analysis result of the first video, wherein the interaction data of the second video is obtained by executing a second video analysis task on the second video.
20. The apparatus of claim 19, wherein the communication unit is further configured to:
obtaining action logic, wherein the action logic comprises one or more of the following parameters: a task type parameter for executing the first video analysis task, a time parameter for executing the first video analysis task, and a condition parameter for executing the first video analysis task;
the analysis unit is specifically configured to:
performing the first video analysis task on the first video based on the action logic and the interaction data of the second video.
21. The apparatus according to claim 20, wherein the communication unit is specifically configured to:
receiving the action logic sent by a management device, wherein the action logic is written by a manager using a programming interface provided by the management device.
22. The apparatus according to any one of claims 19 to 21, wherein the analysis result of the first video comprises data associated with the interaction data of the second video.
23. The apparatus according to any one of claims 19-22, wherein the interaction data of the second video comprises information of an object to be tracked appearing in the second video;
the analysis unit is specifically configured to:
identifying a target in the first video to obtain information of the target;
obtaining an analysis result of the first video based on the information of the target and the information of the object to be tracked in the second video, wherein the analysis result comprises information of any target whose similarity to the object to be tracked in the second video meets a preset condition.
24. The apparatus according to claim 23, wherein the analysis unit is specifically configured to:
identifying the target in the first video using a pre-trained artificial intelligence (AI) model to obtain the information of the target.
25. The apparatus according to any one of claims 19-24, wherein a subscription relationship exists between the first camera and the second camera, and the subscription relationship is pre-constructed by a management device.
26. The apparatus according to any one of claims 19-25, wherein
the analysis unit is further configured to:
generating interaction data of the first video based on the analysis result of the first video;
the communication unit is further configured to:
sending the interaction data of the first video to a management device or a third camera, wherein a subscription relationship exists between the third camera and the first camera.
27. A management device communicatively coupled to a plurality of cameras, the management device comprising:
a communication unit, configured to receive interaction data of a second video, wherein the second video is captured by a second camera; and send the interaction data of the second video to a first camera or a video analysis device, so that the first camera or the video analysis device executes a first video analysis task on the first video based on the interaction data of the second video, wherein a subscription relationship exists between the first camera and the second camera, and the first video is captured by the first camera.
28. The management device of claim 27, further comprising:
a subscription unit, configured to establish a subscription relationship between the first camera and the second camera among the plurality of cameras.
29. The management device according to claim 28, wherein the subscription unit is specifically configured to:
receiving a subscription instruction sent by the first camera, wherein the subscription instruction comprises subscription parameters;
querying for the second camera that meets the subscription parameters, and establishing a subscription relationship between the first camera and the second camera.
30. The management device according to claim 28, wherein the subscription unit is specifically configured to:
receiving a subscription instruction sent by a manager;
establishing a subscription relationship between the first camera and the second camera according to the subscription instruction.
31. The management device according to any one of claims 27 to 30, wherein the communication unit is further configured to:
sending action logic to the first camera or the video analysis device to enable the first camera or the video analysis device to execute the first video analysis task on the first video based on the interaction data of the second video and the action logic.
32. The management device according to any one of claims 27 to 31, wherein the communication unit is further configured to:
acquiring action logic written by a manager through a programming interface of the management device, or acquiring action logic configured by a user through a video analysis application.
33. An apparatus, characterized in that the apparatus comprises a processor and a memory, wherein the memory stores executable program code, and the processor reads the executable program code stored in the memory to implement the functions of the video analysis apparatus of any one of claims 19-26 or the functions of the management device of any one of claims 27-32.
34. A camera, comprising a processor, a memory, and an image sensor, wherein the image sensor is configured to acquire a first video, the memory stores executable program code, and the processor reads the executable program code to implement the functions of the video analysis apparatus of any one of claims 19-26.
35. A computer-readable storage medium, characterized in that the medium stores instructions that, when run on a device, implement the functions of the video analysis apparatus of any one of claims 19-26 or the functions of the management device of any one of claims 27-32.
36. A computer program product comprising instructions which, when executed by an apparatus, implement the functions of the video analysis apparatus of any one of claims 19-26 or the functions of the management device of any one of claims 27-32.
CN202110224956.XA 2020-03-09 2021-03-01 Video analysis method, video analysis management method and related equipment Pending CN113378616A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/079312 WO2021180004A1 (en) 2020-03-09 2021-03-05 Video analysis method, video analysis management method, and related device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2020101591165 2020-03-09
CN202010159116 2020-03-09

Publications (1)

Publication Number Publication Date
CN113378616A true CN113378616A (en) 2021-09-10

Family

ID=77569647

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110224956.XA Pending CN113378616A (en) 2020-03-09 2021-03-01 Video analysis method, video analysis management method and related equipment

Country Status (2)

Country Link
CN (1) CN113378616A (en)
WO (1) WO2021180004A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113869778B (en) * 2021-10-13 2023-06-27 深圳联和智慧科技有限公司 Unmanned aerial vehicle river course inspection method and system based on city management
CN114743157B (en) * 2022-03-30 2023-03-03 中科融信科技有限公司 Pedestrian monitoring method, device, equipment and medium based on video
CN116132814B (en) * 2022-04-01 2023-12-22 港珠澳大桥管理局 Submarine immersed tube splicing structure information acquisition equipment, acquisition method, device and equipment
CN115174941B (en) * 2022-07-06 2023-03-24 灵羲科技(北京)有限公司 Real-time motion performance analysis and real-time data sharing method based on multiple paths of video streams
CN115984675B (en) * 2022-12-01 2023-10-13 扬州万方科技股份有限公司 System and method for realizing multipath video decoding and AI intelligent analysis

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8279283B2 (en) * 2005-11-18 2012-10-02 Utc Fire & Security Americas Corporation, Inc. Methods and systems for operating a video surveillance system
CN101610412B (en) * 2009-07-21 2011-01-19 北京大学 Visual tracking method based on multi-cue fusion
CN110866480B (en) * 2019-11-07 2021-09-17 浙江大华技术股份有限公司 Object tracking method and device, storage medium and electronic device

Also Published As

Publication number Publication date
WO2021180004A1 (en) 2021-09-16


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20220214

Address after: 550025 Huawei cloud data center, jiaoxinggong Road, Qianzhong Avenue, Gui'an New District, Guiyang City, Guizhou Province

Applicant after: Huawei Cloud Computing Technology Co.,Ltd.

Address before: 518129 Bantian HUAWEI headquarters office building, Longgang District, Guangdong, Shenzhen

Applicant before: HUAWEI TECHNOLOGIES Co.,Ltd.

SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination