CN112836565B - Monitoring video processing method and device, monitoring system, storage medium and terminal


Info

Publication number
CN112836565B
Authority
CN
China
Prior art keywords
information
edge computing
computing node
appearing
objects
Prior art date
Legal status
Active
Application number
CN202011360214.1A
Other languages
Chinese (zh)
Other versions
CN112836565A (en)
Inventor
徐骁翔
都大龙
Current Assignee
Shanghai Xinyi Intelligent Technology Co ltd
Original Assignee
Shanghai Xinyi Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Xinyi Intelligent Technology Co ltd filed Critical Shanghai Xinyi Intelligent Technology Co ltd
Priority to CN202011360214.1A
Publication of CN112836565A
Application granted
Publication of CN112836565B

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70 Information retrieval of video data
    • G06F 16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/783 Retrieval using metadata automatically derived from the content
    • G06F 16/7837 Retrieval using objects detected or recognised in the video content
    • G06F 16/784 Retrieval where the detected or recognised objects are people
    • G06F 16/7867 Retrieval using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G06F 18/24 Classification techniques
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G06V 40/172 Classification, e.g. identification
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/181 Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Library & Information Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • General Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

A monitoring video processing method and device, a monitoring system, a storage medium and a terminal are provided. The monitoring video processing method comprises the following steps: receiving object information, reported by each edge computing node, of the objects within its monitoring range, wherein the object information comprises feature information and auxiliary information; classifying all the objects according to the feature information of each object to obtain the auxiliary information of each object from each edge computing node; integrating the auxiliary information of each object from each edge computing node, and taking the integrated auxiliary information as the auxiliary information of each object; and storing the object information of each object. With this scheme, both the time consumption of data processing and the accuracy of classification results can be taken into account.

Description

Monitoring video processing method and device, monitoring system, storage medium and terminal
Technical Field
Embodiments of the present invention relate to the field of video monitoring, and in particular to a monitoring video processing method and device, a storage medium and a terminal.
Background
In some scenes, cameras are generally arranged for monitoring; the cameras collect videos or images within their monitoring ranges and upload them to a central server.
In the prior art, the central server performs retrieval and comparison in a billion-record database according to the videos or pictures reported by all the cameras, which takes a long time and offers poor analysis capability. To reduce the time consumed, an approximate search algorithm is generally adopted for data classification, but an approximate search algorithm tends to yield lower accuracy, so the time consumption of data processing and the accuracy of classification results cannot both be achieved.
Disclosure of Invention
The technical problem solved by the embodiments of the present invention is that the time consumption of data processing and the accuracy of classification results cannot both be achieved.
To solve the above technical problem, an embodiment of the present invention provides a monitoring video processing method, comprising: receiving object information, reported by each edge computing node, of the objects within its monitoring range, wherein the object information comprises feature information and auxiliary information; classifying all the objects according to the feature information of each object to obtain the auxiliary information of each object from each edge computing node; integrating the auxiliary information of each object from each edge computing node, and taking the integrated auxiliary information as the auxiliary information of each object; and storing the object information of each object.
Optionally, integrating the auxiliary information of each object from each edge computing node comprises: integrating the auxiliary information of each object from each edge computing node in time order.
Optionally, integrating the auxiliary information of each object from each edge computing node in time order comprises: detecting whether the auxiliary information of each object from the edge computing nodes has an overlapping part, the overlapping part being the identical part within the auxiliary information reported by different edge computing nodes; when the overlapping part exists, performing a de-duplication operation to obtain de-duplicated auxiliary information; and integrating the auxiliary information of each object from each edge computing node in time order based on the de-duplicated auxiliary information.
Optionally, the object information of all the objects within its monitoring range reported by each edge computing node is obtained in the following manner: acquiring all videos within the monitoring range of each edge computing node, and analyzing the videos to obtain the objects appearing in each frame of the videos and their corresponding feature information; classifying the objects appearing in all frames of the videos according to their feature information, and determining all the objects appearing within the monitoring range of each edge computing node; and, for each object, obtaining the object information of the object from the images in which it appears.
Optionally, the monitoring video processing method further comprises: receiving object information of a previously-appeared object sent by a neighboring edge computing node, where a previously-appeared object is an object that appeared within the monitoring range of the neighboring edge computing node during a set time period; and updating the feature information of the objects in the present edge computing node according to the feature information of the previously-appeared object.
Optionally, updating the feature information of the objects in the present edge computing node according to the feature information of the previously-appeared object comprises: comparing the feature information of the previously-appeared object with the feature information of the objects in the present edge computing node; when the comparison shows that an object identical to the previously-appeared object exists among the objects of the present edge computing node, merging the object information of that identical object with the object information of the previously-appeared object, and taking the merged object information as the object information of the previously-appeared object; or, when the comparison shows that no object identical to the previously-appeared object exists among the objects of the present edge computing node, adding the object information of the previously-appeared object to the object information of the objects of the present edge computing node.
Optionally, the monitoring video processing method further comprises: after receiving the object information of a previously-appeared object sent by a neighboring edge computing node, discarding that object information when the previously-appeared object is not detected within the monitoring range of the present edge computing node within a preset time span.
Optionally, the auxiliary information comprises: identifiers of the edge computing nodes passed through and information of the cameras passed through.
Optionally, the information of the cameras passed through is obtained in the following manner: for each edge computing node, acquiring, for every object appearing in the node, all the images corresponding to that object; acquiring the acquisition time and acquisition camera of each of those images; and determining the cameras each object passed through according to the order of the image acquisition times, so as to obtain the information of the cameras passed through.
Optionally, the monitoring video processing method further comprises: comparing the feature information of each object with the feature information stored in a household registration database; and determining the identity information of each object according to the comparison result, wherein the object information comprises the identity information.
Optionally, the monitoring video processing method further comprises: receiving a tracking request, the tracking request comprising: feature information of a tracked object; comparing the feature information of the tracked object with the feature information of all stored objects; and finding the tracked object among all the stored objects according to the comparison result and acquiring the object information of the tracked object.
Optionally, storing the object information of each object comprises: detecting, according to the feature information of each object, whether the object already exists in the historically stored information; when the object exists, merging the object information of the object with the object information of the object in the historically stored information; and storing the merged object information as the object information of the object.
An embodiment of the present invention further provides a monitoring system, comprising a central server, edge computing nodes and cameras, wherein: a plurality of cameras are arranged within the monitoring range of each edge computing node; each edge computing node obtains object information of the objects within its monitoring range according to the monitoring videos reported by the cameras and reports the object information to the central server, the object information comprising feature information and auxiliary information; and the central server is configured to classify all the objects according to the feature information of each object to obtain the auxiliary information of each object from each edge computing node, integrate the auxiliary information of each object from each edge computing node, take the integrated auxiliary information as the auxiliary information of each object, and store the object information of each object.
An embodiment of the present invention further provides a monitoring video processing device, comprising: a receiving unit, configured to receive object information of the objects within the respective monitoring ranges sent by the edge computing nodes, the object information comprising feature information and auxiliary information; a classification unit, configured to classify all the objects according to the feature information of each object to obtain the auxiliary information of each object from each edge computing node; a processing unit, configured to integrate the auxiliary information of each object from each edge computing node and take the integrated auxiliary information as the auxiliary information of each object; and a storage unit, configured to store the object information of each object.
An embodiment of the present invention further provides a storage medium, the computer-readable storage medium being a non-volatile or non-transitory storage medium on which a computer program is stored, where the computer program, when executed by a processor, performs the steps of any of the above monitoring video processing methods.
An embodiment of the present invention further provides a terminal, comprising a memory and a processor, the memory storing a computer program capable of running on the processor, where the processor, when running the computer program, performs the steps of any of the above monitoring video processing methods.
Compared with the prior art, the technical scheme of the embodiment of the invention has the following beneficial effects:
Object information of the objects within the respective monitoring ranges sent by the edge computing nodes is received, the object information comprising feature information and auxiliary information; all objects are classified according to the feature information of each object to obtain the auxiliary information of each object from each edge computing node; the auxiliary information of each object from each edge computing node is integrated and taken as the auxiliary information of that object; and the object information of each object is stored. Since the received object information has already been processed by the edge computing nodes, the central server only needs to integrate it, which improves the data processing efficiency of the central server; and since the objects are classified based on their feature information, the accuracy of the classification results can be improved. Both the efficiency of object classification and the accuracy of classification results during data processing can therefore be achieved.
In addition, when tracking a person, only the image of the person to be tracked needs to be compared with the stored feature information of each object. Compared with the prior art, in which the image of the person to be tracked must be compared one by one with hundreds of millions of stored videos or images, both the accuracy of the tracking result and the timeliness of tracking can be effectively improved.
Drawings
FIG. 1 is a flow chart of a surveillance video processing method in an embodiment of the invention;
fig. 2 is a flowchart of a method for acquiring object information in an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a surveillance video processing apparatus according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a topology network of a monitoring system in an embodiment of the invention;
fig. 5 is a schematic diagram of a topology network of a monitoring system in the prior art.
Detailed Description
As described above, fig. 5 shows a network topology diagram of a monitoring video system in the prior art, in which each camera 43 reports the acquired images or videos to the central server 41. The central server 41 performs retrieval and comparison in a billion-record database according to the videos or pictures reported by all the cameras 43, which takes a long time and offers poor analysis capability. To reduce the time consumed, an approximate search algorithm is generally adopted for data classification, but an approximate search algorithm tends to yield lower accuracy, so the time consumption of data processing and the accuracy of classification results cannot both be achieved.
To solve the above problem, in an embodiment of the present invention, object information of the objects within the respective monitoring ranges sent by the edge computing nodes is received, the object information comprising feature information and auxiliary information. All objects need only be classified according to the feature information of each object to obtain the auxiliary information of each object from each edge computing node; the auxiliary information of each object from each edge computing node is integrated and taken as the auxiliary information of that object; and the object information of each object is stored. Since the received object information has already been processed by the edge computing nodes, the central server only needs to integrate it, which improves its data processing efficiency; and classifying the objects based on their feature information improves the accuracy of the classification results. Both the efficiency of object classification and the accuracy of classification results during data processing can therefore be achieved.
In order to make the above objects, features and advantages of the embodiments of the present invention more comprehensible, the following detailed description of the embodiments of the present invention refers to the accompanying drawings.
An embodiment of the present invention provides a monitoring video processing method, which may be used in a monitoring system. Referring to fig. 4, a topology network schematic diagram of a monitoring system in an embodiment of the present invention is provided. The monitoring system may include a central server 41 and edge computing nodes 42, with a plurality of monitoring cameras 43 disposed within the monitoring range of each edge computing node 42. Each edge computing node 42 is communicatively connected to the central server 41 so as to send data to it.
Referring to fig. 1, a flowchart of a surveillance video processing method in an embodiment of the present invention is provided, which may specifically include the following steps:
Step S11, receiving object information of the objects in the monitoring ranges sent by each edge computing node.
In implementations, the object information may include feature information as well as auxiliary information.
The feature information is used to characterize the features of the object, and may include facial feature information, human-body feature information, and the like.
The auxiliary information is used to record information related to the object, such as one or more of the information of the edge computing nodes where the object appeared, the information of the cameras passed through, and the like.
The edge computing node information may include an edge computing node identifier, time information of appearing within the edge computing node, and the like. The edge computing node identifier uniquely corresponds to the edge computing node and is used to identify and distinguish different edge computing nodes. The time information of appearing within the edge computing node may be the time period during which the object appeared within that node.
The passing camera information may include a camera identifier, time information of appearing at the camera, and the like. The camera identifier uniquely corresponds to the camera and is used to identify and distinguish different cameras. The time information of appearing at the camera may be the time period during which the object appeared within the monitoring range of the camera, i.e., the time of entering and the time of exiting the camera's monitoring range.
In a specific implementation, each edge computing node may correspond to a plurality of cameras, that is, a plurality of cameras are arranged in the monitoring range of each edge computing node. Thus, the sum of the monitoring ranges of all cameras within an edge computing node is the monitoring range of that edge computing node.
In some embodiments, based on the edge computing node information, the passing camera information and the like of an object, the track information of the object and the times at which it passed each place can be obtained.
Specifically, when each camera collects a monitoring video or image, the collected video or image carries information such as the camera's identifier and the acquisition time. Since the monitoring range of each camera is fixed, the track information of the object and the time it passed each place can be obtained from the cameras where the object appeared and the times it appeared at each camera.
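To make the shape of this information concrete, the following Python sketch shows one possible representation; the class and field names (CameraVisit, NodeRecord, ObjectInfo) are illustrative assumptions rather than structures defined by the patent. The trajectory helper derives the track information described above by ordering camera visits by entry time.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CameraVisit:
    camera_id: str      # uniquely identifies the camera
    enter_time: float   # time of entering the camera's monitoring range
    exit_time: float    # time of leaving the camera's monitoring range

@dataclass
class NodeRecord:
    node_id: str        # edge computing node identifier
    visits: List[CameraVisit] = field(default_factory=list)

@dataclass
class ObjectInfo:
    feature: List[float]   # facial/body feature vector (the feature information)
    records: List[NodeRecord] = field(default_factory=list)   # the auxiliary information

def trajectory(info: ObjectInfo) -> List[str]:
    """Track information: the sequence of cameras passed, ordered by entry time."""
    visits = [v for r in info.records for v in r.visits]
    return [v.camera_id for v in sorted(visits, key=lambda v: v.enter_time)]
```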
Step S12, classifying all the objects according to the feature information of each object to obtain the auxiliary information of each object from each edge computing node.
In a specific implementation, due to the mobility of objects, an object is not necessarily confined to the monitoring range of one edge computing node: it may move from one place to another, spanning multiple edge computing nodes and appearing in each of their monitoring ranges. The same object may therefore appear within the monitoring ranges of several edge computing nodes.
Based on the above, in the embodiment of the present invention, the objects may be classified according to the feature information of each object, so as to obtain the auxiliary information of each object from each edge computing node.
Specifically, the feature information of the objects is compared: when the similarity between the feature information of an object reported by one edge computing node and that of an object reported by another edge computing node meets a set similarity threshold, the two can be determined to be the same object, and the auxiliary information reported by both edge computing nodes is thereby attributed to that object. By analogy, the auxiliary information reported by every edge computing node where the object appeared can be obtained.
Accordingly, when the similarity of the feature information of two objects does not meet the set similarity threshold, they are determined not to be the same object.
Owing to the mobility of objects, different objects appear at different edge computing nodes, so the edge computing nodes from which each object's auxiliary information comes differ as well.
For example, when an object appears only within the monitoring range of edge computing node A, classifying all objects according to their feature information yields the auxiliary information of that object from edge computing node A.
For another example, when an object appears within the monitoring ranges of edge computing nodes A, B and C, classifying all objects according to their feature information yields the auxiliary information of that object from edge computing nodes A, B and C.
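The classification step can be pictured with the sketch below. It assumes the feature information is a numeric vector and uses cosine similarity with greedy first-match grouping; the patent prescribes neither a particular similarity measure nor a grouping strategy, and the 0.8 threshold is a placeholder.

```python
import math

def cosine_sim(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def classify(reports, threshold=0.8):
    """Group (node_id, feature, aux) reports from the edge nodes: reports whose
    features meet the similarity threshold are treated as the same object, and
    the auxiliary information is collected per object, keyed by source node."""
    objects = []   # each entry: {"feature": ..., "aux_by_node": {node_id: [aux, ...]}}
    for node_id, feature, aux in reports:
        for obj in objects:
            if cosine_sim(feature, obj["feature"]) >= threshold:
                obj["aux_by_node"].setdefault(node_id, []).append(aux)
                break
        else:   # no known object was similar enough: this is a new object
            objects.append({"feature": feature, "aux_by_node": {node_id: [aux]}})
    return objects
```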
Step S13, integrating the auxiliary information of each object from each edge computing node, and taking the integrated auxiliary information as the auxiliary information of each object.
In a specific implementation, the auxiliary information of each object from each edge computing node may be integrated in time order, so as to obtain auxiliary information meeting a given ordering requirement, and the integrated auxiliary information is taken as the auxiliary information of the object.
For example, the auxiliary information may be integrated in forward chronological order, i.e., entries that occurred earlier are placed before entries that occurred later. Alternatively, it may be integrated in reverse chronological order, i.e., entries that occurred later are placed first.
Step S14, storing object information of each object.
In a specific implementation, each edge computing node may periodically report the object information of the objects within its monitoring range, so the historically stored information may hold the object information of each object over historical time.
In the embodiment of the present invention, to ensure that storage proceeds smoothly, whether an object already exists in the historically stored information may be detected according to the feature information of each object; when the object exists, its object information is merged with the object information of the object in the historically stored information; and the merged object information is stored as the object information of the object.
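A minimal sketch of this store-with-merge behaviour; the same_object and merge callables are assumptions standing in for the comparison and merging logic, which the patent leaves unspecified.

```python
def store(history, obj, same_object, merge):
    """Merge obj into the historically stored information if it already exists
    there (judged by feature information), otherwise store it as new."""
    for i, stored in enumerate(history):
        if same_object(stored, obj):
            history[i] = merge(stored, obj)   # merge with the historical record
            return
    history.append(obj)                       # first time this object is stored
```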
As can be seen from the above, object information of the objects within the respective monitoring ranges sent by the edge computing nodes is received, the object information comprising feature information and auxiliary information; all objects need only be classified according to the feature information of each object to obtain the auxiliary information of each object from each edge computing node; the auxiliary information of each object from each edge computing node is integrated and taken as the auxiliary information of that object; and the object information of each object is stored. Since the received object information has already been processed by the edge computing nodes, the central server only needs to integrate it, which improves its data processing efficiency, and classifying based on the feature information of the objects improves the accuracy of the classification results. Both the time consumption of object classification and the accuracy of classification results during data processing can therefore be taken into account.
In a specific implementation, in step S13, the auxiliary information of each object from the respective edge computing nodes may be integrated in time order in the following manner:
Specifically, whether the auxiliary information of each object from the edge computing nodes has an overlapping part is detected, the overlapping part being the identical part within the auxiliary information reported by different edge computing nodes. When the overlapping part exists, a de-duplication operation is performed to obtain de-duplicated auxiliary information, and the auxiliary information of each object from each edge computing node is integrated in time order based on the de-duplicated auxiliary information. This simplifies the integrated information and reduces the storage space it occupies.
In a specific implementation, steps S11 to S14 may be performed by a central server. Each edge computing node may be communicatively connected to a central server.
In a specific implementation, the object information of all objects within the monitoring range reported by each edge computing node may be obtained by that edge computing node in the following manner. Referring to fig. 2, the object information acquisition method in the embodiment of the present invention may specifically include the following steps:
Step S21, each edge computing node may acquire all the videos within its monitoring range and analyze them to obtain the objects appearing in each frame of the videos and the feature information of each object.
In a specific implementation, a given edge computing node can acquire the videos collected by all cameras within its own monitoring range and analyze them to obtain each frame of the videos. Face recognition may be performed on each frame to identify the objects appearing in it and their corresponding feature information, in which case the feature information is facial feature information. Human-body recognition may likewise be performed on each frame to obtain the objects appearing in it and their corresponding feature information, in which case the feature information is human-body feature information.
It can be understood that face recognition and human-body recognition may be performed on each frame at the same time, in which case the obtained feature information includes both facial and human-body feature information. Because several objects appearing in the same frame may occlude one another, the facial and human-body feature information of an object may not be obtainable from a single frame simultaneously. However, since objects are mobile, as an object moves and the face- and body-recognition results from the other frames in which it appears are matched, the probability of eventually obtaining both its facial and human-body feature information remains very high.
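The per-frame analysis loop might look like the following sketch; detect_faces and detect_bodies are hypothetical stand-ins for whatever face and human-body recognition models the edge node deploys, each assumed to return (bounding_box, feature_vector) pairs for a frame.

```python
def analyze_video(frames, detect_faces, detect_bodies):
    """Run face and human-body recognition on every frame and collect the
    observed objects with their feature information."""
    observations = []
    for idx, frame in enumerate(frames):
        for box, feat in detect_faces(frame):
            observations.append({"frame": idx, "kind": "face", "box": box, "feature": feat})
        for box, feat in detect_bodies(frame):
            observations.append({"frame": idx, "kind": "body", "box": box, "feature": feat})
    return observations   # later classified per object (step S22)
```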
Step S22, classifying the objects appearing in all frames of the videos according to their feature information, and determining all the objects appearing within the monitoring range of each edge computing node.
In a specific implementation, an edge computing node may classify the objects appearing in all frames of the videos it acquired according to their feature information, and determine all the objects appearing within its monitoring range. Each edge computing node adopts the same processing.
Step S23, for each object, obtaining the object information of the object from the images in which it appears.
In implementations, when the objects appearing in all frames of a video are classified, the images in which each object appears can be determined. For each object, the object information of that object is then obtained from those images.
Due to the mobility of objects, an object may move from one edge computing node to another. To capture the movement of each object between edge computing nodes, an edge computing node may periodically send the object information of previously-appeared objects to its neighboring nodes, where a previously-appeared object is an object that appeared within the monitoring range of the edge computing node during a set time period, and neighboring nodes are two or more edge computing nodes that are adjacent to one another.
In a specific implementation, the edge computing node periodically sends the object information of the objects that appeared during the set time period to the neighboring nodes. The set time period may be 5 minutes, 10 minutes or one hour, or another value, and may be set according to requirements. For example, the edge computing node may send the object information of the objects that appeared in the last 5 minutes to its neighboring nodes every 5 minutes.
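A sketch of the periodic reporting; get_recent_objects and send_to_neighbors are assumed node-local callables, since the patent fixes neither the storage nor the transport (which may be broadcast or unicast, as discussed next).

```python
import threading
import time

def start_periodic_report(get_recent_objects, send_to_neighbors, period_s=300):
    """Every period_s seconds (5 minutes here), push the objects that appeared
    during the last period to the neighboring edge computing nodes."""
    def loop():
        while True:
            time.sleep(period_s)
            recent = get_recent_objects(window_s=period_s)  # previously-appeared objects
            send_to_neighbors(recent)
    threading.Thread(target=loop, daemon=True).start()
```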
In implementations, the edge computing node may send the object information of previously-appeared objects to the neighboring nodes in a variety of ways.
In one embodiment of the present invention, the edge computing node broadcasts the object information of previously-appeared objects, and the neighboring nodes receive it by listening to the broadcast. Either a directed or an undirected broadcasting mode may be adopted.
In another embodiment of the present invention, the edge computing node, being communicatively connected to its neighboring nodes, may obtain the identifier of a neighboring node and send the object information of previously-appeared objects to that node according to its identifier.
It will be appreciated that the edge computing node may also send the object information of previously-appeared objects to the neighboring nodes in other ways, which are not enumerated here.
In implementations, the object information of previously-appeared objects sent by neighboring edge computing nodes may be received, and the object information of the objects in the present edge computing node is updated according to the feature information of the previously-appeared objects. Sending the object information of previously-appeared objects to the neighboring nodes lets those nodes integrate the information, ensuring the continuity of each object's object information.
Specifically, the feature information of a previously-appeared object is compared with the feature information of the objects in the present edge computing node. When the comparison shows that an object identical to the previously-appeared object exists among the objects of the present edge computing node, the object information of that identical object is merged with the object information of the previously-appeared object, and the merged object information is taken as the object information of the previously-appeared object.
When the comparison shows that no object identical to the previously-appeared object exists among the objects of the present edge computing node, the object information of the previously-appeared object is added to the object information of the objects of the present edge computing node.
In a specific implementation, the edge computing node may store the objects that appeared within its monitoring range and the object information of each object in the form of an object list. When a previously-appeared object is present in the object list, its object information is merged into the corresponding object information held by the edge computing node. When the previously-appeared object is not present in the object list, it is added to the object list of the edge computing node.
For example, when the edge computing node receives the object information of Zhang San, and Zhang San has appeared within the monitoring range of the edge computing node, i.e., Zhang San exists in the object list, the received object information of Zhang San is merged with the object information of Zhang San stored by the edge computing node, and the merged object information is taken as Zhang San's object information.
For another example, when the edge computing node receives the object information of Zhang San, and Zhang San has not appeared within the monitoring range of the edge computing node, i.e., Zhang San is not in the object list, Zhang San is added to the object list and his object information is stored.
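The merge-or-add handling of neighbor reports, together with the timed discard described a little further on, can be sketched as follows; the dictionary layout and the same_object and merge callables are assumptions.

```python
import time
import uuid

def on_neighbor_object(object_list, incoming, same_object, merge):
    """Handle object information received from a neighboring edge node:
    merge it if the object is already in this node's object list (the first
    Zhang San case), otherwise add it as a new entry (the second case)."""
    for record in object_list.values():
        if same_object(record["info"], incoming):
            record["info"] = merge(record["info"], incoming)
            return
    object_list[uuid.uuid4().hex] = {
        "info": incoming,
        "received_at": time.time(),
        "seen_locally": False,   # set True once this node itself observes the object
    }

def prune_stale(object_list, max_age_s):
    """Discard neighbor-supplied records whose object never appeared within
    this node's own monitoring range inside the preset time span."""
    now = time.time()
    stale = [k for k, r in object_list.items()
             if not r.get("seen_locally", True) and now - r["received_at"] > max_age_s]
    for k in stale:
        del object_list[k]
```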
In the prior art, each monitoring management department can only view the monitoring videos within its own monitoring area and cannot obtain the monitoring information of the monitoring management departments of adjacent areas. When tracking a person, for example when family members search for a lost person, they can only start from the monitoring management department corresponding to the place where the person was lost and check the monitoring management departments one by one, which entails a heavy workload and low efficiency. In the embodiment of the present invention, an edge computing node is deployed at each monitoring management site, and by sending the object information of previously-appeared objects to neighboring nodes, the edge computing nodes can establish associations for each object across nodes, so node-by-node investigation is no longer needed and the tracking efficiency can be expected to improve.
In a specific implementation, when a previously-appeared object whose information was sent by a neighboring node appears in the present edge computing node, the received object information is merged with the object information obtained by the present edge computing node. Consequently, when the present node and the neighboring node each report the object information of that object to the central server, an overlapping part exists between the two reports. To address this overlap, when the central server integrates the auxiliary information of each object from the edge computing nodes, it may perform the de-duplication operation described above to remove the overlapping part (which mainly consists of auxiliary information), thereby improving the integrity and compactness of the integration result and avoiding data redundancy, and then integrate the auxiliary information of each object from each edge computing node in time order based on the de-duplicated auxiliary information.
In the embodiment of the present invention, the present edge computing node merges, in time order, the object information of a previously-appeared object sent by neighboring edge computing nodes with the object information of that object obtained by the present edge computing node itself.
In one implementation, after the object information of a previously-appeared object sent by a neighboring edge computing node is received, that object information is discarded if the previously-appeared object is not detected within the monitoring range of the present edge computing node within a preset time span. If the previously-appeared object is not detected within the preset time span, it has not entered the monitoring range of the present edge computing node, so discarding its object information reduces data redundancy.
In a specific implementation, after the object information of each object is obtained, the feature information of each object can be compared with the feature information stored in the household registration database, and the identity information of each object is determined according to the comparison result, wherein the object information comprises the identity information.
The identity information may include an identification number, and may further include one or more of an address, a name, a gender, and the like.
In the embodiment of the present invention, the central server may connect to the household registration system to acquire household registration information and determine the identity information of each object, so the object information can include not only the feature information of the object but also its identity information. When tracking a person, even if no image of the tracked object is available, the object can still be tracked by identity information such as its identity card number or name.
In implementations, a tracking request is received, the tracking request including the feature information of a tracked object. The feature information of the tracked object is compared with the feature information of all stored objects; the tracked object is then found among all the stored objects according to the comparison result, and its object information is acquired.
When tracking a person, only the image of the person to be tracked needs to be compared with the stored feature information of each object. Compared with the prior art, in which the image of the person to be tracked must be compared one by one with hundreds of millions of stored videos or images, both the accuracy of the tracking result and the timeliness of tracking can be effectively improved.
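A sketch of serving a tracking request as a linear scan over the stored feature information; the similarity callable and the 0.8 threshold are assumptions.

```python
def handle_tracking_request(stored_objects, query_feature, similarity, threshold=0.8):
    """Compare the tracked person's feature information with that of every
    stored object and return the best match, or None if nothing is similar
    enough. The returned record carries the object information (trajectory,
    cameras passed, identity information where available)."""
    best, best_score = None, threshold
    for obj in stored_objects:
        score = similarity(query_feature, obj.feature)
        if score >= best_score:
            best, best_score = obj, score
    return best
```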
With the monitoring video processing method provided by the embodiment of the present invention, the city-wide, billion-scale retrieval and comparison performed by the central server in the prior art is split into thousand-scale retrieval and comparison on the edge side (the edge computing nodes). For example, whereas in the prior art the computation and retrieval complexity on the central server side is N², with the video monitoring method provided by the embodiment of the present invention it can be reduced to N log N, so the computation and processing timeliness of the central server can be improved.
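As a rough worked illustration of that reduction (the record count N = 10^8 is an assumed figure, not one taken from the patent):

```latex
% Illustrative only: N is an assumed city-scale record count.
\[
N = 10^{8}: \qquad N^{2} = 10^{16}, \qquad
N \log_{2} N \approx 10^{8} \times 26.6 \approx 2.7 \times 10^{9},
\]
\[
\text{a speed-up factor of } \frac{N^{2}}{N \log_{2} N} = \frac{N}{\log_{2} N} \approx 3.8 \times 10^{6}.
\]
```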
In addition, the computation and processing of object information are performed on the edge computing nodes, so the object information is processed in parallel, which can improve the real-time performance and accuracy of object information processing and reduce the dependence on the central server.
In implementations, when identity information is included in the object information, the tracking request may also include the identity information. Therefore, even if the image information of the tracked object is not available, the tracked object can be tracked through the monitoring system according to the identity information of the tracked object.
In order to facilitate better understanding and implementation of the embodiments of the present invention by those skilled in the art, the embodiment of the present invention further provides a monitoring video processing apparatus.
Referring to fig. 3, a schematic structural diagram of a surveillance video processing apparatus in an embodiment of the present invention is provided; the surveillance video processing apparatus 30 may include:
the receiving unit 31 is configured to receive object information of objects within respective monitoring ranges sent by respective edge computing nodes, where the object information includes: characteristic information and auxiliary information;
the classifying unit 32 is configured to classify all the objects according to the feature information of each object, so as to obtain the auxiliary information of each object from each edge computing node;
The processing unit 33 is configured to integrate the auxiliary information of each object from the edge computing nodes, and use the integrated auxiliary information as the auxiliary information of each object;
the storage unit 34 stores object information of each object.
In a specific implementation, for the specific working principle and workflow of the surveillance video processing apparatus, reference may be made to the description of the monitoring video processing method provided in the foregoing embodiments of the present invention, which is not repeated here.
An embodiment of the present invention further provides a monitoring system, comprising a central server, edge computing nodes and cameras, wherein:
a plurality of cameras are arranged within the monitoring range of each edge computing node; the edge computing node obtains the object information of the objects within its monitoring range according to the monitoring videos reported by the cameras and reports the object information to the central server, the object information comprising feature information and auxiliary information;
the central server is configured to classify all the objects according to the feature information of each object to obtain the auxiliary information of each object from each edge computing node, integrate the auxiliary information of each object from each edge computing node, take the integrated auxiliary information as the auxiliary information of each object, and store the object information of each object.
In implementations, the monitoring system may be built on edge computing in the context of the fifth-generation mobile communication system (5G). The edge computing nodes may be implemented with edge servers to improve the data computing efficiency of the edge computing nodes and the central server, the efficiency of data transmission and response between the central server and the edge computing nodes, and the like.
In a specific implementation, for the specific working principle and workflow of the monitoring system, reference may be made to the description of the monitoring video processing method provided in the foregoing embodiments of the present invention, which is not repeated here.
An embodiment of the present invention further provides a storage medium, the computer-readable storage medium being a non-volatile or non-transitory storage medium on which a computer program is stored, where the computer program, when executed by a processor, performs the steps of the monitoring video processing method of any of the above embodiments.
An embodiment of the present invention further provides a terminal, comprising a memory and a processor, the memory storing a computer program capable of running on the processor, where the processor, when running the computer program, performs the steps of the monitoring video processing method of any of the above embodiments.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be implemented by a program instructing related hardware. The program may be stored in any computer-readable storage medium, and the storage medium may include: a ROM, a RAM, a magnetic disk or an optical disk, etc.
Although the present invention is disclosed above, the present invention is not limited thereto. Various changes and modifications may be made by one skilled in the art without departing from the spirit and scope of the invention, and the scope of the invention should be defined in the appended claims.

Claims (12)

1. A surveillance video processing method, comprising:
receiving object information, reported by each edge computing node, of the objects within its monitoring range, wherein the object information comprises: feature information and auxiliary information, the auxiliary information comprising: identifiers of the edge computing nodes passed through and information of the cameras passed through;
classifying all the objects according to the feature information of each object to respectively obtain the auxiliary information of each object from each edge computing node;
integrating the auxiliary information of each object from each edge computing node, and taking the integrated auxiliary information as the auxiliary information of each object;
storing the object information of each object;
wherein the object information of all the objects within the monitoring range reported by each edge computing node is obtained by the edge computing node in the following manner:
acquiring all videos within the monitoring range of the edge computing node, and analyzing the videos to obtain the objects appearing in each frame of the videos and their corresponding feature information;
classifying the objects appearing in all frames of the videos according to their feature information, and determining all the objects appearing within the monitoring range of the edge computing node;
for each object, obtaining the object information of the object from the images in which it appears;
receiving object information of a previously-appeared object sent by a neighboring edge computing node, wherein a previously-appeared object is an object that appeared within the monitoring range of the neighboring edge computing node during a set time period;
updating the feature information of the objects in the present edge computing node according to the feature information of the previously-appeared object; and
after receiving the object information of the previously-appeared object sent by the neighboring edge computing node, discarding that object information when the previously-appeared object is not detected within the monitoring range of the present edge computing node within a preset time span.
2. The surveillance video processing method of claim 1, wherein said integrating the auxiliary information of each object from each edge computing node comprises:
integrating the auxiliary information of each object from each edge computing node in time order.
3. The surveillance video processing method of claim 2, wherein said integrating the auxiliary information of each object from each edge computing node in time order comprises:
detecting whether the auxiliary information of each object from the edge computing nodes has an overlapping part, the overlapping part being the identical part within the auxiliary information reported by different edge computing nodes;
when the overlapping part exists, performing a de-duplication operation to obtain de-duplicated auxiliary information;
and integrating the auxiliary information of each object from each edge computing node in time order based on the de-duplicated auxiliary information.
4. The surveillance video processing method of claim 1, wherein updating the feature information of the objects in the present edge computing node according to the feature information of the previously-appeared object comprises:
comparing the feature information of the previously-appeared object with the feature information of the objects in the present edge computing node;
when the comparison shows that an object identical to the previously-appeared object exists among the objects of the present edge computing node, merging the object information of that identical object with the object information of the previously-appeared object, and taking the merged object information as the object information of the previously-appeared object;
or, when the comparison shows that no object identical to the previously-appeared object exists among the objects of the present edge computing node, adding the object information of the previously-appeared object to the object information of the objects of the present edge computing node.
5. The surveillance video processing method of claim 1, wherein the information of the cameras passed through is obtained in the following manner:
for each edge computing node, acquiring, for every object appearing in the node, all the images corresponding to the object; acquiring the acquisition time and acquisition camera of each of those images; and determining the cameras each object passed through according to the order of the image acquisition times, so as to obtain the information of the cameras passed through.
6. The surveillance video processing method of claim 1, further comprising:
comparing the feature information of each object with the feature information stored in a household registration database;
and determining the identity information of each object according to the comparison result, wherein the object information comprises the identity information.
7. The surveillance video processing method of claim 1, further comprising:
receiving a tracking request, the tracking request comprising feature information of a tracked object;
comparing the feature information of the tracked object with the feature information of all stored objects;
and finding the tracked object among all stored objects according to the comparison result, and acquiring the object information of the tracked object.
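A sketch of resolving a tracking request per claim 7. The dataclass, the list-of-dicts store, and the injected `similarity` callable are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class TrackingRequest:
    feature: list  # feature information of the object to be tracked

def handle_tracking_request(request, stored_objects, similarity):
    """Find the stored object whose feature information best matches the
    tracking request and return its object information (None if empty)."""
    return max(stored_objects,
               key=lambda obj: similarity(request.feature, obj["feature"]),
               default=None)
```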
8. The surveillance video processing method of claim 1, wherein storing the object information of each object comprises:
detecting, according to the feature information of each object, whether the object already exists in the historical stored information;
when the object exists, merging the object information of the object with the object information of that object in the historical stored information;
and storing the merged object information as the object information of the object.
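A sketch of the check-merge-store flow of claim 8. The dict-backed history store, the `same_object` predicate, and the list-valued `records`/`auxiliary` fields are assumptions for illustration.

```python
def store_object(history, obj, same_object):
    """Merge the object into the history store if it already exists there,
    otherwise store it under a new key; returns the key used."""
    for key, stored in history.items():
        if same_object(stored["feature"], obj["feature"]):
            # Object already present: merge the two object-information
            # records and store the merged result.
            stored["records"] += obj["records"]
            stored["auxiliary"] += obj["auxiliary"]
            return key
    new_key = len(history)  # illustrative key scheme
    history[new_key] = obj
    return new_key
```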
9. A surveillance video processing apparatus, comprising:
a receiving unit, configured to receive object information, sent by edge computing nodes, of objects within their respective monitoring ranges, wherein the object information comprises: feature information and auxiliary information, the auxiliary information comprising: an edge computing node identifier and path camera information;
a classification unit, configured to classify all objects according to the feature information of each object, to obtain the auxiliary information of each object from the respective edge computing nodes;
a processing unit, configured to integrate the auxiliary information of each object from the respective edge computing nodes and to take the integrated auxiliary information as the auxiliary information of each object;
and a storage unit, configured to store the object information of each object;
wherein the object information of all objects within the monitoring range reported by each edge computing node is obtained by the edge computing node in the following manner:
acquiring all videos within the monitoring range of the edge computing node, and analyzing the videos to obtain the objects appearing in each frame image of the videos and their corresponding feature information;
classifying the objects appearing in all frame images of the videos according to their feature information, and determining all objects appearing within the monitoring range of the edge computing node;
for each object, obtaining the object information of the object according to images of the object;
receiving, from an adjacent edge computing node, object information of a previously-appeared object, wherein a previously-appeared object is an object that appeared within the monitoring range of the adjacent edge computing node during a set time period;
updating the feature information of the objects in the edge computing node according to the feature information of the previously-appeared object;
and after receiving the object information of the previously-appeared object sent by the adjacent edge computing node, discarding that object information when the previously-appeared object is not detected within the monitoring range of the edge computing node within a preset time length.
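A skeleton sketch mapping the four claim-9 units onto one Python class; method bodies, data shapes, and the injected `same_object` predicate are assumptions, not the patented implementation.

```python
class SurveillanceVideoProcessor:
    """Receiving, classification, processing and storage units of the
    claim-9 apparatus, modeled as methods on one class."""

    def __init__(self, same_object):
        self.same_object = same_object  # assumed feature-match predicate
        self.store = {}                 # object key -> object information

    def receive(self, node_reports):
        # Receiving unit: flatten per-node reports into one object list.
        return [obj for report in node_reports for obj in report]

    def classify(self, objects):
        # Classification unit: group objects by feature information so each
        # group holds one object's auxiliary info from every node.
        groups = []
        for obj in objects:
            for group in groups:
                if self.same_object(group[0]["feature"], obj["feature"]):
                    group.append(obj)
                    break
            else:
                groups.append([obj])
        return groups

    def process_and_store(self, groups):
        # Processing unit + storage unit: integrate each group's auxiliary
        # information in time order and persist the merged record.
        for key, group in enumerate(groups):
            merged = sorted((rec for member in group
                             for rec in member["auxiliary"]),
                            key=lambda rec: rec["timestamp"])
            self.store[key] = {"feature": group[0]["feature"],
                               "auxiliary": merged}
```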
10. A monitoring system, comprising: a central server, edge computing nodes, and cameras, wherein:
a plurality of cameras are deployed within the monitoring range of each edge computing node; the edge computing node obtains object information of objects within its monitoring range according to the surveillance videos reported by the cameras and reports the object information to the central server, the object information comprising: feature information and auxiliary information, the auxiliary information comprising: an edge computing node identifier and path camera information;
the central server is configured to classify all objects according to the feature information of each object to obtain the auxiliary information of each object from the respective edge computing nodes, to integrate the auxiliary information of each object from the respective edge computing nodes, to take the integrated auxiliary information as the auxiliary information of each object, and to store the object information of each object;
and each edge computing node obtains the object information of all objects within its monitoring range, which it reports, in the following manner:
acquiring all videos within the monitoring range of the edge computing node, and analyzing the videos to obtain the objects appearing in each frame image of the videos and their corresponding feature information;
classifying the objects appearing in all frame images of the videos according to their feature information, and determining all objects appearing within the monitoring range of the edge computing node;
for each object, obtaining the object information of the object according to images of the object;
receiving, from an adjacent edge computing node, object information of a previously-appeared object, wherein a previously-appeared object is an object that appeared within the monitoring range of the adjacent edge computing node during a set time period;
updating the feature information of the objects in the edge computing node according to the feature information of the previously-appeared object;
and after receiving the object information of the previously-appeared object sent by the adjacent edge computing node, discarding that object information when the previously-appeared object is not detected within the monitoring range of the edge computing node within a preset time length.
11. A computer-readable storage medium, the computer-readable storage medium being a non-volatile storage medium or a non-transitory storage medium and having a computer program stored thereon, wherein the computer program, when run by a processor, performs the steps of the surveillance video processing method according to any one of claims 1 to 8.
12. A terminal comprising a memory and a processor, the memory storing a computer program executable on the processor, wherein the processor, when executing the computer program, performs the steps of the surveillance video processing method according to any one of claims 1 to 8.
CN202011360214.1A 2020-11-27 2020-11-27 Monitoring video processing method and device, monitoring system, storage medium and terminal Active CN112836565B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011360214.1A CN112836565B (en) 2020-11-27 2020-11-27 Monitoring video processing method and device, monitoring system, storage medium and terminal

Publications (2)

Publication Number Publication Date
CN112836565A CN112836565A (en) 2021-05-25
CN112836565B true CN112836565B (en) 2024-04-12

Family

ID=75923407

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011360214.1A Active CN112836565B (en) 2020-11-27 2020-11-27 Monitoring video processing method and device, monitoring system, storage medium and terminal

Country Status (1)

Country Link
CN (1) CN112836565B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108062511A (en) * 2017-11-17 2018-05-22 维库(厦门)信息技术有限公司 Cross-regional multi-camera target recognition and association tracking method, and computer device
CN110543803A (en) * 2018-05-29 2019-12-06 菜鸟智能物流控股有限公司 Monitoring method, device, server and storage medium
CN111405241A (en) * 2020-02-21 2020-07-10 中国电子技术标准化研究院 Edge calculation method and system for video monitoring
CN111901573A (en) * 2020-08-17 2020-11-06 泽达易盛(天津)科技股份有限公司 Fine granularity real-time supervision system based on edge calculation

Also Published As

Publication number Publication date
CN112836565A (en) 2021-05-25

Similar Documents

Publication Publication Date Title
CN110175549B (en) Face image processing method, device, equipment and storage medium
TWI425454B (en) Method, system and computer program product for reconstructing moving path of vehicle
CN107292240B (en) Person finding method and system based on face and body recognition
CN105574506B (en) Intelligent face pursuit system and method based on deep learning and large-scale clustering
US20210357678A1 (en) Information processing method and apparatus, and storage medium
CN108091140B (en) Method and device for determining fake-licensed vehicle
CN110826594A (en) Track clustering method, equipment and storage medium
CN109858354B (en) Face identity library, track table establishment method and face track query method and system
CN110969215A (en) Clustering method and device, storage medium and electronic device
CN110889314B (en) Image processing method, device, electronic equipment, server and system
CN112770265B (en) Pedestrian identity information acquisition method, system, server and storage medium
CN114627406A (en) Method, system, equipment and medium for identifying rapid crowd gathering behaviors
CN112836565B (en) Monitoring video processing method and device, monitoring system, storage medium and terminal
CN112183161B (en) Face database processing method, device and equipment
De Marsico et al. ES-RU: an entropy based rule to select representative templates in face surveillance
CN114648056A (en) Association method, electronic device and computer-readable storage medium
CN112559583B (en) Method and device for identifying pedestrians
CN113051981B (en) Face recognition method and device
CN113886631A (en) Video archive generation method and device and storage medium
CN114090909A (en) Graph code joint detection correlation method and device, computer equipment and storage medium
CN112883213B (en) Picture archiving method and device and electronic equipment
CN114265952A (en) Target retrieval method and device
CN113220750A (en) Method and device for identifying fellow persons and computer readable storage medium
CN112040325A (en) Video playing method and device, electronic equipment and storage medium
WO2013175771A1 (en) Image search system and image search method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210819

Address after: 200080 7th floor, No.137 Haining Road, Hongkou District, Shanghai

Applicant after: Shanghai Xinyi Intelligent Technology Co.,Ltd.

Address before: 100190 1008, 10th floor, building 51, 63 Zhichun Road, Haidian District, Beijing

Applicant before: Beijing Xinyi Intelligent Information Technology Co.,Ltd.

CB02 Change of applicant information

Country or region after: China

Address after: Room E176, 1st Floor, No. 406, Gulang Road, Putuo District, Shanghai 200080

Applicant after: Shanghai Xinyi Intelligent Technology Co.,Ltd.

Address before: 200080 7th floor, No.137 Haining Road, Hongkou District, Shanghai

Applicant before: Shanghai Xinyi Intelligent Technology Co.,Ltd.

Country or region before: China

GR01 Patent grant