Disclosure of Invention
The technical problem solved by the embodiments of the present invention is that the time consumption of data processing and the accuracy of classification results cannot both be taken into account.
In order to solve the above technical problem, an embodiment of the present invention provides a method for processing a surveillance video, including: receiving object information of objects in respective monitoring ranges reported by respective edge computing nodes, wherein the object information includes: feature information and auxiliary information; classifying all the objects according to the feature information of each object to respectively obtain the auxiliary information of each object from each edge computing node; integrating the auxiliary information of each object from each edge computing node, and taking the integrated auxiliary information as the auxiliary information of each object; and storing the object information of each object.
Optionally, the integrating the auxiliary information of each object from each edge computing node includes: integrating the auxiliary information of each object from each edge computing node according to a time sequence.
Optionally, the integrating the auxiliary information of each object from each edge computing node according to a time sequence includes: detecting whether the auxiliary information of each object from each edge computing node has an overlapping portion, wherein the overlapping portion refers to the same part of the auxiliary information uploaded by different edge computing nodes; when the overlapping portion exists, performing a de-duplication operation to obtain de-duplicated auxiliary information; and integrating the auxiliary information of each object from each edge computing node according to the time sequence based on the de-duplicated auxiliary information.
Optionally, the object information of all the objects in the respective monitoring ranges reported by the edge computing nodes is obtained in the following manner: acquiring all videos within the monitoring range of each edge computing node, and analyzing the videos to obtain the objects appearing in each frame image of the videos and the corresponding feature information; classifying the objects appearing in all frame images of the videos according to the feature information of those objects, and determining all objects appearing in the monitoring range of each edge computing node; and for each object, obtaining the object information of the object from the images in which the object appears.
Optionally, the monitoring video processing method further includes: receiving object information of an appeared object sent by an adjacent edge computing node, wherein the appeared object refers to an object that appeared within the monitoring range of the adjacent edge computing node within a set time period; and updating the feature information of the object in the edge computing node according to the feature information of the appeared object.
Optionally, the updating the feature information of the object in the edge computing node according to the feature information of the appeared object includes: comparing the feature information of the appeared object with the feature information of the objects in the edge computing node; when the comparison result shows that an object identical to the appeared object exists among the objects of the edge computing node, merging the object information of that identical object with the object information of the appeared object, and taking the merged object information as the object information of the appeared object; or, when the comparison result shows that no object identical to the appeared object exists among the objects of the edge computing node, adding the object information of the appeared object to the object information of the objects of the edge computing node.
Optionally, the monitoring video processing method further includes: after receiving the object information of an appeared object sent by an adjacent edge computing node, discarding the object information of the appeared object when the appeared object is not detected within the monitoring range of the edge computing node within a preset time period.
Optionally, the auxiliary information includes: identifiers of the edge computing nodes that the object passes through and information of the cameras that the object passes.
Optionally, the information of the passed cameras is obtained in the following manner: for each edge computing node, obtaining all the objects appearing in the node, and acquiring all images corresponding to each object; acquiring the image acquisition time and the acquisition camera of each image corresponding to each object, and determining the cameras that each object passes according to the order of the image acquisition times, so as to obtain the information of the passed cameras.
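The camera-path derivation described above can be sketched as follows. This is a minimal illustration only; the field names `camera_id` and `captured_at` are assumptions for the sketch, not part of the claimed embodiment. Images are sorted by acquisition time and consecutive shots from the same camera are collapsed into one path entry.

```python
from datetime import datetime

def passed_cameras(images):
    """Order an object's images by acquisition time and derive the
    sequence of cameras the object passed, collapsing consecutive
    shots from the same camera into a single entry."""
    ordered = sorted(images, key=lambda img: img["captured_at"])
    path = []
    for img in ordered:
        cam = img["camera_id"]
        if not path or path[-1] != cam:
            path.append(cam)
    return path

# Illustrative records: one object photographed by three cameras.
images = [
    {"camera_id": "cam-2", "captured_at": datetime(2023, 1, 1, 9, 5)},
    {"camera_id": "cam-1", "captured_at": datetime(2023, 1, 1, 9, 0)},
    {"camera_id": "cam-2", "captured_at": datetime(2023, 1, 1, 9, 6)},
    {"camera_id": "cam-3", "captured_at": datetime(2023, 1, 1, 9, 15)},
]
```

Here `passed_cameras(images)` yields the path `["cam-1", "cam-2", "cam-3"]`.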
Optionally, the monitoring video processing method further includes: comparing the characteristic information of each object with the characteristic information stored in the household registration database; and determining the identity information of each object according to the comparison result, wherein the object information comprises the identity information.
Optionally, the monitoring video processing method further includes: receiving a trace request, the trace request comprising: tracking characteristic information of an object; comparing the characteristic information of the tracking object with the characteristic information of all the stored objects; and finding out the tracking object from all the stored objects according to the comparison result, and acquiring the object information of the tracking object.
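The tracking comparison described above can be sketched as follows, assuming feature information takes the form of numeric feature vectors compared by cosine similarity; the similarity measure, the threshold value, and all names are illustrative assumptions rather than a prescription of the embodiment.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def find_tracked_object(trace_feature, stored_objects, threshold=0.9):
    """Compare the tracking object's feature vector against every stored
    object and return the best-matching object id above the threshold,
    or None when no stored object is similar enough."""
    best_id, best_score = None, threshold
    for obj_id, info in stored_objects.items():
        score = cosine_similarity(trace_feature, info["feature"])
        if score >= best_score:
            best_id, best_score = obj_id, score
    return best_id
```

With `stored_objects = {"p1": {"feature": [1.0, 0.0]}, "p2": {"feature": [0.0, 1.0]}}`, a trace request carrying the feature `[0.98, 0.05]` would resolve to `"p1"`.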
Optionally, the storing the object information of each object includes: detecting whether the object exists in the history storage information according to the characteristic information of each object; when the object exists, merging object information of the object with object information of the object in the history storage information; and storing the combined object information as object information of the object.
The embodiment of the invention also provides a monitoring system, which includes: a central server, edge computing nodes and cameras, wherein: a plurality of cameras are disposed within the monitoring range of each edge computing node; each edge computing node obtains the object information of the objects within its monitoring range according to the surveillance videos reported by the cameras and reports the object information to the central server, the object information including: feature information and auxiliary information; and the central server is configured to classify all the objects according to the feature information of each object to obtain the auxiliary information of each object from each edge computing node, integrate the auxiliary information of each object from each edge computing node, take the integrated auxiliary information as the auxiliary information of each object, and store the object information of each object.
The embodiment of the invention also provides a monitoring video processing device, which includes: a receiving unit, configured to receive object information of objects in respective monitoring ranges sent by respective edge computing nodes, where the object information includes: feature information and auxiliary information; a classification unit, configured to classify all the objects according to the feature information of each object to respectively obtain the auxiliary information of each object from each edge computing node; a processing unit, configured to integrate the auxiliary information of each object from each edge computing node and take the integrated auxiliary information as the auxiliary information of each object; and a storage unit, configured to store the object information of each object.
The embodiment of the invention also provides a storage medium, wherein the computer-readable storage medium is a non-volatile storage medium or a non-transitory storage medium, and a computer program is stored thereon; when the computer program is executed by a processor, the steps of any of the above monitoring video processing methods are performed.
The embodiment of the invention also provides a terminal, which includes a memory and a processor, wherein the memory stores a computer program capable of running on the processor, and the processor, when running the computer program, performs the steps of any of the above monitoring video processing methods.
Compared with the prior art, the technical scheme of the embodiment of the invention has the following beneficial effects:
Object information of objects in respective monitoring ranges sent by each edge computing node is received, wherein the object information includes feature information and auxiliary information; all the objects are classified according to the feature information of each object to obtain the auxiliary information of each object from each edge computing node; the auxiliary information of each object from each edge computing node is integrated, the integrated auxiliary information is taken as the auxiliary information of each object, and the object information of each object is stored. The received object information has already been processed by each edge computing node, and the central server only needs to integrate the processed object information, so the data processing efficiency of the central server can be improved; moreover, classifying objects based on their feature information can improve the accuracy of the classification results. Therefore, both the time consumption of object classification and the accuracy of the classification results in the data processing process can be taken into account.
In addition, when tracking a person, only the image of the person to be tracked needs to be compared with the stored feature information of each object. Compared with the prior art, in which the image of the person to be tracked must be compared one by one against hundreds of millions of stored videos or images, both the accuracy of the tracking result and the tracking timeliness can be improved.
Detailed Description
As described above, fig. 5 shows a network topology diagram of a surveillance video system in the prior art, in which each camera 43 reports acquired images or videos to the central server 41. The central server 41 performs check comparisons in a billion-level database according to the videos or pictures reported by all cameras 43, which takes a long time and offers poor analysis capability. To reduce time consumption, an approximate search algorithm is generally adopted for data classification, but an approximate search algorithm tends to yield lower accuracy, so the time consumption of data processing and the accuracy of the classification results cannot both be taken into account.
In order to solve the above problem, in the embodiment of the present invention, object information of objects in respective monitoring ranges sent by respective edge computing nodes is received, where the object information includes feature information and auxiliary information; all objects only need to be classified according to the feature information of each object to obtain the auxiliary information of each object from each edge computing node; the auxiliary information of each object from each edge computing node is integrated, the integrated auxiliary information is taken as the auxiliary information of each object, and the object information of each object is stored. The received object information has already been processed by each edge computing node, and the central server only needs to integrate the processed object information, so the data processing efficiency of the central server can be improved; moreover, classifying objects based on their feature information can improve the accuracy of the classification results. Therefore, both the time consumption of object classification and the accuracy of the classification results in the data processing process can be taken into account.
In order to make the above objects, features and advantages of the embodiments of the present invention more comprehensible, the following detailed description of the embodiments of the present invention refers to the accompanying drawings.
An embodiment of the present invention provides a monitoring video processing method, which may be used in a monitoring system. Referring to fig. 4, a topology network schematic diagram of a monitoring system in an embodiment of the present invention is provided, where the monitoring system may include a central server 41 and edge computing nodes 42, and a plurality of monitoring cameras 43 are disposed within the monitoring range of each edge computing node 42. The edge computing node 42 is communicatively connected to the central server 41 so as to send data to the central server.
Referring to fig. 1, a flowchart of a surveillance video processing method in an embodiment of the present invention is provided, which may specifically include the following steps:
and S11, receiving object information of the objects in the monitoring range sent by each edge computing node.
In implementations, the object information may include feature information as well as auxiliary information.
The feature information is used to characterize the features of the object, and may include facial feature information, human body feature information, and the like.
The auxiliary information is used to record information related to the object, such as one or more of information of the edge computing nodes where the object appears, information of the cameras the object passes, and the like.
The edge computing node information may include an edge computing node identifier, time information appearing in the edge computing node, and the like, wherein the edge computing node identifier uniquely corresponds to the edge computing node, and the edge computing node identifier is used for identifying and distinguishing different edge computing nodes. The time information that appears within the edge computing node may be a time period that appears within the edge computing node.
The elapsed camera information may include a camera identification, time information present at the camera, and the like. The camera identification is uniquely corresponding to the camera, and is used for identifying and distinguishing different cameras. The time information present in the camera may be a time period present in the monitoring range of the camera, i.e. a time point of entering the monitoring range of the camera and a time point of exiting the monitoring range of the camera.
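One possible in-memory layout for the object information described above is sketched below. All class and field names here are illustrative assumptions for this sketch; the embodiment does not prescribe a concrete data structure.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class CameraVisit:
    """A camera the object passed, with the time period it was in range."""
    camera_id: str
    enter_time: str   # time point of entering the camera's monitoring range
    leave_time: str   # time point of leaving the camera's monitoring range

@dataclass
class ObjectInfo:
    """Object information: feature information plus auxiliary information."""
    feature: List[float]             # face/human-body feature vector
    node_id: str                     # edge computing node identifier
    appear_period: Tuple[str, str]   # time period present within the node
    cameras: List[CameraVisit] = field(default_factory=list)
```

An entry for an object seen at one camera could then be built as `ObjectInfo(feature=[0.1, 0.2], node_id="node-A", appear_period=("09:00", "09:30"))`, with `CameraVisit` records appended as the object moves.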
In a specific implementation, each edge computing node may correspond to a plurality of cameras, that is, a plurality of cameras are arranged in the monitoring range of each edge computing node. Thus, the sum of the monitoring ranges of all cameras within an edge computing node is the monitoring range of that edge computing node.
In some embodiments, based on the edge computing node information, the passed camera information, and the like of an object, the track information of the object, the times at which the object passed each place, and the like can be obtained.
Specifically, when each camera collects a monitoring video or image, the collected video or image carries information such as the identification of the camera and the collection time. Because the monitoring range of each camera is fixed, the track information of the object, the time of each place of the way and the like can be obtained according to the cameras where the object appears and the time of each camera.
Step S12, classifying all the objects according to the characteristic information of each object to obtain the auxiliary information of each object from each edge computing node.
In a specific implementation, due to the mobility of objects, an object may not stay within the monitoring range of a single edge computing node; it may move from one place to another, spanning multiple edge computing nodes and appearing in the monitoring ranges of each of them in turn, so the same object may exist within the monitoring ranges of multiple edge computing nodes.
Based on the above, in the embodiment of the present invention, the objects may be classified according to the feature information of each object, so as to obtain the auxiliary information of each object from each edge computing node.
Specifically, the feature information of the objects is compared; when the similarity between the feature information of an object reported by one edge computing node and that of an object reported by another edge computing node meets a set similarity threshold, the two can be determined to be the same object, so that the auxiliary information reported by the two edge computing nodes from which the object comes is obtained. By analogy, the auxiliary information reported by every edge computing node where the object appears can be obtained.
Accordingly, when the similarity of the feature information of one object and another object does not satisfy the set similarity threshold, it is determined that the two objects are not the same object.
Because of the mobility of objects, different objects appear at different edge computing nodes, so the sets of edge computing nodes from which the auxiliary information of each object comes also differ.
For example, when an object appears only in the monitoring range of edge computing node A, after all objects are classified according to the feature information of the object, the auxiliary information of the object from edge computing node A is obtained.
For another example, when an object appears in the monitoring ranges of the edge computing node a, the edge computing node B and the edge computing node C, after classifying all the objects according to the characteristic information of the object, the auxiliary information of the object is obtained from the edge computing node a, the edge computing node B and the edge computing node C.
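The classification step above can be sketched as a greedy grouping by feature similarity. This is a minimal sketch under the assumption that features are numeric vectors compared by cosine similarity; the threshold value and all names are illustrative, not part of the embodiment.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def classify_objects(reports, threshold=0.9):
    """Greedily group per-node reports whose feature vectors meet the
    similarity threshold; each group collects the auxiliary information
    reported for the same object by every edge computing node."""
    groups = []
    for rep in reports:
        for group in groups:
            if cosine_similarity(group["feature"], rep["feature"]) >= threshold:
                group["auxiliary"].append(rep["auxiliary"])
                break
        else:  # no existing group matched: treat as a new object
            groups.append({"feature": rep["feature"],
                           "auxiliary": [rep["auxiliary"]]})
    return groups
```

For instance, reports of near-identical features from nodes A and B fall into one group (the same object seen at two nodes), while a dissimilar feature from node C forms a second group.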
Step S13, integrating the auxiliary information of each object from each edge computing node, and taking the integrated auxiliary information as the auxiliary information of each object.
In a specific implementation, the auxiliary information of each object from each edge computing node may be integrated according to a time sequence, so as to obtain auxiliary information meeting a certain time sequence requirement, and the integrated auxiliary information is used as the auxiliary information of each object.
For example, the auxiliary information may be integrated in positive sequence, i.e., auxiliary information that occurred earlier is placed before auxiliary information that occurred later. For another example, the auxiliary information may be integrated in reverse order, i.e., auxiliary information that occurred later is placed before auxiliary information that occurred earlier.
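The time-sequence integration of step S13 can be sketched as follows, assuming each auxiliary record carries a sortable `time` field (an illustrative assumption); positive sequence and reverse order are simply ascending and descending sorts over the merged records.

```python
def integrate_by_time(aux_lists, reverse=False):
    """Merge auxiliary-information records from several edge computing
    nodes into one list ordered by occurrence time; reverse=False gives
    positive (earliest-first) sequence, reverse=True gives reverse order."""
    merged = [rec for lst in aux_lists for rec in lst]
    return sorted(merged, key=lambda rec: rec["time"], reverse=reverse)
```

Given records from node A at times 2 and 3 and a record from node B at time 1, positive-sequence integration orders them 1, 2, 3.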
Step S14, storing object information of each object.
In a specific implementation, each edge computing node may periodically report object information of objects in the monitoring range, so that in the history storage information, object information of each object in the history time may be stored.
In the embodiment of the present invention, in order to ensure that the storage proceeds smoothly, whether an object already exists in the history storage information can be detected according to the feature information of each object; when the object exists, the object information of the object is merged with the object information of that object in the history storage information, and the merged object information is stored as the object information of the object.
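The detect-then-merge storage of step S14 can be sketched as follows. For brevity this sketch keys the history store by an exact feature key; in practice the detection would compare feature information by similarity, and all names here are illustrative assumptions.

```python
def store_object(history, feature_key, new_info):
    """Store object information: if an object with the same feature key
    already exists in the history store, merge the new auxiliary records
    into it; otherwise create a new entry."""
    if feature_key in history:
        history[feature_key]["auxiliary"].extend(new_info["auxiliary"])
    else:
        history[feature_key] = {"auxiliary": list(new_info["auxiliary"])}
    return history
```

Two successive reports for the same object thus accumulate into one history entry instead of creating duplicates.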
As can be seen from the above, object information of objects in respective monitoring ranges sent by respective edge computing nodes is received, where the object information includes feature information and auxiliary information; all objects only need to be classified according to the feature information of each object to obtain the auxiliary information of each object from each edge computing node; the auxiliary information of each object from each edge computing node is integrated, the integrated auxiliary information is taken as the auxiliary information of each object, and the object information of each object is stored. The received object information has already been processed by each edge computing node, and the central server only needs to integrate the processed object information, so the data processing efficiency of the central server can be improved; classification based on the feature information of the objects can improve the accuracy of the classification results. Therefore, both the time consumption of object classification and the accuracy of the classification results in the data processing process can be taken into account.
In a specific implementation, in step S13, the auxiliary information from the respective edge computing nodes of each object may be integrated in time sequence in the following manner:
specifically, whether each object has an overlapping portion of the auxiliary information from each edge computing node is detected, wherein the overlapping portion refers to the same portion in the auxiliary information reported by each edge computing node. When the overlapping part exists, performing the de-duplication operation to obtain the auxiliary information after de-duplication; and integrating the auxiliary information of each object from each edge computing node according to time sequence based on the auxiliary information after the duplication removal. The integrated information is simplified, so that occupation of storage space is reduced.
In a specific implementation, steps S11 to S14 may be performed by a central server. Each edge computing node may be communicatively connected to a central server.
In a specific implementation, the object information of all objects within the monitoring range reported by each edge computing node may be obtained by the edge computing node in the following manner. A method for obtaining object information in an embodiment of the present invention is described below with reference to fig. 2, and may specifically include the following steps:
In step S21, each edge computing node may acquire all the videos within the respective monitoring range, and analyze the acquired videos to obtain the object appearing in each frame of image of the video and the feature information of each object.
In a specific implementation, for a given edge computing node, the node can acquire the videos collected by all cameras within its own monitoring range and analyze the acquired videos to obtain each frame image in the videos. Face recognition may be performed on each frame image to identify the objects appearing in it and the corresponding feature information, in which case the feature information is facial feature information. Human body recognition may also be performed on each frame image to obtain the objects appearing in it and the corresponding feature information, in which case the feature information is human body feature information.
It can be understood that face recognition and human body recognition can be performed on each frame image at the same time, in which case the obtained feature information includes both facial feature information and human body feature information. Because several objects appearing in the same frame image may occlude one another, the facial feature information and the human body feature information of an object may not both be obtainable from a single frame. However, because the object is mobile, by combining the face recognition and human body recognition results across the other frame images in which the object appears, the probability of obtaining both the facial feature information and the human body feature information of the object remains very high.
Step S22, classifying the objects appearing in all frame images of the video according to the characteristic information of the objects appearing in all frame images of the video, and determining all objects appearing in the monitoring range of each edge computing node.
In a specific implementation, for an edge computing node, the edge computing node may classify objects appearing in all frames of the video according to feature information of objects appearing in all frame images of the video acquired by the edge computing node, and determine all objects appearing in a monitoring range of the edge computing node. Wherein each edge computing node adopts the same processing mode.
Step S23, for each object, obtaining the object information of the object according to the images in which the object appears.
In implementations, when the objects appearing in all frame images of a video are classified, the images in which each object appears can be determined. For each object, the object information of that object is then obtained from the images in which it appears.
Due to the mobility of objects, an object may move from one edge computing node to another. In order to obtain the movement of each object between edge computing nodes, an edge computing node can periodically send the object information of the objects that have appeared within a set time period to its adjacent nodes. Here, an appeared object refers to an object that appeared within the monitoring range of the edge computing node within the set time period, and adjacent nodes refer to edge computing nodes whose monitoring ranges are adjacent to each other.
In a specific implementation, the edge computing node periodically transmits object information of objects which appear in a set period of time to the adjacent nodes. The set time period may be 5 minutes, 10 minutes or one hour, or may be other values, and may be specifically set according to the requirement. For example, the edge computing node transmits object information of an object that appears 5 minutes recently to the neighboring node every 5 minutes.
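Selecting which objects to send in each periodic report can be sketched as a simple window filter. This is an illustrative sketch only: timestamps are plain seconds and the field name `last_seen` is an assumption, not part of the embodiment.

```python
def recently_appeared(object_list, now, window_seconds=300):
    """Select the objects whose last appearance falls within the set
    time period (here 5 minutes) for sending to adjacent nodes."""
    return [obj for obj in object_list
            if now - obj["last_seen"] <= window_seconds]
```

With a 5-minute window, an object last seen 10 seconds ago is included in the report to adjacent nodes, while one last seen 400 seconds ago is not.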
In implementations, the edge computing node may send the object information for which the object was present to the neighboring nodes in a variety of ways.
In one embodiment of the present invention, the edge computing node broadcasts object information of the appearing object in a broadcast manner, and the neighboring node receives the object information of the appearing object by listening to the broadcast. Wherein, a directional broadcasting mode can be adopted, and a non-directional broadcasting mode can also be adopted.
In another embodiment of the present invention, the edge computing node may obtain an identifier of a neighboring edge node, where the edge computing node is communicatively connected to the neighboring node, and send object information of the object that has occurred to the neighboring node according to the identifier of the neighboring node.
It will be appreciated that the edge computing node may also send the object information of the object that has occurred to the neighboring nodes in other ways, which are not illustrated here.
In implementations, object information of appeared objects sent by adjacent edge computing nodes may be received, and the object information of the objects in the present edge computing node is updated according to the feature information of the appeared objects. Sending the object information of appeared objects to adjacent nodes allows the adjacent nodes to integrate the object information, thereby ensuring the continuity of the object information of each object.
Specifically, the feature information of the appeared object is compared with the feature information of the objects in the present edge computing node. When the comparison result shows that an object identical to the appeared object exists among the objects of the present edge computing node, the object information of that identical object is merged with the object information of the appeared object, and the merged object information is taken as the object information of the appeared object.
When the comparison result shows that no object identical to the appeared object exists among the objects of the present edge computing node, the object information of the appeared object is added to the object information of the objects of the present edge computing node.
In a specific implementation, the edge computing node may store, in an object list manner, objects that appear in the monitoring range of the edge computing node and object information of each object. When the appearing object appears in the object list, the object information of the appearing object is merged into the object information of the edge computing node. When the appearing object is not present in the object list, the appearing object is added to the object list of the edge computing node.
For example, when the edge computing node receives the object information of Zhang San, and Zhang San has appeared within the monitoring range of the edge computing node, that is, Zhang San exists in the object list, the received object information of Zhang San is merged with the object information of Zhang San stored in the edge computing node, and the merged object information is taken as the object information of Zhang San.
For another example, when the edge computing node receives the object information of Zhang San, and Zhang San has not appeared within the monitoring range of the edge computing node, that is, Zhang San is not in the object list, Zhang San is added to the object list and the object information of Zhang San is stored.
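The merge-or-add handling of a neighbor's report described above can be sketched as follows. For brevity the sketch matches features by equality; in practice the comparison would be a similarity check on feature information, and all names are illustrative assumptions.

```python
def update_from_neighbor(object_list, appeared):
    """Merge object information received from an adjacent edge computing
    node: merge into the matching entry when the object is already in
    the local object list, otherwise add a new entry."""
    for obj in object_list:
        if obj["feature"] == appeared["feature"]:  # similarity check in practice
            obj["info"].extend(appeared["info"])
            return object_list
    object_list.append({"feature": appeared["feature"],
                        "info": list(appeared["info"])})
    return object_list
```

A report for an object already in the list extends that entry's information; a report for an unseen object appends a new entry, mirroring the two cases in the examples above.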
In the prior art, each monitoring management department can only view the surveillance video within its own monitoring area and cannot obtain monitoring information from the monitoring management departments of adjacent areas. When tracking a person, for example, when family members search for a missing person, they can only start from the monitoring management department corresponding to the place where the person went missing and check the monitoring management departments one by one, which involves a heavy workload and low efficiency. In the embodiment of the present invention, each monitoring management site is provided with an edge computing node, and by sending the object information of appeared objects to adjacent nodes, the edge computing nodes can establish associations for each object across edge computing nodes, so there is no need to check the edge computing nodes one by one, and the tracking efficiency can be expected to improve.
In a specific implementation, when an appeared object whose information was sent by an adjacent node also appears in the present edge computing node, the object information sent by the adjacent node is merged with the object information obtained by the present edge computing node. As a result, when the present edge computing node and the adjacent edge computing node each report the object information to the central server, an overlapping portion exists between the object information of the appeared object reported by the two nodes. To address this overlap, when the central server integrates the auxiliary information of each object from each edge computing node, in order to improve the completeness and compactness of the integration result and avoid data redundancy, the de-duplication manner provided in this embodiment can be adopted: the de-duplication operation is performed to remove the overlapping portion, which mainly consists of auxiliary information, and the auxiliary information of each object from each edge computing node is integrated according to the time sequence based on the de-duplicated auxiliary information.
In the embodiment of the present invention, the present edge computing node merges, in time sequence, the object information of an object obtained by the present edge computing node with the object information of the same object sent by the adjacent edge computing nodes.
In an implementation, after receiving the object information of an appearing object sent by an adjacent edge computing node, if the object is not detected within the monitoring range of the present edge computing node within a preset time period, the object information is discarded. Failing to detect the object within the preset duration indicates that the object has not entered the monitoring range of the present edge computing node, so discarding its object information reduces data redundancy.
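The timeout-based discard described above might be realized as in the following sketch; the class and method names and the 300-second preset duration are assumptions made for illustration, not values fixed by the embodiment.

```python
import time

class NeighborCache:
    """Holds object information forwarded by adjacent edge nodes until
    the object is observed locally or the preset duration elapses."""

    def __init__(self, timeout=300.0):  # preset duration: assumed value
        self.timeout = timeout
        self._pending = {}  # object_id -> (object_info, received_at)

    def add(self, object_id, object_info, now=None):
        """Record object info sent by an adjacent edge computing node."""
        self._pending[object_id] = (object_info,
                                    time.time() if now is None else now)

    def observed_locally(self, object_id):
        """The object appeared in this node's monitoring range: hand its
        cached info over for merging and stop tracking the deadline."""
        info, _ = self._pending.pop(object_id, (None, None))
        return info

    def purge_expired(self, now=None):
        """Discard info for objects never seen locally within the preset
        time, reducing data redundancy as described above."""
        now = time.time() if now is None else now
        expired = [oid for oid, (_, t) in self._pending.items()
                   if now - t > self.timeout]
        for oid in expired:
            del self._pending[oid]
        return expired
```

For example, object info added at t = 0 and never observed locally is dropped by `purge_expired(now=301.0)`.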
In a specific implementation, after the object information of each object is obtained, the feature information of each object may be compared with the feature information stored in the household registration database, and the identity information of each object determined according to the comparison result; the object information then further includes the identity information.
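The comparison against the household registration database might look like the following sketch; the cosine-similarity measure, the 0.9 threshold, and the database layout are all illustrative assumptions rather than requirements of the embodiment.

```python
def determine_identity(feature_vec, household_db, threshold=0.9):
    """Compare an object's feature vector against the household
    registration database and return the best-matching identity record,
    or None when no entry is similar enough.

    `household_db` maps an ID number to a (stored_feature_vec,
    identity_info) pair.
    """
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(y * y for y in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0

    best_identity, best_score = None, threshold
    for _id_number, (stored_vec, identity) in household_db.items():
        score = cosine(feature_vec, stored_vec)
        if score >= best_score:
            best_identity, best_score = identity, score
    return best_identity
```

A matching feature vector yields the stored identity record; a dissimilar vector yields `None`, leaving the object without identity information.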
The identity information may include an identification number, and may further include one or more of an address, a name, a gender, and the like.
In the embodiment of the invention, the central server can be connected with the household registration system to acquire household registration information, and the identity information of each object is determined, so that the object information can comprise not only the characteristic information of the object but also the identity information. When personnel tracking is performed, even if an image of a tracking object is not available, the tracking of the tracking object can be realized by tracking the identity information such as the identity card number, the name and the like of the object.
In an implementation, a tracking request is received, the tracking request including the characteristic information of a tracked object; the characteristic information of the tracked object is compared with the characteristic information of all stored objects; and the tracked object is found among the stored objects according to the comparison result, whereupon the object information of the tracked object is acquired.
When personnel tracking is carried out, only the image of the person to be tracked needs to be compared with the stored characteristic information of each object. Compared with the prior art, in which the image of the person to be tracked must be compared one by one with the stored hundred-million-level videos or images, the accuracy of the tracking result and the tracking timeliness can be effectively improved.
By adopting the surveillance video processing method provided by the embodiment of the invention, the city-wide hundred-million-level retrieval and comparison processing performed by the central server in the prior art is divided into thousand-level retrieval and comparison at the edge side (the edge computing nodes). For example, where the computational and retrieval complexity at the central server side in the prior art is N², the video monitoring method provided by the embodiment of the invention can reduce that complexity to N·log N, so that the computing and processing timeliness of the central server can be improved.
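As a rough numerical illustration of the cited complexity figures (taking N as a hundred-million-scale object count; the numbers are illustrative order-of-magnitude estimates, not measurements from the embodiment):

```python
import math

N = 100_000_000  # hundred-million-scale object store (assumed figure)

central_cost = N ** 2          # prior-art centralized search: O(N^2)
edge_cost = N * math.log2(N)   # edge-assisted search: O(N log N)

print(f"centralized comparisons:   {central_cost:.2e}")
print(f"edge-assisted comparisons: {edge_cost:.2e}")
print(f"reduction factor:          {central_cost / edge_cost:.1e}")
```

Under these assumptions the comparison count drops by a factor of several million, which is the source of the improved timeliness claimed above.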
In addition, since the computation and processing of object information are performed on the edge computing nodes, the object information is processed in parallel, which can improve the real-time performance and accuracy of object information processing and reduce the dependence on the central server.
In an implementation, when identity information is included in the object information, the tracking request may also include identity information. Thus, even without image information of the tracked object, the tracked object can still be tracked through the monitoring system according to its identity information.
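A tracking lookup supporting either modality, characteristic information or identity information, could be sketched as below; the field names, the `features_match` helper, and its distance tolerance are illustrative assumptions.

```python
def features_match(a, b, tol=0.1):
    """Illustrative matcher: Euclidean distance below a tolerance."""
    if len(a) != len(b):
        return False
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5 < tol

def handle_tracking_request(request, stored_objects):
    """Find a tracked object among stored records.

    `request` may carry 'features', 'identity', or both; each stored
    record is a dict with 'features', 'identity' and 'auxiliary' keys.
    """
    for obj in stored_objects:
        # Identity match works even when no image of the target exists.
        if "identity" in request and obj.get("identity") == request["identity"]:
            return obj
        if "features" in request and features_match(request["features"],
                                                    obj.get("features", [])):
            return obj
    return None

store = [{"features": [0.9, 0.1], "identity": "id_007",
          "auxiliary": [("2024-01-01T09:00", "gate_1")]}]
```

For example, a request carrying only `{"identity": "id_007"}` and a request carrying only a close-enough feature vector both locate the same stored record.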
In order to facilitate better understanding and implementation of the embodiments of the present invention by those skilled in the art, the embodiment of the present invention further provides a monitoring video processing apparatus.
Referring to fig. 3, which is a schematic structural diagram of a surveillance video processing apparatus in an embodiment of the present invention, a surveillance video processing apparatus 30 may include:
the receiving unit 31 is configured to receive object information of objects within respective monitoring ranges sent by respective edge computing nodes, where the object information includes: characteristic information and auxiliary information;
the classifying unit 32 is configured to classify all the objects according to the feature information of each object, so as to obtain the auxiliary information of each object from each edge computing node;
The processing unit 33 is configured to integrate the auxiliary information of each object from the edge computing nodes, and use the integrated auxiliary information as the auxiliary information of each object;
the storage unit 34 is configured to store the object information of each object.
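Under two simplifying assumptions of this sketch (auxiliary records are sortable (timestamp, location) pairs, and grouping by exact feature equality stands in for the similarity-based classification an actual implementation would use), the four units might cooperate as follows:

```python
class SurveillanceVideoProcessor:
    """Sketch of apparatus 30; the unit numbers mirror Fig. 3, while the
    method bodies are illustrative simplifications."""

    def __init__(self):
        self.store = {}  # storage unit 34: object key -> object info

    def receive(self, node_reports):
        """Receiving unit 31: flatten the per-node object-info reports."""
        return [info for reports in node_reports.values() for info in reports]

    def classify(self, infos):
        """Classifying unit 32: group the reports belonging to one object."""
        grouped = {}
        for info in infos:
            grouped.setdefault(tuple(info["features"]), []).append(info["auxiliary"])
        return grouped

    def integrate_and_store(self, grouped):
        """Processing unit 33 and storage unit 34: de-duplicate, order in
        time sequence, and persist each object's record."""
        for features, aux_lists in grouped.items():
            merged = sorted({rec for aux in aux_lists for rec in aux})
            self.store[features] = {"features": list(features),
                                    "auxiliary": merged}
```

Feeding in two nodes' reports for the same object yields a single stored record whose auxiliary information is de-duplicated and time-ordered.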
In a specific implementation, the specific working principle and workflow of the surveillance video processing apparatus may refer to the description in the surveillance video processing method provided in the foregoing embodiment of the present invention, and will not be repeated herein.
The embodiment of the invention further provides a monitoring system, including a central server, edge computing nodes and cameras, wherein:
a plurality of cameras are disposed within the monitoring range of each edge computing node; the edge computing node obtains the object information of objects within its monitoring range according to the surveillance videos reported by the cameras and reports the object information to the central server, the object information including: characteristic information and auxiliary information;
the central server is used for classifying all the objects according to the characteristic information of each object to obtain the auxiliary information of each object from each edge computing node, integrating the auxiliary information of each object from each edge computing node, taking the integrated auxiliary information as the auxiliary information of each object, and storing the object information of each object.
In an implementation, the monitoring system may be built based on edge computing in the context of the fifth generation mobile communication system (5G). The edge computing nodes may employ edge servers, so as to improve the data computing efficiency of the edge computing nodes and the central server, the data transmission and response efficiency between the central server and the edge computing nodes, and the like.
In specific implementation, the specific working principle and workflow of the monitoring system may refer to the description in the monitoring video processing method provided in the foregoing embodiment of the present invention, and will not be repeated here.
The embodiment of the invention further provides a computer readable storage medium, which is a non-volatile storage medium or a non-transitory storage medium and on which a computer program is stored, the computer program, when executed by a processor, performing the steps of the surveillance video processing method of any of the foregoing embodiments.
The embodiment of the invention further provides a terminal, including a memory and a processor, the memory storing a computer program capable of running on the processor, and the processor performing the steps of the surveillance video processing method of any of the foregoing embodiments when running the computer program.
Those of ordinary skill in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing related hardware; the program may be stored in any computer readable storage medium, and the storage medium may include a ROM, a RAM, a magnetic disk, an optical disk, or the like.
Although the present invention is disclosed above, the present invention is not limited thereto. Various changes and modifications may be made by those skilled in the art without departing from the spirit and scope of the invention, and the scope of the invention should therefore be defined by the appended claims.