CN111787280A - Real-time video target tracking method and device based on edge computing - Google Patents


Info

Publication number
CN111787280A
Authority
CN
China
Prior art keywords
frame
calibrated
target
edge
tracking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010621736.6A
Other languages
Chinese (zh)
Inventor
杨铮
赵毅
贺骁武
Current Assignee
Tsinghua University
Original Assignee
Tsinghua University
Priority date
Filing date
Publication date
Application filed by Tsinghua University
Priority to CN202010621736.6A
Publication of CN111787280A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/40: Scenes; Scene-specific elements in video content

Abstract

An embodiment of the invention provides a real-time video target tracking method and device based on edge computing. The method includes: selecting a frame to be calibrated from the captured video frames according to a preset rule and sending it to the edge, so that the edge performs target recognition on the frame to be calibrated after receiving it; and, if a recognition result for the target position in the frame to be calibrated is received from the edge, determining the target position in the current frame from that result and tracking the target based on the most recently determined current-frame target position. Because frames to be calibrated are selected from the captured video and sent to a nearby edge device for recognition, the latency problem of cloud-based calibration is avoided.

Description

Real-time video target tracking method and device based on edge computing
Technical Field
The invention relates to the field of edge computing, and in particular to a real-time video target tracking method and device based on edge computing.
Background
Real-time video analysis on the mobile terminal refers to analyzing, in real time, video captured by the camera of a mobile device such as a phone or an augmented-reality head-mounted display, for example target detection and tracking or semantic segmentation. It is a key enabling technology for applications such as smart homes, augmented reality, and intelligent surveillance.
These tasks usually have strict real-time requirements; high latency seriously degrades the user experience and can even create safety hazards. Because the computing power and memory of mobile devices are limited, deep-learning-based target detection algorithms are difficult to run on them directly in real time. And because of the long physical distance between the cloud and the device, cloud-device communication has high latency, so offloading the model computation entirely to the cloud also struggles to meet applications' real-time requirements.
Disclosure of Invention
To solve the above problems, embodiments of the present invention provide a real-time video target tracking method and device based on edge computing.
In a first aspect, an embodiment of the present invention provides a real-time video target tracking method based on edge computing, including: continuously selecting frames to be calibrated from the captured video frames according to a preset rule and sending them to the edge, so that the edge performs target recognition on each frame to be calibrated after receiving it; and, if a recognition result for the target position in a frame to be calibrated is received from the edge, determining the target position in the current frame from that result and tracking the target based on the most recently determined current-frame target position.
Further, after a frame to be calibrated is selected from the captured video frames according to the preset rule and sent to the edge, the method further includes: storing the video frames from the frame to be calibrated up to the current frame. Correspondingly, determining the target position in the current frame from the recognition result includes: tracking the target position frame by frame, from the frame to be calibrated up to the current frame, according to the stored frames and the received recognition result.
Further, after receiving the frame to be calibrated, the edge also sends it to a cloud server for target recognition. Correspondingly: if the edge has not yet received the cloud server's calibration result after forwarding the frame, the recognition result is produced by the edge's local target tracking; if the edge has received the cloud server's calibration result for the frame, the recognition result is produced after the edge recalibrates according to the cloud result.
In a second aspect, an embodiment of the present invention provides a real-time video target tracking method based on edge computing, including: if a frame to be calibrated sent by a mobile terminal is received, performing target recognition on it to obtain a recognition result for the frame; and sending the locally computed recognition result to the mobile terminal, so that the mobile terminal determines the target position in the current frame from the result and tracks the target based on the most recently determined current-frame target position.
Further, after receiving the frame to be calibrated from the mobile terminal, the method further includes: sending the frame to a cloud server and, if the cloud server returns a calibration result for it, updating the local target tracking result accordingly.
Further, sending the locally computed recognition result to the mobile terminal includes: sending the target tracking result for the frame to be calibrated together with the feature points used for target tracking.
In a third aspect, an embodiment of the present invention provides a real-time video target tracking device based on edge computing, including: an acquisition module for continuously selecting frames to be calibrated from the captured video frames according to a preset rule and sending them to the edge, so that the edge performs target recognition on each frame after receiving it; and a processing module for, if a recognition result for the target position in a frame to be calibrated is received from the edge, determining the target position in the current frame from that result and tracking the target based on the most recently determined current-frame target position.
In a fourth aspect, an embodiment of the present invention provides a real-time video target tracking device based on edge computing, including: a calibration module for, if a frame to be calibrated sent by the mobile terminal is received, performing target recognition on it to obtain a recognition result; and a sending module for sending the locally computed recognition result to the mobile terminal, so that the mobile terminal determines the target position in the current frame from the result and tracks the target based on the most recently determined current-frame target position.
In a fifth aspect, an embodiment of the present invention provides an electronic device including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the program, implements the steps of the edge-computing-based real-time video target tracking method of the first or second aspect.
In a sixth aspect, the present invention provides a non-transitory computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the edge-computing-based real-time video target tracking method of the first or second aspect.
With the edge-computing-based real-time video target tracking method and device of these embodiments, frames to be calibrated are selected from the captured video and sent to the edge, so that the edge can perform target recognition on them after receiving them.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below show some embodiments of the invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a real-time video target tracking method based on edge computing according to an embodiment of the present invention;
Fig. 2 is a flowchart of mobile-terminal processing according to an embodiment of the present invention;
Fig. 3 is a flowchart of edge processing according to an embodiment of the present invention;
Fig. 4 is a block diagram of a real-time video target tracking device based on edge computing according to an embodiment of the present invention;
Fig. 5 is a block diagram of a real-time video target tracking device based on edge computing according to another embodiment of the present invention;
Fig. 6 is a schematic diagram of the physical structure of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In recent years, to address the high communication latency between the cloud and devices, edge computing has gradually emerged as a new computing paradigm. Edge computing aims to provide low-latency, low-energy computing services directly, using computing resources closer to the device, and to reduce applications' dependence on the public network. Ideally, an edge computing device can run the deep learning models an application needs in real time and thus provide a stable low-latency service. However, high-end deep learning servers and edge clusters are expensive, suitable only for enterprises and production environments, and burdensome for individual and home users.
In fact, personal and home settings contain many weak edge computing devices of diverse architectures, such as laptops and desktops, and inexpensive dedicated edge computing nodes are also available. Although these devices are not powerful enough to carry an entire real-time video analysis task on their own, their computing resources still far exceed those of common mobile devices.
To overcome the above shortcomings of the prior art, the invention uses weak edge computing devices readily available in home settings to build a real-time video analysis system for home mobile terminals based on cloud-edge-mobile cooperation. The embodiments first describe the method with the mobile terminal as the executing entity. Fig. 1 is a flowchart of a real-time video target tracking method based on edge computing according to an embodiment of the present invention; as shown in Fig. 1, the method includes:
101. Continuously select frames to be calibrated from the captured video frames according to a preset rule and send them to the edge, so that the edge performs target recognition on each frame to be calibrated after receiving it. The mobile terminal is responsible for capturing video frames. Because its computing power is insufficient to run a target detection algorithm directly, it obtains the target's initial position from the target box returned by the edge after recognition, and then tracks locally in real time. That is, when the system starts, the mobile terminal waits while no target position has yet been returned by the edge, and begins tracking once the edge returns one.
The mobile terminal may also produce a preliminary local estimate of the target position and use it to start local tracking. In either case, after the recognition result for the first frame to be calibrated is obtained, the mobile device must run lightweight real-time tracking on every frame of the video stream, so that it can smoothly provide an accurate target position before the edge returns its next result. The mobile terminal communicates with the edge, continuously selecting certain captured frames according to the preset rule and sending them to the edge, which generates recognition results for the target position in each frame to be calibrated; this continuously calibrates the mobile terminal's tracking. In the embodiments, the edge comprises computing devices near the mobile terminal that can run a target tracking program, including laptops, desktop hosts, and dedicated edge computing devices deployed to cooperate with the mobile terminal in target tracking. If edge devices are deployed at base stations, the mobile terminal sends frames to be calibrated to the edge device at the base station covering its location.
Regarding how frames to be calibrated are selected: every frame could in principle be sent, but that clearly increases the load on the edge, and since the edge's tracking latency and the mobile terminal's transmission latency are not negligible, sending every frame is not optimal. Alternatively, one frame can be sent every preset number of frames, with the interval chosen by jointly considering the edge's processing latency and the mobile terminal's transmission latency.
For example, selecting and sending frames to be calibrated may work as follows: whenever a new video frame is captured, it is placed in a send queue; each time, only the newest frame in the queue is sent to the edge, and older frames are discarded. Similarly, when a calibration result arrives from the edge, it is placed in a receive queue; each time, only the newest calibration result is used for tracking, and outdated results are discarded.
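The keep-latest queue discipline described above can be sketched as follows. This is a minimal illustration; the class and method names are hypothetical, not taken from the patent.

```python
from collections import deque

class LatestOnlyQueue:
    """Single-slot queue: putting a new item silently discards the old one,
    matching the rule of always sending/consuming only the newest frame or
    calibration result and dropping stale ones."""

    def __init__(self):
        self._q = deque(maxlen=1)  # older entries are evicted automatically

    def put(self, item):
        self._q.append(item)

    def get_latest(self):
        return self._q[-1] if self._q else None

# Example: frames 1..5 arrive before the sender gets a chance to run;
# only frame 5 would actually be sent to the edge.
send_queue = LatestOnlyQueue()
for frame_id in range(1, 6):
    send_queue.put(frame_id)
```

The same structure serves as the receive queue for calibration results on the mobile side.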
After receiving a frame to be calibrated, the edge performs target recognition on it. Because the edge can compute with higher accuracy than the mobile terminal, its recognition result is sent to the mobile terminal as a calibration result.
102. If a recognition result for the target position in the frame to be calibrated is received from the edge, determine the target position in the current frame from that result and track the target based on the most recently determined current-frame target position.
If a recognition result concerning the target position in the frame to be calibrated is received from the edge, for example the bounding box of the target to be tracked in that frame, calibration is performed according to the result. Because of the edge's processing and transmission latency, the frame the mobile terminal is capturing at that moment lies several frames after the frame to be calibrated. The latest current frame must therefore be updated according to the recognition result and the corresponding frame to be calibrated, the target position calibrated, and tracking continued from the newly determined current-frame target position. After that position is obtained, and before the result for the next frame to be calibrated arrives, the mobile device runs lightweight real-time tracking on every frame starting from the current frame, so that it smoothly provides an accurate target position until the edge returns its next result.
With the method of this embodiment, frames to be calibrated are selected from the captured video and sent to the edge for target recognition; because the channel latency between the edge and the mobile terminal is low, the latency problem of cloud-based calibration is avoided. On top of the existing cloud calibration scheme, when the cloud's calibration result does arrive, calibration is additionally performed based on the cloud result.
Based on the foregoing, as an optional embodiment, after a frame to be calibrated is selected from the captured video frames according to the preset rule and sent to the edge, the method further includes: storing the video frames from the frame to be calibrated up to the current frame. Correspondingly, determining the target position in the current frame from the recognition result includes: tracking the target position frame by frame, from the frame to be calibrated up to the current frame, according to the stored frames and the received recognition result.
The point is that by the time the calibration result arrives, the current video frame has advanced by several frames. After sending a frame to be calibrated to the edge, the mobile terminal therefore caches all subsequent frames. When the edge returns its result, the mobile terminal first tracks that result forward, frame by frame through the cached frames in the background, up to the current frame, and then calibrates the current target box.
Fig. 2 is a flowchart of mobile-terminal processing according to an embodiment of the present invention. As shown in Fig. 2, the mobile terminal comprises two real-time optical-flow trackers and a communication module connected to the edge.
Whenever the camera captures a new video frame, optical-flow tracker 1 updates the stored target box from the previous frame to the current frame. To guarantee real-time operation on the mobile terminal, tracker 1 uses a fast optical-flow-based tracking algorithm: it selects several keypoints inside each target box, tracks them with an optical-flow method, and updates the target box from the keypoints' new positions.
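The box-update step of such a tracker might look like the sketch below. The patent does not specify the update rule; this sketch assumes the box is shifted by the median keypoint displacement, a common robust choice. In a real system the old-to-new keypoint correspondences would come from an optical-flow call such as OpenCV's `calcOpticalFlowPyrLK`; here they are plain input data, and all names are hypothetical.

```python
from statistics import median

def update_box(box, old_pts, new_pts):
    """Shift a target box (x, y, w, h) by the median displacement of its
    tracked keypoints. The median makes the update robust to a few
    badly tracked points. Keypoints are (x, y) tuples."""
    dx = median(nx - ox for (ox, _), (nx, _) in zip(old_pts, new_pts))
    dy = median(ny - oy for (_, oy), (_, ny) in zip(old_pts, new_pts))
    x, y, w, h = box
    return (x + dx, y + dy, w, h)

# All keypoints moved right by 3 px and down by 1 px:
old = [(10, 10), (20, 15), (30, 12)]
new = [(13, 11), (23, 16), (33, 13)]
shifted = update_box((5, 5, 40, 30), old, new)
```

A production tracker would also handle scale change and drop keypoints whose flow estimate failed.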
Optical-flow tracker 2 handles the results returned by the edge. Because of communication latency, a returned result belongs to a frame several frames before the current one; tracker 2 therefore tracks the target box forward to the current frame using the previously cached intermediate frames, and then corrects the target box displayed by the mobile terminal. To avoid interfering with real-time display, trackers 1 and 2 use the same algorithm but run in separate threads.
Concretely, suppose the mobile terminal uploads frame i to the edge. Because of communication and computation latency, by the time the edge returns the result for frame i, the display may have reached frame i+5. A background tracker thread on the mobile terminal then tracks the result from frame i through frames i+1, i+2, ..., i+5. Since background tracking itself takes time, the video may already be at frame i+6 or even i+7 by the time the background tracker reaches i+5, so it keeps tracking frame by frame until it catches up with the frame currently being displayed. Meanwhile, the front-end real-time tracker keeps tracking from the previous frame to the current frame using the optical-flow method. If and only if the background tracker catches up with the current frame, the mobile terminal corrects and updates the current frame's target box to the background tracking result; the front-end tracker then continues tracking in real time from the corrected result, and the background tracker repeats the process when the edge returns its next result.
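The background catch-up loop described above can be sketched as follows. The function and variable names are hypothetical, and the per-frame tracking step and the live frame counter are stubbed out so the control flow (keep tracking while the display keeps advancing) is visible.

```python
def catch_up(result, start_idx, buffered_frames, current_idx_fn, step):
    """Propagate an edge result for frame `start_idx` forward through the
    buffered frames until it reaches the frame currently on screen.
    `current_idx_fn` re-reads the live frame index on every iteration,
    because the display keeps advancing while we track; `step(result,
    frame)` stands in for one frame-to-frame optical-flow update."""
    idx = start_idx
    while idx < current_idx_fn():
        idx += 1
        result = step(result, buffered_frames[idx])
    return result, idx

# Simulation: the edge returns a box for frame 0 while the display is at
# frame 5, and the display advances one frame for every two catch-up steps
# until it reaches frame 7.
frames = list(range(10))
clock = {"now": 5, "ticks": 0}

def live_index():
    clock["ticks"] += 1
    if clock["ticks"] % 2 == 0 and clock["now"] < 7:
        clock["now"] += 1
    return clock["now"]

def track_step(box, frame):
    return box  # a real tracker would update the box here

final_box, caught_up_at = catch_up("box@0", 0, frames, live_index, track_step)
```

Only when `catch_up` returns does the mobile terminal swap the displayed target box for the background result, matching the "if and only if the background tracker catches up" rule above.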
Tracking the target position frame by frame from the frame to be calibrated, using the stored frames and the received recognition result, improves the accuracy of the current-frame target position.
Based on the foregoing, as an optional embodiment, after receiving the frame to be calibrated, the edge also sends it to a cloud server for target recognition. Correspondingly: if the edge has not yet received the cloud server's calibration result after forwarding the frame, the recognition result is produced by the edge's local target tracking; if the edge has received the cloud server's calibration result for the frame, the recognition result is produced after the edge recalibrates according to the cloud result.
In the embodiments, the edge splits the long cloud-to-mobile latency into two parts: edge-to-mobile latency and cloud-to-edge latency. The mobile terminal only faces the short delay between uploading a video frame and the edge returning a result, while the edge is responsible for keeping the result steadily updated over the long period before the cloud responds. Fig. 3 is a flowchart of edge processing according to an embodiment of the present invention; it mainly comprises an edge tracker module and communication modules toward the cloud and the mobile terminal.
When the mobile terminal uploads a new video frame, the edge tracker updates the target box to that frame and returns the corresponding calibration result to the mobile terminal. When the cloud returns a video target detection result, the edge reinitializes (recalibrates) its tracker with it, so that newly uploaded frames are processed directly with the latest cloud result. Unlike the mobile terminal, which can only run simple optical-flow tracking, the edge can choose among many more advanced target tracking algorithms; the specific algorithm is determined by the edge tracker's selection logic.
In terms of communication, the edge has three responsibilities. First, it returns the latest tracking result and keypoints to the mobile terminal. Second, it receives detection results from the cloud and recalibrates the tracker. Third, it forwards video frames from the mobile terminal to the cloud for target detection in an independent thread, so that the tracker's processing speed never delays uploading frames to the cloud.
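The third responsibility, forwarding frames to the cloud on an independent thread so a slow tracker never blocks uploads, can be sketched as below. All names are hypothetical and the actual cloud transport is replaced by appending to a list.

```python
import queue
import threading

upload_q = queue.Queue()
uploaded = []

def cloud_uploader():
    """Runs in its own thread: forwards frames to the cloud no matter how
    long the edge tracker spends on each frame."""
    while True:
        frame = upload_q.get()
        if frame is None:          # sentinel: shut down the thread
            break
        uploaded.append(frame)     # stand-in for an RPC/HTTP upload

t = threading.Thread(target=cloud_uploader, daemon=True)
t.start()

def edge_handle_frame(frame):
    upload_q.put(frame)            # hand off to the uploader immediately
    # ... run the (possibly slow) edge tracker on `frame` here ...

for f in ["f1", "f2", "f3"]:
    edge_handle_frame(f)
upload_q.put(None)
t.join()
```

Because `edge_handle_frame` only enqueues the frame before running the tracker, upload latency is decoupled from tracker latency, which is exactly the property the paragraph above requires.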
With this method, if the edge has not received the cloud server's calibration result after forwarding a frame, it returns a result produced by its own local tracking, exploiting the edge's responsiveness while the cloud result is pending. When the cloud server's calibration result arrives, the edge recalibrates with it before returning results, exploiting the cloud's accuracy. Real-time target tracking on the mobile terminal is thus significantly improved while accuracy is preserved.
The embodiments are further described with the edge as the executing entity. An embodiment of the present invention provides a real-time video target tracking method based on edge computing, including: if a frame to be calibrated sent by the mobile terminal is received, performing target recognition on it to obtain a recognition result; and sending the computed recognition result to the mobile terminal, so that the mobile terminal determines the target position in the current frame from the result and tracks the target based on the most recently determined current-frame target position.
Based on the foregoing, as an optional embodiment, after receiving a frame to be calibrated from the mobile terminal, the method further includes: sending the frame to the cloud server and, if the cloud server returns a calibration result for it, updating the local target tracking result accordingly.
For the embodiments with the edge as the executing entity, refer to the above embodiments with the mobile terminal as the executing entity; the details are not repeated here.
Based on the foregoing, as an optional embodiment, sending the locally computed recognition result to the mobile terminal includes: sending the target tracking result for the frame to be calibrated together with the feature points used for target tracking.
In the embodiments, the feature points of the target box are also sent to the mobile terminal, which can then track the target directly from the received feature points. This saves the cost of recomputing feature points and further speeds up computation on the mobile terminal.
Fig. 4 is a block diagram of a real-time video target tracking device based on edge computing according to an embodiment of the present invention. As shown in Fig. 4, the device includes an acquisition module 401 and a processing module 402. The acquisition module 401 continuously selects frames to be calibrated from the captured video frames according to a preset rule and sends them to the edge, so that the edge performs target recognition on each frame after receiving it. The processing module 402, if a recognition result for the target position in a frame to be calibrated is received from the edge, determines the target position in the current frame from that result and tracks the target based on the most recently determined current-frame target position.
Fig. 5 is a block diagram of a real-time video target tracking device based on edge computing according to another embodiment of the present invention. As shown in Fig. 5, the device includes a calibration module 501 and a sending module 502. The calibration module 501, if a frame to be calibrated sent by the mobile terminal is received, performs target recognition on it to obtain a recognition result. The sending module 502 sends the computed recognition result to the mobile terminal, so that the mobile terminal determines the target position in the current frame from the result and tracks the target based on the most recently determined current-frame target position.
The device embodiments implement the above method embodiments; for the detailed flow, refer to the method embodiments, which are not repeated here.
The real-time video target tracking device based on edge computing provided by the embodiment of the present invention selects frames to be calibrated from the captured video frames and sends them to the edge end, so that the edge end performs target recognition on each frame after receiving it. This avoids the delay of the existing cloud-based calibration approach, in which the mobile terminal must wait to receive the cloud's calibration result before it can calibrate.
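The latency argument can be made concrete with back-of-the-envelope arithmetic. The round-trip times below are hypothetical (20 ms to a nearby edge node, 200 ms to a cloud server; the patent gives no figures), at a 30 fps capture rate:

```python
FPS = 30  # assumed capture rate


def frames_in_flight(rtt_ms, fps=FPS):
    """Frames captured while a calibration round trip is in flight."""
    return rtt_ms * fps // 1000


edge_rtt_ms = 20    # hypothetical LAN round trip to a nearby edge end
cloud_rtt_ms = 200  # hypothetical WAN round trip to a cloud server

print(frames_in_flight(edge_rtt_ms))   # -> 0 frames stale via the edge path
print(frames_in_flight(cloud_rtt_ms))  # -> 6 frames stale via the cloud path
```

Under these assumed numbers, an edge result typically arrives before the next frame is even captured, while a cloud result is already several frames stale, which is why catch-up tracking (claim 2) matters more in the cloud path.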
Fig. 6 is a schematic diagram of the physical structure of an electronic device according to an embodiment of the present invention. As shown in Fig. 6, the electronic device may include a processor 601, a communication interface 602, a memory 603, and a bus 604, where the processor 601, the communication interface 602, and the memory 603 communicate with one another over the bus 604. The communication interface 602 may be used for information transfer by the electronic device. The processor 601 may invoke logic instructions in the memory 603 to perform a method comprising: continuously selecting frames to be calibrated from the captured video frames according to a preset rule and sending them to the edge end, so that the edge end performs target recognition on each frame to be calibrated after receiving it; and, upon receiving from the edge end a recognition result for the target position of a frame to be calibrated, determining the target position of the current frame according to the recognition result and performing target tracking based on the most recently determined target position of the current frame.
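Claim 2 below makes the "determine the target position of the current frame" step concrete: the video frames from the frame to be calibrated up to the current frame are stored, and once the edge's recognition result arrives it is propagated frame by frame to catch up. A minimal sketch of that catch-up step, with hypothetical names and each buffered frame reduced to a single displacement vector:

```python
from collections import deque


class CatchUpTracker:
    def __init__(self):
        # (frame_index, motion) pairs buffered since the calibrated frame.
        self.buffer = deque()
        self.calibrated_index = None

    def start_calibration(self, frame_index):
        # A frame was just sent to the edge end; start buffering from here.
        self.buffer.clear()
        self.calibrated_index = frame_index

    def on_frame(self, frame_index, motion):
        # Store the per-frame motion observed while the result is in flight.
        self.buffer.append((frame_index, motion))

    def on_result(self, position):
        # Replay the buffered motion to bring the calibrated-frame position
        # up to the current frame.
        x, y = position
        for _, (dx, dy) in self.buffer:
            x, y = x + dx, y + dy
        return (x, y)
```
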
In addition, the logic instructions in the memory 603 may be implemented as software functional units and, when sold or used as an independent product, stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied as a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the above method embodiments. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
In another aspect, an embodiment of the present invention further provides a non-transitory computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program performs the tracking method provided in the foregoing embodiments, which includes, for example: continuously selecting frames to be calibrated from the captured video frames according to a preset rule and sending them to the edge end, so that the edge end performs target recognition on each frame to be calibrated after receiving it; and, upon receiving from the edge end a recognition result for the target position of a frame to be calibrated, determining the target position of the current frame according to the recognition result and performing target tracking based on the most recently determined target position of the current frame.
The above-described device embodiments are merely illustrative. Units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed across multiple network nodes. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement this without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general-purpose hardware platform, or alternatively entirely in hardware. Based on this understanding, the above technical solutions may be embodied as a software product stored in a computer-readable storage medium, such as a ROM/RAM, magnetic disk, or optical disk, and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute the methods of the various embodiments or parts thereof.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solution of the present invention, not to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described therein may still be modified, or some technical features may be equivalently replaced, without departing from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A real-time video target tracking method based on edge computing, characterized by comprising the following steps:
continuously selecting frames to be calibrated from the captured video frames according to a preset rule and sending them to an edge end, so that the edge end performs target recognition on each frame to be calibrated after receiving it; and
if a recognition result for the target position of the frame to be calibrated is received from the edge end, determining the target position of the current frame according to the recognition result, and performing target tracking based on the most recently determined target position of the current frame.
2. The edge-computing-based real-time video target tracking method according to claim 1, characterized in that, after selecting the frame to be calibrated from the captured video frames according to the preset rule and sending it to the edge end, the method further comprises:
storing the video frames from the frame to be calibrated up to the current frame;
correspondingly, determining the target position of the current frame according to the recognition result comprises:
tracking the target position frame by frame, from the frame to be calibrated to the current frame, according to the stored video frames and the received recognition result.
3. The edge-computing-based real-time video target tracking method according to claim 1, characterized in that, after receiving the frame to be calibrated, the edge end further sends the frame to be calibrated to a cloud server for target recognition; correspondingly:
if the edge end has not received a calibration result from the cloud server after sending the frame to be calibrated, the recognition result is sent after the edge end performs local target tracking; if the edge end has received the cloud server's calibration result for the frame to be calibrated, the recognition result is sent after the edge end calibrates according to that calibration result.
4. A real-time video target tracking method based on edge computing, characterized by comprising the following steps:
if a frame to be calibrated sent by a mobile terminal is received, performing target recognition on the frame to be calibrated to obtain a recognition result of the frame to be calibrated; and
sending the locally computed recognition result to the mobile terminal, so that the mobile terminal determines the target position of the current frame according to the recognition result and performs target tracking based on the most recently determined target position of the current frame.
5. The edge-computing-based real-time video target tracking method according to claim 4, characterized in that, after receiving the frame to be calibrated sent by the mobile terminal, the method further comprises:
sending the frame to be calibrated to a cloud server and, if a calibration result for the frame to be calibrated returned by the cloud server is received, updating the local target tracking result according to the received calibration result.
6. The edge-computing-based real-time video target tracking method according to claim 4, characterized in that sending the locally computed recognition result to the mobile terminal comprises:
sending the target tracking result of the frame to be calibrated, together with the feature points used for target tracking, to the mobile terminal.
7. A real-time video target tracking device based on edge computing, characterized by comprising:
an acquisition module, configured to continuously select frames to be calibrated from the captured video frames according to a preset rule and send them to an edge end, so that the edge end performs target recognition on each frame to be calibrated after receiving it; and
a processing module, configured to, if a recognition result for the target position of the frame to be calibrated is received from the edge end, determine the target position of the current frame according to the recognition result and perform target tracking based on the most recently determined target position of the current frame.
8. A real-time video target tracking device based on edge computing, characterized by comprising:
a calibration module, configured to, if a frame to be calibrated sent by a mobile terminal is received, perform target recognition on the frame to be calibrated to obtain a recognition result of the frame to be calibrated; and
a sending module, configured to send the locally computed recognition result to the mobile terminal, so that the mobile terminal determines the target position of the current frame according to the recognition result and performs target tracking based on the most recently determined target position of the current frame.
9. An electronic device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the program, performs the steps of the edge-computing-based real-time video target tracking method according to any one of claims 1 to 6.
10. A non-transitory computer-readable storage medium having a computer program stored thereon, characterized in that the computer program, when executed by a processor, implements the steps of the edge-computing-based real-time video target tracking method according to any one of claims 1 to 6.
CN202010621736.6A 2020-06-30 2020-06-30 Video real-time target tracking method and device based on edge calculation Pending CN111787280A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010621736.6A CN111787280A (en) 2020-06-30 2020-06-30 Video real-time target tracking method and device based on edge calculation

Publications (1)

Publication Number Publication Date
CN111787280A true CN111787280A (en) 2020-10-16

Family

ID=72761629

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010621736.6A Pending CN111787280A (en) 2020-06-30 2020-06-30 Video real-time target tracking method and device based on edge calculation

Country Status (1)

Country Link
CN (1) CN111787280A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112287803A * 2020-10-26 2021-01-29 清华大学 Edge cooperative target detection method and device based on RoI coding
CN114241002A * 2021-12-14 2022-03-25 中国电信股份有限公司 Target tracking method, system, device and medium based on cloud edge cooperation
CN114241002B * 2021-12-14 2024-02-02 中国电信股份有限公司 Target tracking method, system, equipment and medium based on cloud edge cooperation
CN114154018A * 2022-02-08 2022-03-08 中国电子科技集团公司第二十八研究所 Cloud-edge collaborative video stream processing method and system for unmanned system
CN114154018B * 2022-02-08 2022-05-10 中国电子科技集团公司第二十八研究所 Cloud-edge collaborative video stream processing method and system for unmanned system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104346811A (en) * 2014-09-30 2015-02-11 深圳市华尊科技有限公司 Video-image-based target real-time tracking method and device
CN109919033A (en) * 2019-01-31 2019-06-21 中山大学 A kind of adaptive Yingcheng City looking-for-person method based on edge calculations
US20190236793A1 (en) * 2018-01-26 2019-08-01 SagaDigits Limited Visual and geolocation analytic system and method
CN110795595A (en) * 2019-09-10 2020-02-14 安徽南瑞继远电网技术有限公司 Video structured storage method, device, equipment and medium based on edge calculation
CN110996058A (en) * 2019-12-03 2020-04-10 中国电子科技集团公司第五十四研究所 Intelligent monitoring system based on edge calculation
CN111212264A (en) * 2019-12-27 2020-05-29 中移(杭州)信息技术有限公司 Image processing method and device based on edge calculation and storage medium

Similar Documents

Publication Publication Date Title
US20210201147A1 (en) Model training method, machine translation method, computer device, and storage medium
US10970854B2 (en) Visual target tracking method and apparatus based on deep adversarial training
CN110310326B (en) Visual positioning data processing method and device, terminal and computer readable storage medium
CN108830235B (en) Method and apparatus for generating information
CN111787280A (en) Video real-time target tracking method and device based on edge calculation
CN109829432B (en) Method and apparatus for generating information
CN110059623B (en) Method and apparatus for generating information
JP7273129B2 (en) Lane detection method, device, electronic device, storage medium and vehicle
EP3985610A1 (en) Audio collection device positioning method and apparatus, and speaker recognition method and system
US11270126B2 (en) Person tracking method, device, electronic device, and computer readable medium
CN111862987B (en) Speech recognition method and device
CN113436100B (en) Method, apparatus, device, medium, and article for repairing video
CN111784776A (en) Visual positioning method and device, computer readable medium and electronic equipment
CN110097004B (en) Facial expression recognition method and device
CN103871073A (en) Target tracking method, equipment and system based on augmented reality
CN112994980B (en) Time delay test method, device, electronic equipment and storage medium
CN109034085B (en) Method and apparatus for generating information
CN111027495A (en) Method and device for detecting key points of human body
US20220351495A1 (en) Method for matching image feature point, electronic device and storage medium
CN113784217A (en) Video playing method, device, equipment and storage medium
CN111768443A (en) Image processing method and device based on mobile camera
CN114443900A (en) Video annotation method, client, server and system
CN113361519A (en) Target processing method, training method of target processing model and device thereof
CN110087145B (en) Method and apparatus for processing video
CN113591709B (en) Motion recognition method, apparatus, device, medium, and product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201016