CN117765031B - Image multi-target pre-tracking method and system for edge intelligent equipment - Google Patents


Info

Publication number
CN117765031B
CN117765031B (application CN202410191085.XA)
Authority
CN
China
Prior art keywords
target
tracking
candidate
locked
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410191085.XA
Other languages
Chinese (zh)
Other versions
CN117765031A (en)
Inventor
田虎
包灵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Angxin Technology Co ltd
Original Assignee
Sichuan Angxin Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Angxin Technology Co ltd filed Critical Sichuan Angxin Technology Co ltd
Priority to CN202410191085.XA
Publication of CN117765031A
Application granted
Publication of CN117765031B

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an image multi-target pre-tracking method and system for an edge intelligent device, belonging to the technical fields of edge computing and artificial intelligence. The method comprises the following steps. S1: acquire a photoelectric image. S2: process the photoelectric image with a multi-target detection model. S3: obtain tracking target data using a multi-target association tracking method. S4: the history tracking management module stores the tracking target data; whenever the latest tracking target data is input into the history tracking management module, a target ID to be locked or candidate region coordinate data may also be input into the module. When the input is a target ID to be locked, the ID locking current frame target strategy is executed to lock the current-frame target; when the input is candidate region coordinate data, the candidate region locking current frame target strategy is executed to query current-frame candidate targets. The method tracks few objects, requires little data to be transmitted wirelessly, and can lock a target without multiple rounds of wireless communication between the edge intelligent device end and the remote control end.

Description

Image multi-target pre-tracking method and system for edge intelligent equipment
Technical Field
The invention relates to the technical fields of edge computing and artificial intelligence, and in particular to an image multi-target pre-tracking method and system for an edge intelligent device.
Background
The edge intelligent device is an important component of intelligent unmanned aerial vehicles (UAVs) and intelligent remote devices; for example, the on-board edge intelligent device is an important payload of an intelligent UAV and is commonly used for wide-area reconnaissance and monitoring. To realize data interaction between the edge intelligent device and the remote control device in scenes with long distances or a blocked line of sight, photoelectric image data are transmitted through a radio link or a satellite communication link; the amount of transmitted data is relatively large, and the transmission delay is usually more than 1 s. Because of this large transmission delay, the operator of the remote control device cannot stably select and control a target of interest in the photoelectric image of the edge intelligent device, which increases the difficulty of control.
Existing solutions mainly rely on feature point sets and multiple rounds of human-computer interaction screening to address the difficulty of controlling a target of interest under large delay. However, such methods depend on hundreds of feature points and can determine the target only through multiple rounds of interactive screening; they therefore suffer from large feature-point data transmission volume and time-consuming control.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides an image multi-target pre-tracking method and system for an edge intelligent device.
The aim of the invention is achieved by the following technical scheme. The first aspect of the invention provides an image multi-target pre-tracking method of an edge intelligent device, comprising the following steps:
S1: acquiring a photoelectric image and inputting the photoelectric image into a multi-target detection model;
S2: processing the photoelectric image by the multi-target detection model to obtain target boxes, target class cls, target confidence level conf and image timestamp of a plurality of detection targets in the photoelectric image as detection target data to be output;
S3: using a multi-target association tracking method to associate detection targets at the previous time t-1 and the current time t, assigning a unique target ID to each detection target, adding the target ID into the detection target data to obtain tracking target data, and inputting the tracking target data into a history tracking management module;
s4: the history tracking management module stores tracking target data; at the latest, when the tracking target data is input to the history tracking management module, inputting the target ID to be locked or the candidate region coordinate data to the history tracking management module; when the input is the target ID to be locked, executing an ID locking current frame target strategy to lock the current frame target; and when the input is candidate region coordinate data, executing a candidate region locking current frame target strategy to perform current frame candidate target inquiry.
Preferably, the target box is (x, y, w, h), where x is the abscissa of the box center, y is the ordinate of the box center, w is the box width and h is the box height; the multi-target detection model is a Faster R-CNN, YOLO or DETR model.
Preferably, the multi-target association tracking method uses the SORT, DeepSORT, ByteTrack or BoT-SORT algorithm.
Preferably, the history tracking management module includes a dictionary of keys and values: each key is a target ID, and each value is a list storing that target's tracking target data. The dictionary supports adding, querying and deleting tracking target data.
Preferably, the ID locking current frame target strategy includes the following steps:
query the history tracking management module for whether the target ID to be locked exists in the tracking target data; if it exists, mark the target ID to be locked as the locked target ID, and output the locked image timestamp, locked photoelectric image, locked target ID, locked target box, locked target category and locked target confidence; if it does not exist, perform the following steps:
query the image timestamp in the tracking target data at the moment the target ID to be locked was lost, as the lost image timestamp;
calculate the time difference between the image timestamp to be locked and the lost image timestamp, calculate the maximum distance the target to be locked can move within that time difference, and generate a circular candidate region centered on the lost target box with the maximum moving distance as its radius; query whether the target ID to be locked exists within the candidate region, and if it does, mark it as a candidate target ID and output the candidate image timestamp, candidate photoelectric image, candidate target ID, candidate target box, candidate target category and candidate target confidence.
Preferably, the candidate region locking current frame target strategy includes the following steps:
obtain the candidate region from the candidate region coordinate data and judge whether a candidate target ID exists within it; if so, output the candidate image timestamp, candidate photoelectric image, candidate target ID, candidate target box, candidate target category and candidate target confidence;
if no candidate target ID exists, perform the following steps:
crop a candidate region image from the photoelectric image according to the candidate region coordinate data, feed it into the multi-target detection model to obtain several candidate detection target data, then restore the candidate detection target data to the coordinates of the full photoelectric image, mark the restored data as detection target data, and execute S3.
A second aspect of the invention provides an image multi-target pre-tracking system of an edge intelligent device, used to implement any of the image multi-target pre-tracking methods above, comprising:
an edge intelligent device end connected with a ground control end. The edge intelligent device end includes a photoelectric device for capturing photoelectric images, connected with a multi-target pre-tracking module. The multi-target pre-tracking module processes the photoelectric images, can receive a target ID to be locked or candidate region coordinate data input from the ground control end in order to lock a current-frame target or search for current-frame candidate targets, and is connected with a transmission data processing module. The transmission data processing module passes data output by the ground control end to the multi-target pre-tracking module and, according to the wireless communication signal level, passes data output by the multi-target pre-tracking module to a first wireless communication device A, to which it is connected. The first wireless communication device A carries out data interaction between the transmission data processing module and the ground control end through a wireless communication link.
The ground control end includes a second wireless communication device B for data interaction with the edge intelligent device end through a wireless communication link; device B is connected with a human-computer interaction display module. The human-computer interaction display module visualizes the data transmitted by the edge intelligent device end, where a first category a represents an unselected target, a second category b a target selected through human-computer interaction, a third category c a candidate target, a fourth category d a locked target, and a fifth category e a target no longer detected in the photoelectric image; the module is connected with human-computer interaction equipment. The human-computer interaction equipment is a mouse and/or keyboard and/or UAV control handle, and the target to be locked is selected by clicking, box selection, or quick selection with predefined number keys.
Preferably, the wireless communication signal level is divided into a first, a second, a third and a fourth signal level. At the first signal level, 1080P photoelectric images and tracking target data are transmitted; at the second signal level, 640P photoelectric images and tracking target data are transmitted; at the third signal level, only the tracking target data is transmitted; at the fourth signal level, not even the tracking target data is transmitted.
Preferably, clicking selects the target to be locked with a mouse click; box selection selects the target to be locked by drawing a box with the mouse, and if no target to be locked is detected in the box-selected area, candidate region coordinate data is output to the edge intelligent device end; quick selection with predefined number keys assigns numbers to the targets to be locked, and entering the number corresponding to a target to be locked selects that target.
The beneficial effects of the invention are as follows:
1) Compared with feature-point-based pre-tracking methods, the method tracks fewer objects, requires less data to be transmitted wirelessly, and can lock a target without multiple rounds of wireless communication between the edge intelligent device end and the remote control end.
2) Dividing the wireless communication signal into levels allows different data volumes to be transmitted at each level, which effectively mitigates data loss or delay when the signal is poor or interfered with.
3) Two current-frame target locking strategies are supported, by target ID and by candidate region, so a target can be locked quickly and stably through very simple operation of the human-computer interaction equipment.
Drawings
FIG. 1 is a block diagram of an image multi-target pre-tracking system for an edge smart device;
FIG. 2 is a flow chart of an image multi-target pre-tracking method of an edge smart device;
Fig. 3 is a man-machine interaction display UI diagram of the ground control end.
Detailed Description
The technical solutions of the present invention are described clearly and completely below with reference to the embodiments. The described embodiments are obviously only some, not all, of the embodiments of the invention. All other embodiments obtained by a person skilled in the art without inventive effort, based on the embodiments of the invention, fall within the scope of the invention.
Definitions: edge intelligent device end: a UAV or remote device with a photoelectric pod and a computing unit; remote control end: a remote control device with a display and human-computer interaction equipment.
As shown in fig. 1, input 1 is the camera image stream from the photoelectric device; input 2 is the target ID to be locked or candidate region coordinate data, entered manually at the ground control end. Output 1 is the locked image timestamp, locked photoelectric image, locked target ID, locked target box, locked target category and locked target confidence. Output 2 is the candidate image timestamp, candidate photoelectric image, candidate target ID, candidate target box, candidate target category and candidate target confidence. When there are one or more locked target IDs or candidate target IDs, the data in output 1 and output 2 are sets.
Referring to figs. 1-3, the first aspect of the invention provides an image multi-target pre-tracking method of an edge intelligent device, comprising the following steps:
S1: acquiring a photoelectric image and inputting the photoelectric image into a multi-target detection model;
S2: processing the photoelectric image by the multi-target detection model to obtain target boxes, target class cls, target confidence level conf and image timestamp of a plurality of detection targets in the photoelectric image as detection target data to be output;
S3: using a multi-target association tracking method to associate detection targets at the previous time t-1 and the current time t, assigning a unique target ID to each detection target, adding the target ID into the detection target data to obtain tracking target data, and inputting the tracking target data into a history tracking management module;
s4: the history tracking management module stores tracking target data; at the latest, when the tracking target data is input to the history tracking management module, inputting the target ID to be locked or the candidate region coordinate data to the history tracking management module; when the input is the target ID to be locked, executing an ID locking current frame target strategy to lock the current frame target; and when the input is candidate region coordinate data, executing a candidate region locking current frame target strategy to perform current frame candidate target inquiry.
In some embodiments, the target box is (x, y, w, h), where x is the abscissa of the box center, y is the ordinate of the box center, w is the box width and h is the box height; the multi-target detection model is a Faster R-CNN, YOLO or DETR model.
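Center-format boxes like the (x, y, w, h) above are commonly converted to corner coordinates before overlap or containment tests. A minimal sketch of that conversion (the helper name `box_to_corners` is illustrative, not part of the patent):

```python
def box_to_corners(box):
    """Convert a center-format target box (x, y, w, h) to corner
    coordinates (x1, y1, x2, y2) for overlap/containment tests."""
    x, y, w, h = box
    return (x - w / 2, y - h / 2, x + w / 2, y + h / 2)
```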
In some embodiments, the multi-target association tracking method uses the SORT, DeepSORT, ByteTrack or BoT-SORT algorithm.
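As a rough illustration of what such an association step does, the sketch below greedily matches current-frame detections (corner-format boxes) to the previous frame's tracks by IoU and assigns fresh IDs to unmatched detections. This is a deliberate simplification for illustration only; the SORT-family algorithms named above additionally use Kalman-filter motion prediction and, in some variants, appearance features.

```python
def iou(a, b):
    """Intersection-over-union of two corner-format boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def associate(prev_tracks, detections, next_id, iou_thresh=0.3):
    """Greedy IoU association: prev_tracks maps target ID -> last box;
    detections is the current frame's boxes. Returns (detection index ->
    target ID) plus the updated ID counter for unmatched detections."""
    matches, used = {}, set()
    pairs = sorted(((iou(box, det), tid, i)
                    for tid, box in prev_tracks.items()
                    for i, det in enumerate(detections)), reverse=True)
    for score, tid, i in pairs:
        if score < iou_thresh:
            break                       # remaining pairs overlap too little
        if tid in used or i in matches:
            continue                    # track or detection already matched
        matches[i] = tid
        used.add(tid)
    for i in range(len(detections)):
        if i not in matches:
            matches[i] = next_id        # unmatched detection: new unique ID
            next_id += 1
    return matches, next_id
```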
In some embodiments, the history tracking management module includes a dictionary of keys and values: each key is a target ID, and each value is a list storing that target's tracking target data. The dictionary supports adding, querying and deleting tracking target data.
Within each value, the tracking target data are stored in time order.
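The dictionary described above can be sketched as follows (the class and method names are assumptions for illustration; records are appended in arrival order, which matches the time-ordered storage described in the text):

```python
from collections import defaultdict

class HistoryTrackManager:
    """Minimal sketch of the history tracking management module:
    a dictionary keyed by target ID whose values are time-ordered
    lists of tracking target data records."""
    def __init__(self):
        self._store = defaultdict(list)      # target ID -> [records...]

    def add(self, target_id, record):
        self._store[target_id].append(record)  # records arrive in time order

    def query(self, target_id):
        return self._store.get(target_id, [])

    def delete(self, target_id):
        self._store.pop(target_id, None)
```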
In some embodiments, the ID locking current frame target strategy includes the following steps:
query the history tracking management module for whether the target ID to be locked exists in the tracking target data; if it exists, mark the target ID to be locked as the locked target ID, and output the locked image timestamp, locked photoelectric image, locked target ID, locked target box, locked target category and locked target confidence; if it does not exist, perform the following steps:
query the image timestamp in the tracking target data at the moment the target ID to be locked was lost, as the lost image timestamp;
calculate the time difference between the image timestamp to be locked and the lost image timestamp, calculate the maximum distance the target to be locked can move within that time difference, and generate a circular candidate region centered on the lost target box with the maximum moving distance as its radius; query whether the target ID to be locked exists within the candidate region, and if it does, mark it as a candidate target ID and output the candidate image timestamp, candidate photoelectric image, candidate target ID, candidate target box, candidate target category and candidate target confidence.
In a specific implementation, assume the time difference is 0.1 s and the target to be locked can move at most 50 pixels in the photoelectric image within 0.1 s; then within that time difference the target moves at most within a circular candidate region centered on the lost target box with a radius of 50 pixels.
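The worked example above reduces to a small membership test: the candidate region is a circle whose radius is an assumed maximum speed (pixels per second) multiplied by the time difference. A minimal sketch (function and parameter names are illustrative, not from the patent):

```python
import math

def in_candidate_region(lost_center, max_speed_px_s, dt_s, center):
    """True if `center` lies within the circular candidate region of
    radius max_speed_px_s * dt_s around the lost target box center."""
    return math.dist(lost_center, center) <= max_speed_px_s * dt_s
```

With the example's numbers (time difference 0.1 s, 50-pixel maximum movement), `max_speed_px_s` would be 500 pixels per second.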
In some embodiments, the candidate region locking current frame target strategy includes the following steps:
obtain the candidate region from the candidate region coordinate data and judge whether a candidate target ID exists within it; if so, output the candidate image timestamp, candidate photoelectric image, candidate target ID, candidate target box, candidate target category and candidate target confidence;
if no candidate target ID exists, perform the following steps:
crop a candidate region image from the photoelectric image according to the candidate region coordinate data, feed it into the multi-target detection model to obtain several candidate detection target data, then restore the candidate detection target data to the coordinates of the full photoelectric image, mark the restored data as detection target data, and execute S3.
Because the candidate region image is cropped from the photoelectric image, when it is fed into a multi-target detection model with a fixed input size it is effectively magnified relative to the photoelectric image, which enables recognition of tiny targets and improves recognition accuracy. After recognition, the results are restored to the coordinates of the photoelectric image so that the subsequent steps can be executed.
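The coordinate restoration described above amounts to shifting each box detected in the crop by the crop's top-left corner in the full image. A minimal sketch, assuming the candidate region (bx, by, bw, bh) uses the same center format as the target boxes (function names are illustrative):

```python
def candidate_crop_origin(region):
    """Top-left corner of a center-format candidate region (bx, by, bw, bh)."""
    bx, by, bw, bh = region
    return (bx - bw / 2, by - bh / 2)

def restore_boxes(crop_boxes, region):
    """Shift center-format boxes detected inside the cropped candidate-region
    image back into full photoelectric-image coordinates."""
    ox, oy = candidate_crop_origin(region)
    return [(x + ox, y + oy, w, h) for (x, y, w, h) in crop_boxes]
```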
A second aspect of the invention provides an image multi-target pre-tracking system of an edge intelligent device, used to implement any of the image multi-target pre-tracking methods above, comprising:
an edge intelligent device end connected with a ground control end. The edge intelligent device end includes a photoelectric device for capturing photoelectric images, connected with a multi-target pre-tracking module. The multi-target pre-tracking module processes the photoelectric images, can receive a target ID to be locked or candidate region coordinate data input from the ground control end in order to lock a current-frame target or search for current-frame candidate targets, and is connected with a transmission data processing module. The transmission data processing module passes data output by the ground control end to the multi-target pre-tracking module and, according to the wireless communication signal level, passes data output by the multi-target pre-tracking module to a first wireless communication device A, to which it is connected. The first wireless communication device A carries out data interaction between the transmission data processing module and the ground control end through a wireless communication link.
The ground control end includes a second wireless communication device B for data interaction with the edge intelligent device end through a wireless communication link; device B is connected with a human-computer interaction display module. The human-computer interaction display module visualizes the data transmitted by the edge intelligent device end, where a first category a represents an unselected target, a second category b a target selected through human-computer interaction, a third category c a candidate target, a fourth category d a locked target, and a fifth category e a target no longer detected in the photoelectric image; the module is connected with human-computer interaction equipment. The human-computer interaction equipment is a mouse and/or keyboard and/or UAV control handle, and the target to be locked is selected by clicking, box selection, or quick selection with predefined number keys.
In some embodiments, the wireless communication signal level is divided into a first, a second, a third and a fourth signal level. At the first signal level, 1080P photoelectric images and tracking target data are transmitted; at the second signal level, 640P photoelectric images and tracking target data are transmitted; at the third signal level, only the tracking target data is transmitted; at the fourth signal level, not even the tracking target data is transmitted.
The transmission data processing module is responsible for data transfer between the multi-target pre-tracking module and the first wireless communication device A. Specifically, through device A the module receives the human-computer interaction data sent by the second wireless communication device B at the ground control end, such as the image timestamp at the moment of interaction and the target ID to be locked or the candidate box coordinates, and passes the received interaction data to the multi-target pre-tracking module as input (input 2 in fig. 2). Meanwhile, the module passes outputs 1 and 2 of the multi-target pre-tracking module to device A, which transmits them over the wireless link to device B at the ground control end. Because the edge intelligent device end must transmit photoelectric images as well as tracking target data, the data volume is large, and poor radio-link signals or radio interference can cause data loss, large delays and similar problems. The module therefore divides the radio signal into four levels by received signal strength; in a specific implementation, measured in dBm (decibel-milliwatts), the four levels are: first signal level: -40 dBm to 0 dBm; second signal level: -80 dBm to -40 dBm; third signal level: -120 dBm to -80 dBm; fourth signal level: below -120 dBm.
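The dBm thresholds above can be sketched as a simple level-mapping function (function names are illustrative; the thresholds and per-level payloads are taken from the text):

```python
def signal_level(dbm):
    """Map received signal strength (dBm) to the four transmission levels."""
    if dbm >= -40:
        return 1        # 1080P photoelectric image + tracking target data
    if dbm >= -80:
        return 2        # 640P photoelectric image + tracking target data
    if dbm >= -120:
        return 3        # tracking target data only
    return 4            # transmit nothing

def payload_for(dbm):
    """What the edge device end transmits at the measured signal strength."""
    return {1: ("1080P image", "tracking data"),
            2: ("640P image", "tracking data"),
            3: ("tracking data",),
            4: ()}[signal_level(dbm)]
```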
In some embodiments, clicking selects the target to be locked with a mouse click; box selection selects the target to be locked by drawing a box with the mouse, and if no target to be locked is detected in the box-selected area, candidate region coordinate data is output to the edge intelligent device end; quick selection with predefined number keys assigns numbers to the targets to be locked, and entering the number corresponding to a target to be locked selects that target.
The human-computer interaction module displays the photoelectric image from the edge intelligent device end and the targets to be locked, and supports quick selection through human-computer interaction equipment (such as a mouse and keyboard) in three ways: clicking, box selection, and quick selection with predefined number keys (0-9). The human-computer interaction display UI of the ground control end is shown in fig. 3. Clicking: implemented with a mouse click; the mouse click position P is taken as the center coordinate and w as the side length (w is 100 pixels in the implementation) to generate a candidate box, as with the mouse click in fig. 3 that selects target 9 to be locked.
Box selection: implemented by drawing a box with the mouse from start coordinate O to end coordinate E, as with mouse box selections B1 and B2 in fig. 3. B1 represents box-selecting and thereby selecting targets; in fig. 3 it selects targets 6 and 8 to be locked. B2 represents a mouse box that selects no target: because no target is detected in the box-selected region, the candidate region coordinate data is output to the edge intelligent device. Assuming the O2 coordinate is (x1, y1) and the E2 coordinate is (x2, y2), the candidate region coordinates are (bx, by, bw, bh), where bw = x2 - x1, bh = y2 - y1, bx = x1 + bw/2 and by = y1 + bh/2.
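The formulas above for converting a box selection into candidate region coordinates can be sketched as (the function name is illustrative):

```python
def selection_to_region(start, end):
    """Convert a mouse box selection from start O = (x1, y1) to end
    E = (x2, y2) into candidate region coordinates (bx, by, bw, bh):
    the box's center point plus its width and height."""
    (x1, y1), (x2, y2) = start, end
    bw, bh = x2 - x1, y2 - y1
    return (x1 + bw / 2, y1 + bh / 2, bw, bh)
```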
Quick selection with predefined number keys: the targets to be locked are numbered 0-9 in descending order of area, with at most 10 numbered targets. The predefined number key case is C in fig. 3: entering the number 7 during human-computer interaction selects the target to be locked numbered 7.
The foregoing is merely a preferred embodiment of the invention. It should be understood that the invention is not limited to the forms disclosed herein, which are not to be regarded as excluding other embodiments; the invention is capable of use in various other combinations, modifications and environments, and of changes within the scope of the inventive concept as taught herein or suggested by the skill or knowledge of the relevant art. Modifications and variations that do not depart from the spirit and scope of the invention are intended to fall within the scope of the appended claims.

Claims (7)

1. An image multi-target pre-tracking method of an edge intelligent device, characterized by comprising the following steps:
S1: acquire a photoelectric image and input it into a multi-target detection model;
S2: process the photoelectric image with the multi-target detection model to obtain, for each of a plurality of detected targets in the photoelectric image, a target box, a target class cls, a target confidence conf and an image timestamp, and output these as detection target data;
S3: use a multi-target association tracking method to associate the detected targets of the previous time t-1 with those of the current time t, assign a unique target ID to each detected target, add the target ID to the detection target data to obtain tracking target data, and input the tracking target data into a history tracking management module;
S4: the history tracking management module stores the tracking target data; whenever the latest tracking target data is input into the history tracking management module, a target ID to be locked or candidate region coordinate data may also be input into the module; when the input is a target ID to be locked, the ID locking current frame target strategy is executed to lock the current-frame target; when the input is candidate region coordinate data, the candidate region locking current frame target strategy is executed to query current-frame candidate targets;
the ID locking current frame target strategy comprises the following steps:
querying the history tracking management module for whether the target ID to be locked exists in the tracking target data; if it exists, marking the target ID to be locked as the locked target ID, and outputting the locked image timestamp, locked photoelectric image, locked target ID, locked target box, locked target category and locked target confidence; if it does not exist, performing the following steps:
querying the image timestamp in the tracking target data at the moment the target ID to be locked was lost, as the lost image timestamp;
calculating the time difference between the image timestamp to be locked and the lost image timestamp, calculating the maximum distance the target to be locked can move within that time difference, generating a circular candidate region centered on the lost target box with the maximum moving distance as its radius, querying whether the target ID to be locked exists within the candidate region, and if it does, marking it as a candidate target ID and outputting the candidate image timestamp, candidate photoelectric image, candidate target ID, candidate target box, candidate target category and candidate target confidence;
the candidate area locking current frame target strategy comprises the following steps:
obtaining the candidate region from the candidate region coordinate data and judging whether a candidate target ID exists in the candidate region; if one exists, outputting the candidate image timestamp, the candidate photoelectric image, the candidate target ID, the candidate target frame, the candidate target category and the candidate target confidence;
if there is no candidate target ID, performing the following steps:
cutting a candidate region image out of the photoelectric image according to the candidate region coordinate data, feeding the candidate region image into the multi-target detection model to obtain several candidate detection target data, mapping the candidate detection target data back into the coordinate system of the photoelectric image, marking the result as detection target data, and executing S3.
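The crop-and-redetect step can be sketched as follows; `detect` is a hypothetical stand-in for the multi-target detection model, and the list-of-lists image layout is an assumption chosen for illustration:

```python
def redetect_in_region(image, region, detect):
    """Crop a candidate region, run detection on the crop, and map the
    resulting boxes back into full-image coordinates.

    image:  row-major pixel grid (list of rows)
    region: (x0, y0, x1, y1) candidate region corners in image coordinates
    detect: callable returning [{"box": (cx, cy, w, h), "cls": ..., "conf": ...}]
            with box centres relative to the crop
    """
    x0, y0, x1, y1 = region
    crop = [row[x0:x1] for row in image[y0:y1]]
    detections = detect(crop)
    restored = []
    for d in detections:
        cx, cy, w, h = d["box"]
        # shift the crop-relative centre back into full-image coordinates
        restored.append({**d, "box": (cx + x0, cy + y0, w, h)})
    return restored
```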
2. The image multi-target pre-tracking method of an edge intelligent device according to claim 1, wherein: the target box is (x, y, w, h), where x is the abscissa of the centre of the target box, y is the ordinate of the centre, w is the width of the target box and h is its height; the multi-target detection model is a Faster R-CNN model, a YOLO model or a DETR model.
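Because the claimed box stores the centre point, cropping and overlap computations typically need the corner form first; a minimal conversion helper (not part of the claim) might look like:

```python
def center_to_corners(box):
    """Convert a (x, y, w, h) centre-format target box to
    (x0, y0, x1, y1) corner format."""
    x, y, w, h = box
    return (x - w / 2, y - h / 2, x + w / 2, y + h / 2)
```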
3. The image multi-target pre-tracking method of an edge intelligent device according to claim 1, wherein: the multi-target association tracking method uses the Sort, DeepSort, ByteTrack or BoT-Sort algorithm.
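None of these trackers is reproduced here, but the association idea they share can be illustrated with a greedy IoU matcher. This sketch is a deliberate simplification: Sort and its successors add Kalman-filter motion prediction, Hungarian assignment and (for DeepSort) appearance features, none of which appear below.

```python
def iou(a, b):
    """Intersection over union of two corner-format boxes (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def associate(prev, curr, thresh=0.3):
    """Greedily associate frame t-1 tracks with frame t detections.
    prev: dict id -> corner box; curr: list of corner boxes.
    Returns dict id -> box for frame t; unmatched detections get new IDs."""
    next_id = max(prev, default=0) + 1
    assigned, used = {}, set()
    # match each existing track to its best unused detection above thresh
    for tid, pbox in prev.items():
        best, best_iou = None, thresh
        for i, cbox in enumerate(curr):
            if i not in used and iou(pbox, cbox) > best_iou:
                best, best_iou = i, iou(pbox, cbox)
        if best is not None:
            assigned[tid] = curr[best]
            used.add(best)
    # unmatched detections receive fresh unique IDs
    for i, cbox in enumerate(curr):
        if i not in used:
            assigned[next_id] = cbox
            next_id += 1
    return assigned
```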
4. The image multi-target pre-tracking method of an edge intelligent device according to claim 1, wherein: the history tracking management module comprises a dictionary of key-value pairs, where each key is a target ID and each value is a list storing tracking target data; the dictionary supports adding, querying and deleting tracking target data.
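A minimal sketch of such a dictionary-backed module, exposing the add, query and delete operations named in the claim (the record contents shown in the test are an assumption):

```python
class HistoryTrackManager:
    """Dictionary keyed by target ID; each value is a list of
    tracking target records for that ID, in arrival order."""

    def __init__(self):
        self._tracks = {}

    def add(self, target_id, record):
        # create the per-ID list on first insertion, then append
        self._tracks.setdefault(target_id, []).append(record)

    def query(self, target_id):
        # returns the record list, or None if the ID is unknown
        return self._tracks.get(target_id)

    def delete(self, target_id):
        # silently ignore unknown IDs
        self._tracks.pop(target_id, None)
```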
5. An image multi-target pre-tracking system of an edge intelligent device, characterized in that it implements the image multi-target pre-tracking method of the edge intelligent device according to any of claims 1-4 and comprises:
The edge intelligent equipment end is connected with the ground control end; the edge intelligent equipment end comprises photoelectric equipment which is used for shooting photoelectric images and is connected with a multi-target pre-tracking module; the multi-target pre-tracking module is used for processing the photoelectric image, can receive target ID to be locked or candidate region coordinate data input by a ground control end to lock a current frame target or search the current frame candidate target, and is connected with the transmission data processing module; the transmission data processing module is used for transmitting the data output by the ground control end to the multi-target pre-tracking module and transmitting the data output by the multi-target pre-tracking module to the first wireless communication equipment A according to the wireless communication signal level, and the transmission data processing module is connected with the first wireless communication equipment A; the first wireless communication equipment A is used for carrying out data interaction between the transmission data processing module and the ground control terminal through a wireless communication link;
The ground control end comprises a second wireless communication device B which is used for carrying out data interaction with the edge intelligent device end through a wireless communication link, and the second wireless communication device B is connected with a man-machine interaction display module; the human-computer interaction display module is used for visualizing data transmitted by the edge intelligent equipment end, wherein a first category a represents an unselected target, a second category b represents a target selected by human-computer interaction, a third category c represents a candidate target, a fourth category d represents a locked target, and a fifth category e represents a target which is not detected in the photoelectric image, and the human-computer interaction display module is connected with the human-computer interaction equipment; the man-machine interaction equipment is a mouse and/or a keyboard and/or an unmanned aerial vehicle control handle, and a target to be locked is selected by clicking, selecting a frame or pre-defining a numerical key in a quick selection mode.
6. The image multi-target pre-tracking system of an edge smart device of claim 5, wherein: the wireless communication signal level is divided into a first, a second, a third and a fourth signal level; at the first signal level, the 1080P photoelectric image and the tracking target data are transmitted; at the second signal level, the 640P photoelectric image and the tracking target data are transmitted; at the third signal level, only the tracking target data are transmitted; at the fourth signal level, no tracking target data are transmitted.
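The four-level policy maps directly to a selection function; a sketch, where the payload dictionary shape is an assumption for illustration:

```python
def payload_for_signal(level, image_1080p, image_640p, track_data):
    """Select what to transmit for each wireless communication signal level,
    following the four-level policy of the claim."""
    if level == 1:
        return {"image": image_1080p, "tracks": track_data}
    if level == 2:
        return {"image": image_640p, "tracks": track_data}
    if level == 3:
        return {"tracks": track_data}   # tracking target data only
    return {}                           # level 4: transmit nothing
```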
7. The image multi-target pre-tracking system of an edge smart device of claim 5, wherein: clicking selects the target to be locked with a mouse click; frame selection draws a box with the mouse around the target to be locked, and if no target to be locked is detected inside the selected box, the candidate region coordinate data is output to the edge intelligent device end; predefined number-key quick selection assigns a number to each target to be locked, and entering the number corresponding to a target selects that target.
CN202410191085.XA 2024-02-21 2024-02-21 Image multi-target pre-tracking method and system for edge intelligent equipment Active CN117765031B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410191085.XA CN117765031B (en) 2024-02-21 2024-02-21 Image multi-target pre-tracking method and system for edge intelligent equipment


Publications (2)

Publication Number Publication Date
CN117765031A CN117765031A (en) 2024-03-26
CN117765031B true CN117765031B (en) 2024-05-03

Family

ID=90322264

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410191085.XA Active CN117765031B (en) 2024-02-21 2024-02-21 Image multi-target pre-tracking method and system for edge intelligent equipment

Country Status (1)

Country Link
CN (1) CN117765031B (en)

Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104243901A (en) * 2013-06-21 2014-12-24 中兴通讯股份有限公司 Multi-target tracking method based on intelligent video analysis platform and system of multi-target tracking method
CN105338248A (en) * 2015-11-20 2016-02-17 成都因纳伟盛科技股份有限公司 Intelligent multi-target active tracking monitoring method and system
CN107564034A (en) * 2017-07-27 2018-01-09 华南理工大学 Multi-target pedestrian detection and tracking in surveillance video
CN109903312A (en) * 2019-01-25 2019-06-18 北京工业大学 A running-distance statistics method for football players based on video multi-target tracking
CN110428447A (en) * 2019-07-15 2019-11-08 杭州电子科技大学 Target tracking method and system based on policy gradient
CN111260689A (en) * 2020-01-16 2020-06-09 东华大学 Effective confidence enhancement correlation filtering visual tracking algorithm
CN111696128A (en) * 2020-05-27 2020-09-22 南京博雅集智智能技术有限公司 High-speed multi-target detection tracking and target image optimization method and storage medium
WO2020187095A1 (en) * 2019-03-20 2020-09-24 深圳市道通智能航空技术有限公司 Target tracking method and apparatus, and unmanned aerial vehicle
CN112163473A (en) * 2020-09-15 2021-01-01 郑州金惠计算机系统工程有限公司 Multi-target tracking method and device, electronic equipment and computer storage medium
CN112285698A (en) * 2020-12-25 2021-01-29 四川写正智能科技有限公司 Multi-target tracking device and method based on radar sensor
WO2021073528A1 (en) * 2019-10-18 2021-04-22 华中光电技术研究所(中国船舶重工集团有限公司第七一七研究所) Intelligent decision-making method and system for unmanned surface vehicle
WO2021145784A1 (en) * 2020-01-13 2021-07-22 Юрий Александрович ГАБЛИЯ Hierarchical architecture of a security system for restricted areas of a site
CN113525370A (en) * 2021-08-06 2021-10-22 郑州高识智能科技有限公司 Multi-target tracking system based on vehicle high beam snapshot binocular vision and satellite navigation data
CN113793260A (en) * 2021-07-30 2021-12-14 武汉高德红外股份有限公司 Method and device for semi-automatically correcting target tracking frame and electronic equipment
CN114581954A (en) * 2022-03-15 2022-06-03 沈阳航空航天大学 Cross-domain retrieval and target tracking method based on pedestrian features
CN114972818A (en) * 2022-05-07 2022-08-30 浙江理工大学 Target locking system based on deep learning and mixed reality technology
CN115272816A (en) * 2022-06-23 2022-11-01 江苏嘉和天盛信息科技有限公司 Road traffic target tracking method based on deep convolutional neural network
WO2023077754A1 (en) * 2021-11-05 2023-05-11 北京小米移动软件有限公司 Target tracking method and apparatus, and storage medium
WO2024032091A1 (en) * 2022-08-12 2024-02-15 亿航智能设备(广州)有限公司 Target tracking method and device, and computer-readable storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112119427A (en) * 2019-06-28 2020-12-22 深圳市大疆创新科技有限公司 Method, system, readable storage medium and movable platform for object following


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Ensuring Port Safety: Improved YOLOX and DeepSORT for Accurate Detection and Tracking of Trucks and Truck Drivers"; Xueqin Zheng et al.; Proceedings of the 42nd Chinese Control Conference; 2023-07-26; 7693-7698
"Research on Multi-Object Tracking Algorithms for Satellite Video"; Cui Haowen; China Master's Theses Full-text Database, Engineering Science and Technology II; 2024-02-15; C028-280



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant