CN108875683B - Robot vision tracking method and system - Google Patents
- Publication number
- CN108875683B (application CN201810702506.5A)
- Authority
- CN
- China
- Prior art keywords
- target
- tracking
- frame
- image
- module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
- G06V20/42—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
- Manipulator (AREA)
Abstract
The invention provides a robot vision tracking method, which comprises the following steps: selecting a tracking target frame, and establishing a coordinate position (x0, y0) at the center point of the rectangular frame; predicting the target center point, calculating the position of the tracking target in the coordinate system through motion prediction in the next frame of image, wherein the maximum width of the image is R, the maximum height of the image is C, the target center coordinate of the nth frame in the x coordinate direction (width direction) is x(n), and the coordinate position in the y direction (height direction) is y(n); detecting the tracking target along the x direction and the y direction respectively from the currently predicted center point position; tracking the target, generating a displacement command according to the movement of the target in the x and y directions, and notifying the robot or the camera to move by the corresponding displacement. The robot vision tracking method runs fast, tracks well, depends on no third-party library, is highly portable, and can run on a common processor, thereby greatly reducing the platform cost of products in the robot field and similar fields.
Description
Technical Field
The invention relates to the technical field of image processing and image recognition, in particular to a robot vision tracking method and a system thereof.
Background
At present, artificial intelligence algorithms, and computer vision algorithms in particular, are developing rapidly and are increasingly used in many industries and fields. Visual tracking, for example, is in great demand in robot navigation, automatic driving, intelligent security and similar fields. However, visual tracking algorithms, whether traditional ones or emerging deep-learning-based computer vision algorithms, are computationally very heavy; they are usually run on a GPU or an FPGA processor array and place high demands on processor performance. For many ordinary intelligent applications, especially scenes where visual tracking must be completed on the terminal side, a high-end processor is costly, power-hungry, bulky and hard to productize. Intelligent toy robots and educational robots, for example, must run the relevant algorithms directly on the terminal, which requires a small processing load and high real-time performance.
In general, some existing visual tracking algorithms have a large processing load and low efficiency, must run on high-end processors such as a GPU or an FPGA, and still track poorly.
Disclosure of Invention
Therefore, the invention provides a robot vision tracking method and system to solve the problems that some existing visual tracking algorithms must run on high-end processors such as a GPU (graphics processing unit) or an FPGA (field-programmable gate array), have a large processing load and low efficiency, and track poorly.
The technical scheme of the invention is realized as follows: a method of robotic visual tracking, the method comprising:
selecting a tracking target frame, firstly framing the target to be tracked in the image in the form of a rectangular frame, and establishing a coordinate position (x0, y0) at the center point of the rectangular frame;
Predicting a target center point, calculating the position of a tracking target in a coordinate system through motion prediction in a next frame of image, wherein the maximum width of the image is R, the maximum height of the image is C, the target center coordinate in the x coordinate direction (width direction) of the nth frame is x (n), and the coordinate position in the y direction (height direction) is y (n);
detecting a tracking target, namely detecting the tracking target along the x direction and the y direction respectively from the currently predicted center point position; converting the RGB value of each pixel into an HSV value and comparing the H, S, V components pixel by pixel with the mean H, S, V components of the previous frame's tracking target; if each difference is less than its threshold, the comparison is judged positive, i.e. the pixel belongs to the tracking target;
and target tracking, namely comparing the coordinates (x (n), y (n)) of the central point of the tracking target of the current frame with the coordinates (x (n-1), y (n-1)) of the central point of the tracking target of the previous frame to obtain the moving conditions of the tracking target in the x direction and the y direction, generating a displacement command, and informing a robot or a camera to move according to the corresponding displacement.
Further, after the tracking target is framed, a coordinate system is established from the rectangular frame: the height H and width W of the rectangular frame of the framed tracking target are determined, the current rectangular frame position, whose center is at (x0, y0), is taken as the standard position, and the height H and width W of the current rectangular frame are taken as the reference dimensions.
Further, the target center point prediction performs linear prediction according to motion continuity to obtain the target center coordinate of the nth frame:
x(n)=x(n-1)±(x(n-1)-x(n-2))
y(n)=y(n-1)±(y(n-1)-y(n-2))
from which it can be obtained that

x(n) ∈ [max(x(n-1) - (x(n-1) - x(n-2)), 0), min(x(n-1) + (x(n-1) - x(n-2)), R-1)]

y(n) ∈ [max(y(n-1) - (y(n-1) - y(n-2)), 0), min(y(n-1) + (y(n-1) - y(n-2)), C-1)].

Without loss of generality, x(0) = x(1) = x0 and y(0) = y(1) = y0 may be set.
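As an illustration only (not part of the patent text), this prediction step can be sketched in Python; the function name predict_interval and the normalisation of a negative step are assumptions:

```python
def predict_interval(c1, c2, limit):
    """Search interval for one coordinate of the frame-n centre.
    c1 = coordinate at frame n-1, c2 = coordinate at frame n-2,
    limit = image extent along this axis (R for x, C for y)."""
    d = c1 - c2                      # linear motion step, may be negative
    lo = max(c1 - d, 0)              # the max(..., 0) clamp above
    hi = min(c1 + d, limit - 1)      # the min(..., limit-1) clamp above
    return min(lo, hi), max(lo, hi)  # normalise in case the step was negative

# Example: x-centres 150 (frame n-2) and 160 (frame n-1) in a 320-wide
# image give a predicted x(n) search interval of (150, 170).
print(predict_interval(160, 150, 320))
```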
Further, in the tracking target detection, the maximum detection width and height are the image width and height; when M consecutive pixels are detected to be outside the threshold range (where M is a preset threshold), the first pixel outside the threshold range is judged to be a target contour edge point.
Further, by the method, the tracking target is separated from the background, and the specific rule is as follows:
Hk = 1 if |Hn(k) - H̄(n-1)| < Ht, otherwise Hk = 0; Sk and Vk are defined analogously from Sn(k) and Vn(k). When Hk, Sk and Vk are 1 at the same time, the pixel k belongs to the tracking target. Here Hn(k), Sn(k), Vn(k) are the H, S, V component values of pixel k in the nth frame; H̄(n-1), S̄(n-1), V̄(n-1) are the mean values of the respective components of the tracking target's HSV color space over frame n-1; Ht, St, Vt are preset thresholds.
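A minimal sketch of this per-pixel rule, assuming HSV triples and thresholds are plain Python tuples (the names are illustrative; hue wrap-around at 360° is ignored for brevity):

```python
def pixel_matches_target(hsv_pixel, hsv_mean, hsv_thresholds):
    """True when the H, S and V of the pixel all lie within their
    thresholds of the previous frame's target means (Hk = Sk = Vk = 1)."""
    return all(abs(component - mean) < threshold
               for component, mean, threshold
               in zip(hsv_pixel, hsv_mean, hsv_thresholds))

# Example: pixel (30, 0.5, 0.8) against means (28, 0.45, 0.75) with
# thresholds (10, 0.2, 0.2) -> True.
print(pixel_matches_target((30, 0.5, 0.8), (28, 0.45, 0.75), (10, 0.2, 0.2)))
```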
Further, the target tracking specifically comprises obtaining the height Hn and width Wn of the rectangular frame of the nth frame's tracking target according to the detected target contour frame, and revising the tracking target center point coordinates (x(n), y(n)) according to Hn, Wn and the coordinate position of the contour frame in the image, ensuring that (x(n), y(n)) lies at the center of the actually detected target contour.
Further, the target tracking further comprises comparing the tracking target center point coordinates (x(n), y(n)) of the current frame with the tracking target center point coordinates (x(n-1), y(n-1)) of the previous frame, so as to obtain the movement of the tracking target in the x and y directions, i.e. whether the tracking target has moved left or right, and up or down, in the current frame relative to the previous frame.
Further, the contour sizes Hn and Wn of the current frame's tracking target are compared with the reference contour sizes H and W: when the contour size is smaller than the reference contour size, the tracking target has moved forward, away from the camera; when the contour size of the current frame is larger than the reference contour size, the tracking target has moved backward, approaching the camera.
Further, the target tracking also comprises the step of generating a displacement command according to the obtained motion direction of the tracking target, informing the robot or the camera to move according to the corresponding displacement, and ensuring that the tracking target is kept near the reference position in the camera view field.
A robot vision tracking system comprises an image acquisition module, an image preprocessing module, a target detection module, a target tracking module and a movement control module; the image acquisition module is connected with the image preprocessing module, the image preprocessing module is connected with the target detection module, the target detection module is connected with the target tracking module, and the target tracking module is connected with the movement control module;
the image acquisition module is used for acquiring images of the tracking target; it comprises a camera and image processing, acquires the tracking target video at a set resolution, and sends the video frame by frame to the image preprocessing module;
the image preprocessing module is used for converting the received image from an RGB color space into an HSV format;
the target detection module is used for detecting a tracking target and directly extracting a contour of the tracking target from a background image by an HSV (hue, saturation and value) binary classification method;
the target tracking module is used for comparing the target position of the previous frame with the target reference size according to the tracking target contour extracted by the target detection module, judging the motion direction of the tracking target, generating a displacement instruction and informing the movement control module;
the movement control module is used for controlling the robot, according to the displacement instruction generated by the target tracking module, to track the target tracked by the target tracking module;
the camera is controlled to move or the robot is controlled to move through the motor, so that a tracking target is kept in the visual field of the camera, and the robot can track the target.
Through the above disclosure, the beneficial effects of the invention are as follows: the robot vision tracking method extracts the color features of the tracked target and detects and tracks them quickly; it converts the color space from RGB to HSV, eliminating the influence of illumination on color, and extracts the target contour directly by binary classification, without obtaining the target contour through algorithms such as CamShift. The processing load is therefore low, the demand on platform processing capability is small, the running speed is high, and real-time performance is good; the method can run on a common ARM processor, which can greatly reduce product cost in the robot field and similar fields.
Drawings
Fig. 1 is a schematic flow chart of a robot vision tracking method according to the present invention.
Fig. 2 is a block diagram of a robot vision tracking system according to the present invention.
Fig. 3 is a schematic coordinate diagram of an embodiment of the robot vision tracking system of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that all the directional indicators (such as upper, lower, left, right, front and rear) in the embodiments of the present invention are only used to explain the relative position relationship, movement situation, etc. between the components in a specific posture (as shown in the drawings); if the specific posture changes, the directional indicator changes accordingly.
In addition, the descriptions involving "first", "second", etc. in the present invention are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated; thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. Technical solutions of the various embodiments may be combined with each other, provided that a person skilled in the art can realize the combination; when a combination of technical solutions is contradictory or cannot be realized, the combination should be considered not to exist and not within the protection scope of the present invention.
The invention provides a robot vision tracking method and a system thereof.
Referring to fig. 1, a robot vision tracking method, the method comprising:
Step one, selecting a tracking target frame: tracking is started, and the target to be tracked is first framed in the image in the form of a rectangular frame. A coordinate system is established, the height H and width W of the rectangular frame of the framed tracking target are determined, and the coordinate position (x0, y0) of the center point of the rectangular frame is determined; the current rectangular frame position is taken as the standard position, and the height H and width W of the current rectangular frame are taken as the reference dimensions.
Step two, predicting a target central point, calculating the position of a tracking target in a coordinate system in the next frame of image through motion prediction, wherein the maximum width of the image is R, the maximum height of the image is C, the target central coordinate in the x coordinate direction (width direction) of the nth frame is x (n), the coordinate position in the y direction (height direction) is y (n), and then performing linear prediction according to motion continuity to obtain the target central coordinate of the nth frame:
x(n)=x(n-1)±(x(n-1)-x(n-2)) (1)
y(n)=y(n-1)±(y(n-1)-y(n-2)) (2)
from which it can be obtained that

x(n) ∈ [max(x(n-1) - (x(n-1) - x(n-2)), 0), min(x(n-1) + (x(n-1) - x(n-2)), R-1)] (3)

y(n) ∈ [max(y(n-1) - (y(n-1) - y(n-2)), 0), min(y(n-1) + (y(n-1) - y(n-2)), C-1)] (4)

Without loss of generality, x(0) = x(1) = x0 and y(0) = y(1) = y0 may be set.
Step three, tracking target detection and contour extraction: the tracking target is detected along the x direction and the y direction respectively from the currently predicted center point position; the RGB value of each pixel is converted into an HSV value, and the H, S, V components are compared pixel by pixel with the mean H, S, V components of the previous frame's tracking target; if each difference is less than its threshold, the comparison is judged positive, i.e. the pixel belongs to the tracking target. The maximum detection width and height are the image width and height; when M consecutive pixels are detected to be outside the threshold range (where M is a preset threshold), the first pixel outside the threshold range is judged to be a target contour edge point. By this method the tracking target is separated from the background; the specific rule is as follows:
Hk = 1 if |Hn(k) - H̄(n-1)| < Ht, otherwise Hk = 0; Sk and Vk are defined analogously from Sn(k) and Vn(k). When Hk, Sk and Vk are 1 at the same time, the pixel k belongs to the tracking target. Here Hn(k), Sn(k), Vn(k) are the H, S, V component values of pixel k in the nth frame; H̄(n-1), S̄(n-1), V̄(n-1) are the mean values of the respective components of the tracking target's HSV color space over frame n-1; Ht, St, Vt are preset thresholds.
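To make the M-consecutive-pixels rule concrete, here is a sketch of a one-directional scan; it reuses pixel_matches_target from the earlier sketch, and modelling the scan as a walk over a list of HSV pixels is an assumption:

```python
def find_contour_edge(line_hsv, hsv_mean, hsv_thresholds, M):
    """Walk pixels outward from the predicted centre along one direction;
    once M consecutive pixels fall outside the thresholds, the first of
    them is taken as the contour edge point. Returns its index, or None."""
    run_start, run_len = None, 0
    for i, px in enumerate(line_hsv):
        if pixel_matches_target(px, hsv_mean, hsv_thresholds):
            run_start, run_len = None, 0   # back inside the target
        else:
            if run_start is None:
                run_start = i              # candidate edge point
            run_len += 1
            if run_len == M:
                return run_start
    return None                            # no edge found on this line
```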
Step four, according to the detected contour frame of the target, the height Hn and width Wn of the rectangular frame of the nth frame's tracking target are obtained, and according to Hn, Wn and the coordinate position of the contour frame in the image, the tracking target center point coordinates (x(n), y(n)) are revised, ensuring that (x(n), y(n)) lies at the center of the actually detected target contour.
Step five, comparing the tracking target center point coordinates (x(n), y(n)) of the current frame with the tracking target center point coordinates (x(n-1), y(n-1)) of the previous frame, thereby obtaining the movement of the tracking target in the x and y directions, i.e. whether the tracking target has moved left or right, and up or down, in the current frame relative to the previous frame;
Step six, comparing the contour sizes Hn and Wn of the current frame's tracking target with the reference contour sizes H and W: when the contour size is smaller than the reference contour size, the tracking target has moved forward, away from the camera; when the contour size of the current frame is larger than the reference contour size, the tracking target has moved backward, approaching the camera.
Step seven, generating a displacement command according to the motion direction of the tracking target obtained in steps five and six, notifying the robot or the camera to move by the corresponding displacement, and ensuring that the tracking target stays near the reference position in the camera's field of view.
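Steps five to seven can be combined into a small, purely illustrative command generator; the dead_zone tolerance and the command vocabulary are assumptions, not given by the patent:

```python
def displacement_command(center_now, center_prev, size_now, size_ref,
                         dead_zone=5):
    """Coarse move command from the centre shift (step five) and the
    contour size change (step six); dead_zone pixels of play avoid jitter."""
    (xn, yn), (xp, yp) = center_now, center_prev
    (Wn, Hn), (W, H) = size_now, size_ref
    cmd = []
    if xn - xp > dead_zone:
        cmd.append("right")          # target moved right in the image
    elif xp - xn > dead_zone:
        cmd.append("left")
    if yn - yp > dead_zone:
        cmd.append("down")
    elif yp - yn > dead_zone:
        cmd.append("up")
    if Wn * Hn < W * H:
        cmd.append("forward")        # contour smaller: target moved away
    elif Wn * Hn > W * H:
        cmd.append("backward")       # contour larger: target came closer
    return cmd
```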
In a specific implementation, as shown in Fig. 3, the image size is 320 × 240 (R = 320, C = 240). Suppose the target moves: the center point position of frame n-2 is (xn-2, yn-2) and that of frame n-1 is (xn-1, yn-1). From the motion prediction, with reference to the foregoing formulas, the center point position (xn, yn) of the target in the nth frame can be obtained. Expanding outward from the center point (xn, yn), the RGB color value of each pixel is converted into HSV and compared with the mean of the corresponding HSV components of the previous frame; in this way the coordinates (xA, yA), (xB, yB), (xC, yC), (xD, yD) of the four outermost corner points A, B, C and D of the target frame are found, so that the width and height of the tracking target are obtained from the coordinates of the four corner points and a new center position coordinate is calculated:

Wn = xC - xB

Hn = yA - yB

xn = xB + (xC - xB)/2

yn = yB + (yA - yB)/2
The new coordinates are taken as the center point coordinates of the current frame's target. At this point the current frame's target has been detected from the background; the displacement direction can then be judged from the center coordinates and the rectangular frame size by comparison with the previous frame, and tracking is carried out.
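The corner-point arithmetic above translates directly into a short helper (the corner values in the example call are invented for illustration; point D is not needed by these formulas):

```python
def box_from_corners(A, B, C):
    """Width, height and new centre of the tracking box from the
    outermost corner points found by the HSV scan."""
    xA, yA = A
    xB, yB = B
    xC, yC = C
    Wn = xC - xB                 # Wn = xC - xB
    Hn = yA - yB                 # Hn = yA - yB
    xn = xB + (xC - xB) / 2      # xn = xB + (xC - xB)/2
    yn = yB + (yA - yB) / 2      # yn = yB + (yA - yB)/2
    return Wn, Hn, (xn, yn)

# Illustrative corner points, not from the patent:
print(box_from_corners(A=(160, 150), B=(140, 100), C=(190, 120)))
# -> (50, 50, (165.0, 125.0))
```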
The calculation formula for converting the RGB color space into the HSV color space is the standard one (with R, G, B normalized to [0, 1]):

V = max(R, G, B)

S = (V - min(R, G, B)) / V, or S = 0 when V = 0

H = 60 × (G - B) / (V - min(R, G, B)) when V = R

H = 120 + 60 × (B - R) / (V - min(R, G, B)) when V = G

H = 240 + 60 × (R - G) / (V - min(R, G, B)) when V = B

with 360 added to H when the result is negative.
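For reference, this textbook conversion is easy to write out in Python (the standard library's colorsys.rgb_to_hsv performs the same conversion with H scaled to [0, 1]); this is the generic formula, not code from the patent:

```python
def rgb_to_hsv(r, g, b):
    """Convert r, g, b in [0, 1] to (H, S, V), H in degrees [0, 360)."""
    v = max(r, g, b)
    mn = min(r, g, b)
    d = v - mn
    s = 0.0 if v == 0 else d / v
    if d == 0:
        h = 0.0                          # achromatic: hue undefined, use 0
    elif v == r:
        h = 60.0 * (g - b) / d
    elif v == g:
        h = 120.0 + 60.0 * (b - r) / d
    else:
        h = 240.0 + 60.0 * (r - g) / d
    if h < 0:
        h += 360.0
    return h, s, v

# Pure red -> (0.0, 1.0, 1.0); pure blue -> (240.0, 1.0, 1.0).
print(rgb_to_hsv(1.0, 0.0, 0.0), rgb_to_hsv(0.0, 0.0, 1.0))
```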
referring to fig. 2, a robot vision tracking system includes an image acquisition module, an image preprocessing module, a target detection module, a target tracking module, and a movement control module; the image acquisition module is connected with the image preprocessing module, the image preprocessing module is connected with the target detection module, the target detection module is connected with the target tracking module, and the target tracking module is connected with the mobile control module;
the image acquisition module is used for acquiring images of the tracking target; it comprises a camera and image processing, acquires the tracking target video at a set resolution, and sends the video frame by frame to the image preprocessing module;
the image preprocessing module is used for converting the received image from an RGB color space into an HSV format;
the target detection module is used for completing the detection of the tracking target and directly extracting the contour of the tracking target from the background image by an HSV (hue, saturation and value) binary classification method;
the target tracking module is used for comparing the target position of the previous frame with the target reference size according to the tracking target contour extracted by the target detection module, judging the motion direction of the tracking target, generating a displacement instruction and informing the movement control module;
the movement control module is used for controlling the robot, according to the displacement instruction generated by the target tracking module, to track the target tracked by the target tracking module;
the camera is controlled to move or the robot is controlled to move through the motor, so that a tracking target is kept in the visual field of the camera, and the robot can track the target.
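To show how the five modules could chain together per frame, here is a purely illustrative skeleton that reuses the sketches above; the state dictionary, the motor object and every name in it are assumptions, not the patent's API:

```python
def tracking_step(frame_rgb, state, motor, M=3, R=320, C=240):
    """One pass through the pipeline of Fig. 2: acquisition (frame_rgb) ->
    preprocessing -> detection -> tracking -> movement control. state holds
    the previous two centres, HSV means/thresholds and reference size."""
    # image preprocessing: RGB -> HSV, pixel by pixel
    frame_hsv = [[rgb_to_hsv(*px) for px in row] for row in frame_rgb]
    # target centre prediction along each axis
    x_lo, x_hi = predict_interval(state["x1"], state["x2"], R)
    y_lo, y_hi = predict_interval(state["y1"], state["y2"], C)
    cx, cy = (x_lo + x_hi) // 2, (y_lo + y_hi) // 2
    # target detection: scan rightwards along the row through the centre
    # (left/up/down scans and box_from_corners(...) would follow the same
    # pattern to recover the full contour frame)
    edge = find_contour_edge(frame_hsv[cy][cx:], state["hsv_mean"],
                             state["hsv_thr"], M)
    # target tracking and movement control
    cmd = displacement_command((cx, cy), (state["x1"], state["y1"]),
                               state["size"], state["ref_size"])
    motor.move(cmd)
    return edge, cmd
```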
Finally, the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit them. Although the present invention has been described in detail with reference to the preferred embodiments, those skilled in the art should understand that modifications or equivalent substitutions may be made to the technical solutions of the present invention without departing from their spirit and scope, and all such solutions should be covered by the claims of the present invention.
Claims (9)
1. A method of robotic visual tracking, the method comprising: selecting a tracking target frame, namely selecting a target to be tracked in a frame in an image in a rectangular frame mode, and establishing a coordinate position (x0, y0) according to the center point of the rectangular frame;
predicting a target center point, calculating the position of the tracking target in the coordinate system through motion prediction in the next frame of image, wherein the maximum width of the image is R, the maximum height of the image is C, the target center coordinate of the nth frame in the x coordinate direction is x(n) and the coordinate position in the y direction is y(n), and the target center point prediction performs linear prediction according to motion continuity to obtain the target center coordinate of the nth frame: x(n) = x(n-1) ± (x(n-1) - x(n-2)), y(n) = y(n-1) ± (y(n-1) - y(n-2)), from which it may be obtained that
x(n)∈[max(x(n-1)-(x(n-1)-x(n-2)),0),min(x(n-1)+(x(n-1)-x(n-2)),R-1)]
y(n)∈[max(y(n-1)-(y(n-1)-y(n-2)),0),min(y(n-1)+(y(n-1)-y(n-2)),C-1)],
without loss of generality, x(0) = x(1) = x0 and y(0) = y(1) = y0 may be set; detecting a tracking target, namely detecting the tracking target along the x direction and the y direction respectively from the currently predicted center point position, converting the RGB value of each pixel into an HSV value, and comparing the H, S, V components pixel by pixel with the mean H, S, V components of the previous frame's tracking target; if each difference is less than its threshold, the comparison is judged positive, i.e. the pixel belongs to the tracking target;
and target tracking, namely comparing the coordinates (x (n), y (n)) of the central point of the tracking target of the current frame with the coordinates (x (n-1), y (n-1)) of the central point of the tracking target of the previous frame to obtain the moving conditions of the tracking target in the x direction and the y direction, generating a displacement command, and informing a robot or a camera to move according to the corresponding displacement.
2. The robot vision tracking method of claim 1, wherein: after the tracking target is selected, a coordinate system is established by using a rectangular frame, the height H and the width W of the rectangular frame of the selected tracking target are determined, the position of the current rectangular frame with the central coordinate position of (x0, y0) is used as a standard position, and the height H and the width W of the current rectangular frame are used as reference dimensions.
3. The robot vision tracking method of claim 2, wherein: in the tracking target detection, the maximum detection width and the maximum detection height are the image width and the image height, and when M pixels are continuously detected to be out of the threshold range, wherein M is a preset threshold value, the first pixel out of the threshold range is judged to be a target contour edge point.
4. A robot vision tracking method according to claim 3, characterized in that: by the method, the tracking target is separated from the background, and the specific rule is as follows:
wherein Hk = 1 if |Hn(k) - H̄(n-1)| < Ht and Hk = 0 otherwise, Sk and Vk being defined analogously; when Hk, Sk and Vk are 1 at the same time, the pixel k belongs to the tracking target; Hn(k), Sn(k), Vn(k) are the H, S, V component values of pixel k in the nth frame; H̄(n-1), S̄(n-1), V̄(n-1) are the mean values of the respective components of the tracking target's HSV color space over frame n-1; and Ht, St and Vt are preset thresholds.
5. The robot vision tracking method of claim 4, wherein: the target tracking specifically comprises the steps of obtaining the height Hn and the width Wn of a rectangular frame of the nth frame tracking target according to the detected target contour frame, revising the coordinates (x (n), y (n)) of the center point of the tracking target according to the Hn, the Wn and the coordinate position of the contour frame in the image, and ensuring that (x (n), y (n)) are in the actually detected target contour center.
6. The robot vision tracking method of claim 5, wherein: the target tracking further comprises comparing the tracking target center point coordinates (x(n), y(n)) of the current frame with the tracking target center point coordinates (x(n-1), y(n-1)) of the previous frame, so as to obtain the movement of the tracking target in the x and y directions, i.e. whether the tracking target has moved left or right, and up or down, in the current frame relative to the previous frame.
7. The robot vision tracking method of claim 6, wherein: and comparing the contour sizes Hn and Wn of the current frame tracking target with the reference contour sizes H and W, when the contour size is smaller than the reference contour size of the tracking target, indicating that the target moves forwards and is far away from the camera, and when the contour size of the current frame is larger than the reference contour size, indicating that the tracking target moves backwards and approaches the camera.
8. The robot vision tracking method of claim 7, wherein: and the target tracking also comprises the step of generating a displacement command according to the obtained motion direction of the tracked target, informing the robot or the camera to move according to corresponding displacement, and ensuring that the tracked target is kept near the reference position in the camera view.
9. A robot vision tracking system, characterized in that: the system comprises an image acquisition module, an image preprocessing module, a target detection module, a target tracking module and a movement control module; the image acquisition module is connected with the image preprocessing module, the image preprocessing module is connected with the target detection module, the target detection module is connected with the target tracking module, and the target tracking module is connected with the movement control module;
the image acquisition module is used for acquiring images of the tracking target; it comprises a camera and image processing, acquires the tracking target video at a set resolution, and sends the video frame by frame to the image preprocessing module;
the image preprocessing module is used for converting the received image from an RGB color space into an HSV format;
the target detection module is used for completing the detection of the tracking target and directly extracting the contour of the tracking target from the background image;
the target tracking module is used for comparing the target position of the previous frame with the target reference size according to the tracking target contour extracted by the target detection module, judging the motion direction of the tracking target, generating a displacement instruction and informing the movement control module;
the movement control module is used for controlling the robot, according to the displacement instruction generated by the target tracking module, to track the target tracked by the target tracking module;
the camera is controlled to move or the robot is controlled to move through the motor, so that a tracking target is kept in the visual field of the camera, and the robot can track the target.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810702506.5A CN108875683B (en) | 2018-06-30 | 2018-06-30 | Robot vision tracking method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810702506.5A CN108875683B (en) | 2018-06-30 | 2018-06-30 | Robot vision tracking method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108875683A CN108875683A (en) | 2018-11-23 |
CN108875683B true CN108875683B (en) | 2022-05-13 |
Family
ID=64297507
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810702506.5A Active CN108875683B (en) | 2018-06-30 | 2018-06-30 | Robot vision tracking method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108875683B (en) |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109572855B (en) * | 2018-12-13 | 2020-08-21 | 杭州申昊科技股份有限公司 | Climbing robot |
CN109670462B (en) * | 2018-12-24 | 2019-11-01 | 北京天睿空间科技股份有限公司 | Continue tracking across panorama based on the aircraft of location information |
CN110415273B (en) * | 2019-07-29 | 2020-09-01 | 肇庆学院 | Robot efficient motion tracking method and system based on visual saliency |
CN111308993B (en) * | 2020-02-13 | 2022-04-01 | 青岛联合创智科技有限公司 | Human body target following method based on monocular vision |
CN113538504B (en) * | 2020-04-16 | 2023-07-14 | 杭州海康威视数字技术股份有限公司 | Tracking target display method in target uniform-speed moving scene and electronic equipment |
CN111552292B (en) * | 2020-05-09 | 2023-11-10 | 沈阳建筑大学 | Vision-based mobile robot path generation and dynamic target tracking method |
CN111798496B (en) * | 2020-06-15 | 2021-11-02 | 博雅工道(北京)机器人科技有限公司 | Visual locking method and device |
CN111862154B (en) * | 2020-07-13 | 2024-03-01 | 中移(杭州)信息技术有限公司 | Robot vision tracking method and device, robot and storage medium |
CN112634356B (en) * | 2020-12-30 | 2024-08-06 | 欧普照明股份有限公司 | Tracking method and system and electronic equipment |
CN112819706B (en) * | 2021-01-14 | 2024-05-14 | 杭州睿影科技有限公司 | Method for determining identification frame of superimposed display, readable storage medium and electronic device |
CN113286077A (en) * | 2021-04-19 | 2021-08-20 | 瑞泰影像科技(深圳)有限公司 | Full-automatic camera tracking and identifying technology |
CN113452913B (en) * | 2021-06-28 | 2022-05-27 | 北京宙心科技有限公司 | Zooming system and method |
CN113744299B (en) * | 2021-09-02 | 2022-07-12 | 上海安维尔信息科技股份有限公司 | Camera control method and device, electronic equipment and storage medium |
CN114972415B (en) * | 2021-12-28 | 2023-03-28 | 广东东软学院 | Robot vision tracking method, system, electronic device and medium |
CN114500839B (en) * | 2022-01-25 | 2024-06-07 | 青岛根尖智能科技有限公司 | Visual cradle head control method and system based on attention tracking mechanism |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1402551A (en) * | 2001-08-07 | 2003-03-12 | 三星电子株式会社 | Apparatus and method for automatically tracking mobile object |
CN101916446A (en) * | 2010-07-23 | 2010-12-15 | 北京航空航天大学 | Gray level target tracking algorithm based on marginal information and mean shift |
CN106780539A (en) * | 2016-11-30 | 2017-05-31 | 航天科工智能机器人有限责任公司 | Robot vision tracking |
CN108133491A (en) * | 2017-12-29 | 2018-06-08 | 重庆锐纳达自动化技术有限公司 | A kind of method for realizing dynamic target tracking |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9390328B2 (en) * | 2014-04-25 | 2016-07-12 | Xerox Corporation | Static occlusion handling using directional pixel replication in regularized motion environments |
- 2018-06-30: CN201810702506.5A filed in China; granted as CN108875683B (active)
Also Published As
Publication number | Publication date |
---|---|
CN108875683A (en) | 2018-11-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108875683B (en) | Robot vision tracking method and system | |
US10860882B2 (en) | Apparatus and methods for tracking salient features | |
Zhang et al. | Real-time multiple human perception with color-depth cameras on a mobile robot | |
Zhou et al. | Efficient road detection and tracking for unmanned aerial vehicle | |
CN106682619B (en) | Object tracking method and device | |
Frintrop | General object tracking with a component-based target descriptor | |
Taylor et al. | Fusion of multimodal visual cues for model-based object tracking | |
Saito et al. | People detection and tracking from fish-eye image based on probabilistic appearance model | |
WO2020238073A1 (en) | Method for determining orientation of target object, intelligent driving control method and apparatus, and device | |
Nüchter et al. | Automatic classification of objects in 3d laser range scans | |
US20220277595A1 (en) | Hand gesture detection method and apparatus, and computer storage medium | |
Yin et al. | Removing dynamic 3D objects from point clouds of a moving RGB-D camera | |
NC et al. | HOG-PCA descriptor with optical flow based human detection and tracking | |
Zhao et al. | Robust multiple object tracking in RGB-D camera networks | |
Cela et al. | Lanes detection based on unsupervised and adaptive classifier | |
Mohamed et al. | Asynchronous corner tracking algorithm based on lifetime of events for DAVIS cameras | |
Lin et al. | Robust ground plane region detection using multiple visual cues for obstacle avoidance of a mobile robot | |
CN114972491A (en) | Visual SLAM method, electronic device, storage medium and product | |
Said et al. | Real-time detection and classification of traffic light signals | |
CN114842057A (en) | Distance information complementing method, apparatus, storage medium, and computer program product | |
Yu et al. | An intelligent real-time monocular vision-based agv system for accurate lane detecting | |
Mohamed et al. | Real-time moving objects tracking for mobile-robots using motion information | |
WO2022239300A1 (en) | Information processing device, information processing method, and program | |
Majcher et al. | Multiple-criteria-based object pose tracking in RGB videos | |
Du et al. | Study on 6D Pose Estimation System of Occlusion Targets for the Spherical Amphibious Robot based on Neural Network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||