CN111308993B - Human body target following method based on monocular vision


Info

Publication number
CN111308993B
CN111308993B (application CN202010089860.2A)
Authority
CN
China
Prior art keywords
target
frame
tracking
human body
robot
Prior art date
Legal status
Active
Application number
CN202010089860.2A
Other languages
Chinese (zh)
Other versions
CN111308993A (en)
Inventor
纪刚
杨丰拓
安帅
朱慧
Current Assignee
Qingdao Lianhe Chuangzhi Technology Co ltd
Original Assignee
Qingdao Lianhe Chuangzhi Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Qingdao Lianhe Chuangzhi Technology Co ltd
Priority to CN202010089860.2A
Publication of CN111308993A
Application granted
Publication of CN111308993B

Classifications

    • G05D 1/0094: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots, involving pointing a payload, e.g. camera, weapon, sensor, towards a fixed or moving target
    • G05B 11/42: Automatic controllers, electric, with provision for obtaining particular characteristics, e.g. proportional, integral, differential, for obtaining a characteristic which is both proportional and time-dependent, e.g. P.I., P.I.D.
    • G05D 1/12: Target-seeking control
    • G06T 7/223: Analysis of motion using block-matching
    • G06V 20/10: Terrestrial scenes
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06T 2207/30196: Human being; Person (indexing scheme for image analysis; subject of image)


Abstract

The invention discloses a human body target following method based on monocular vision, which comprises the following steps: starting the following system of the monocular vision robot under voice control, and enabling the robot's following function; detecting targets directly ahead, using a human body recognition algorithm to detect a plurality of human body targets in front of the robot; selecting the target to be followed, choosing from the obtained human body detection frames the human body target that best meets the requirements; setting the tracker state, acquiring the tracking information of the target to be followed by combining a main tracker with an auxiliary tracker; motion following control, setting the corresponding following strategy according to the relation between the target information and the following information; and updating the following target information, using the tracking information at the current moment as the target information for the next moment to realize continuous tracking. The disclosed method performs human body target following with a single monocular camera and features strong real-time performance, a high detection rate, and fast response.

Description

Human body target following method based on monocular vision
Technical Field
The invention relates to the field of computer vision and robot motion control, in particular to a human body target following method based on monocular vision.
Background
At present, robot human body target following technology is mostly developed on the basis of a 3D depth sensor, or of a monocular camera combined with a distance sensor. Compared with a monocular sensor, a 3D sensor is disadvantaged by a cost roughly ten times higher and a large volume, and cannot be applied to miniature robots. Moreover, fusing data from a monocular camera and a distance sensor is technically difficult, the resulting system is redundant, and the needs of small robots cannot be met.
When a robot monocular vision human body following algorithm is developed, the following problems need to be solved:
1. determining a suitable target detection algorithm, to guarantee real-time target detection and to solve the difficult problem of re-acquiring the target after it has been occluded;
2. developing a motion-control following algorithm, and determining the motion control method and control parameters once the target moves;
3. realizing the linkage between voice recognition and the following algorithm.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a human body target following method based on monocular vision, which follows a human body target using only a monocular camera and features strong real-time performance, a high detection rate, and fast response.
In order to achieve the purpose, the technical scheme of the invention is as follows:
a human body target following method based on monocular vision comprises the following steps:
step one, starting a following system of the monocular vision robot through voice control, and starting a following function of the robot;
step two, detecting a target directly in front: detecting a plurality of human body targets directly in front of the robot by using a human body recognition algorithm;
step three, selecting a target to be followed: selecting, from the obtained human body detection frames, the human body target which best meets the requirements as the target to be followed;
step four, setting the tracker state: acquiring tracking information of the target to be followed by combining a main tracker and an auxiliary tracker;
step five, motion following control, namely setting a corresponding following strategy according to the relation between the target information and the following information;
and step six, updating the following target information, and using the tracking information at the current moment as the target information at the next moment to realize continuous tracking.
In the above scheme, the specific method of the second step is as follows:
(1) obtaining a human target detection frame using a human recognition algorithm
The robot issues a voice prompt and captures the area directly in front with a monocular camera; each captured frame is detected, and the posture information of each human body is extracted to construct a rectangular human body target detection frame, whose upper-left corner has coordinates (x, y) and whose width and height are w and h respectively;
(2) detection frame for rejecting errors
The image captured by the monocular camera has width W and height H. Erroneous detection frames are rejected according to the relative size of each detection frame within the captured image, preliminarily screening the human body target detection frames. Further, only those human body target detection frames whose width w is greater than a preset lower bound and less than a preset upper bound, both fixed fractions of the image width W, are kept.
In the above scheme, the specific method of the third step is as follows:
from the plurality of rectangular detection frames, a target detection frame with a larger area and a centre coordinate close to the image centre is preferred, and a score S is calculated for each rectangular detection frame from its area and the distance of its centre from the image centre, where σ2 is a self-defined coefficient;
the rectangular detection frame with the highest score S is selected as the target to be followed;
it is then detected whether the target to be followed is directly in front of the robot: when the centre of the selected frame deviates from the image centre by more than a set bound, a voice prompt is issued to remind the target to move to the central position, and the rectangular-frame information of the target to be tracked is recorded.
In the above scheme, the specific method of the fourth step is as follows:
the human body tracking function is realized by combining a main tracker MEDIANFLOW with a stable anti-loss auxiliary tracker MOSSE;
(1) tracking state detection
Detecting a tracking target in the updated image frame, setting the tracking state as true when the tracking target is found in the updated image frame, and drawing a tracking frame;
(2) drawing tracking frame
Drawing a tracking frame r on a new image frame F, wherein a main tracker corresponds to a main tracking frame r _ m, and an auxiliary tracker corresponds to an auxiliary tracking frame r _ a; drawing only one tracking frame on the updated image frame, using a main tracking frame r _ m when the main tracker is available, and using an auxiliary tracking frame r _ a when the main tracker is unavailable;
(3) detecting incorrect tracking information and resetting the tracker
Comparing the information of the target to be tracked at the previous moment with the current information calculated by the tracker, the tracking state is judged. The information at the previous moment is represented on the image as a target frame, and the information tracked at the current moment as a tracking frame. The top-left vertex of the target frame in the image has coordinates (x, y), its width is w and its height is h, so the centre of the target frame is (x + w/2, y + h/2). The tracking frame of the moving target drawn by the tracker has top-left vertex (x_r, y_r), width w_r and height h_r, so the centre of the tracking frame is (x_r + w_r/2, y_r + h_r/2). The distance D between the centre of the target frame and the centre of the tracking frame is calculated as
D = sqrt((x + w/2 - x_r - w_r/2)^2 + (y + h/2 - y_r - h_r/2)^2).
When D exceeds a set threshold, the tracker is considered to be in error and is reset.
In the above scheme, the concrete method of the fifth step is as follows:
(1) proximity detection
When the difference between the tracking-frame height h_r and the height of the rectangular frame initially detected by the human body detection algorithm is smaller than a set threshold, the robot state is set to being close to the following target, and the robot does not move forward;
(2) normal tracking
The rotational angular velocity rw of the robot's rotational motion is calculated from the horizontal position of the tracking frame relative to the image centre, where W is the width of the image, σ1 is a self-defined parameter, and rw_max is the preset maximum rotational angular velocity;
the linear velocity lv of the robot is calculated from the size of the tracking frame relative to the target frame, where W is the width of the image, σ2 is a self-defined parameter, and lv_max is the preset maximum linear velocity;
when tracking the height h of the framerWhen the height h between the robot and the target frame is smaller than a micro movement threshold value, only calculating the rotation angular velocity of the robot, and calculating the parameters of a PID controller;
when tracking the height h of the framerAnd when the height h of the target frame is larger than the micro movement threshold, simultaneously calculating the rotation angular velocity and the displacement linear velocity of the robot, and calculating the parameters of the PID controller.
(3) Lost finding
When the abscissa x_r of the top-left vertex of the tracking frame satisfies x_r < k_1, or the tracking frame correspondingly approaches the right border of the image, the current state is judged as lost. The motion state of the robot is then adjusted to pure rotation and the PID controller parameters are calculated: if the control quantity is greater than the direction-control threshold thr_p, the angular velocity is set to rotate counterclockwise; if it is less than the direction-control threshold thr_n, the angular velocity is set to rotate clockwise. The robot thus rotates to retrieve the following target.
Through the technical scheme, the human body target following method based on monocular vision has the following beneficial effects:
1. a specific voice wakes up the robot's following function, simplifying human-machine interaction;
2. suitable human body target detection algorithms are selected and combined, ensuring both the accuracy and the real-time performance of human body target detection;
3. the tracking-system algorithm is optimized so that a human body target can be followed with only one camera, reducing the coupling of the system and increasing its fault tolerance;
4. the tracking strategy is optimized so that a moving human body target is tracked in time and is not lost when it turns;
5. the motion control algorithm is optimized and optimal PID parameters are calculated, improving the rapidity, accuracy and stability of the motion control.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below.
FIG. 1 is a flowchart of a monocular vision-based human target following method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a target detection frame.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention.
The invention provides a human body target following method based on monocular vision, as shown in figure 1, the specific embodiment is as follows:
a human body target following method based on monocular vision comprises the following steps:
step one, starting a following system of the monocular vision robot through voice control, and starting a following function of the robot;
the user carries out voice conversation with the robot, and when the robot recognizes the voice which is set by the system and is related to the following instruction, the following system is started.
Step two, detecting a target directly in front: detecting a plurality of human body targets directly in front of the robot by using a human body recognition algorithm;
(1) obtaining a human detection box using a human recognition algorithm
The robot issues the voice prompt 'please stand in front of me' and captures the area directly in front with a monocular camera; each captured frame is detected, and the posture information of each human body in the frame is extracted to construct a rectangular human body target detection frame, as shown in FIG. 2. The upper-left corner of the human body target detection frame has coordinates (x, y), and the width and height of the detection frame are w and h respectively;
(2) detection frame for rejecting errors
The image captured by the monocular camera has width W and height H. Erroneous detection frames are rejected according to the relative size of the human body detection frames within the captured image, preliminarily screening the human body target detection frames: only those detection frames whose width w is greater than a preset lower bound and less than a preset upper bound, both fixed fractions of the image width W, are kept.
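The following minimal Python sketch illustrates this screening step; the fraction bounds low_frac and high_frac are assumed placeholders for the preset bounds, which are not reproduced here:

```python
def screen_detections(boxes, W, low_frac=0.1, high_frac=0.5):
    """Keep detection boxes (x, y, w, h) whose width w is a plausible
    fraction of the image width W. low_frac and high_frac are assumed
    stand-ins for the preset bounds."""
    return [box for box in boxes if low_frac * W < box[2] < high_frac * W]
```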
Step three, selecting a target to be followed: selecting, from the obtained human body detection frames, the human body target which best meets the requirements as the target to be followed;
from the plurality of rectangular detection frames, a target detection frame with a larger area and a centre coordinate close to the image centre is preferred, and a score S is calculated for each rectangular detection frame from its area and the distance of its centre from the image centre, where σ2 is a self-defined coefficient;
the rectangular detection frame with the highest score S is selected as the target to be followed;
it is then detected whether the target to be followed is directly in front of the robot: when the centre of the selected frame deviates from the image centre by more than a set bound, a voice prompt is issued to remind the target to move to the central position, and the rectangular-frame information of the target to be tracked is recorded.
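A plausible Gaussian-weighted form of the score S, rewarding frame area and penalizing distance from the image centre through the self-defined coefficient σ2, is sketched below; the exact formula and the value sigma2=100.0 are assumptions:

```python
import math

def select_target(boxes, W, H, sigma2=100.0):
    """Score each box (x, y, w, h) by its area, discounted by the
    distance of its centre from the image centre, and return the
    best-scoring box (boxes is assumed non-empty)."""
    def score(box):
        x, y, w, h = box
        d2 = (x + w / 2 - W / 2) ** 2 + (y + h / 2 - H / 2) ** 2
        return w * h * math.exp(-d2 / sigma2 ** 2)
    return max(boxes, key=score)
```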
Step four, setting the state of a tracker: acquiring tracking information of the target to be followed by combining a main tracker and an auxiliary tracker;
the human body tracking function is realized by combining a main tracker MEDIANFLOW with a stable anti-loss auxiliary tracker MOSSE;
(1) tracking state detection
Detecting a tracking target in the updated image frame, setting the tracking state as true when the tracking target is found in the updated image frame, and drawing a tracking frame;
(2) drawing tracking frame
Drawing a tracking frame r on a new image frame F, wherein a main tracker corresponds to a main tracking frame r _ m, and an auxiliary tracker corresponds to an auxiliary tracking frame r _ a; drawing only one tracking frame on the updated image frame, using a main tracking frame r _ m when the main tracker is available, and using an auxiliary tracking frame r _ a when the main tracker is unavailable;
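Both trackers are available in OpenCV's contrib distribution; the sketch below shows one way to arrange the main/auxiliary combination (the helper names are illustrative, and opencv-contrib-python is assumed for the legacy tracker module):

```python
import cv2

def create_trackers(frame, bbox):
    """Initialize the main (MedianFlow) and auxiliary (MOSSE) trackers
    on the recorded target frame bbox = (x, y, w, h)."""
    main = cv2.legacy.TrackerMedianFlow_create()
    aux = cv2.legacy.TrackerMOSSE_create()
    main.init(frame, bbox)
    aux.init(frame, bbox)
    return main, aux

def track(main, aux, frame):
    """Return exactly one tracking frame per updated image: the main
    tracking frame r_m when the main tracker is available, otherwise
    the auxiliary tracking frame r_a (None if both fail)."""
    ok_m, r_m = main.update(frame)
    if ok_m:
        return r_m
    ok_a, r_a = aux.update(frame)
    return r_a if ok_a else None
```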
(3) detecting incorrect tracking information and resetting the tracker
Comparing the information of the target to be tracked at the previous moment with the current information calculated by the tracker, the tracking state is judged. The information at the previous moment is represented on the image as a target frame, and the information tracked at the current moment as a tracking frame. The top-left vertex of the target frame in the image has coordinates (x, y), its width is w and its height is h, so the centre of the target frame is (x + w/2, y + h/2). The tracking frame of the moving target drawn by the tracker has top-left vertex (x_r, y_r), width w_r and height h_r, so the centre of the tracking frame is (x_r + w_r/2, y_r + h_r/2). The distance D between the centre of the target frame and the centre of the tracking frame is calculated as
D = sqrt((x + w/2 - x_r - w_r/2)^2 + (y + h/2 - y_r - h_r/2)^2).
When D exceeds a set threshold, the tracker is considered to be in error and is reset.
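A short sketch of this consistency check; the reset threshold thresh is an assumed parameter:

```python
import math

def tracker_in_error(target_box, track_box, thresh):
    """Distance D between the target-frame centre and the
    tracking-frame centre; when D exceeds thresh, the tracker is
    considered to be in error and should be reset."""
    x, y, w, h = target_box
    xr, yr, wr, hr = track_box
    D = math.hypot((x + w / 2) - (xr + wr / 2),
                   (y + h / 2) - (yr + hr / 2))
    return D > thresh
```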
Step five, motion following control, namely setting a corresponding following strategy according to the relation between the target information and the following information;
(1) proximity detection
When the difference between the tracking-frame height h_r and the height of the rectangular frame initially detected by the human body detection algorithm is smaller than a set threshold, the robot state is set to being close to the following target, and the robot does not move forward;
(2) normal tracking
The rotational angular velocity rw of the robot's rotational motion is calculated from the horizontal position of the tracking frame relative to the image centre, where W is the width of the image, σ1 is a self-defined parameter, and rw_max is the preset maximum rotational angular velocity;
the linear velocity lv of the robot is calculated from the size of the tracking frame relative to the target frame, where W is the width of the image, σ2 is a self-defined parameter, and lv_max is the preset maximum linear velocity;
when tracking the height h of the framerWhen the height h of the target frame is less than a micro movement threshold value 10, only calculating the rotation angular velocity of the robot, and calculating the parameters of a PID controller;
when tracking the height h of the framerAnd when the height h of the target frame is greater than the micro movement threshold value 10, simultaneously calculating the rotation angular velocity and the displacement linear velocity of the robot, and calculating the parameters of the PID controller.
(3) Lost finding
When the abscissa x_r of the top-left vertex of the tracking frame satisfies x_r < k_1, or the tracking frame correspondingly approaches the right border of the image, the current state is judged as lost. The motion state of the robot is then adjusted to pure rotation and the PID controller parameters are calculated: if the control quantity is greater than the direction-control threshold thr_p, the angular velocity is set to rotate counterclockwise, e.g., a rad/s; if it is less than the direction-control threshold thr_n, the angular velocity is set to rotate clockwise, -a rad/s. The robot then rotates to retrieve the following target.
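A sketch of the lost-and-retrieve branch; the right-edge condition is assumed to mirror the left-edge bound k1, and pid_out stands for the computed PID control quantity:

```python
def recovery_angular_velocity(track_box, W, k1, pid_out,
                              thr_p, thr_n, a=0.5):
    """Return None while the tracking frame is away from the image
    borders; otherwise a pure-rotation angular velocity (rad/s):
    +a (counterclockwise) when pid_out exceeds thr_p, -a (clockwise)
    when it falls below thr_n. The bound W - k1 and the magnitude a
    are assumptions."""
    xr, yr, wr, hr = track_box
    if not (xr < k1 or xr + wr > W - k1):
        return None
    if pid_out > thr_p:
        return a
    if pid_out < thr_n:
        return -a
    return 0.0
```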
And step six, updating the following target information, and using the tracking information at the current moment as the target information at the next moment to realize continuous tracking.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (3)

1. A human body target following method based on monocular vision is characterized by comprising the following steps:
step one, starting a following system of the monocular vision robot through voice control, and starting a following function of the robot;
step two, detecting a target directly in front: detecting a plurality of human body targets directly in front of the robot by using a human body recognition algorithm;
step three, selecting a target to be followed: selecting, from the obtained human body detection frames, the human body target which best meets the requirements as the target to be followed;
step four, setting the state of a tracker: acquiring tracking information of the target to be followed by combining a main tracker and an auxiliary tracker;
step five, motion following control, namely setting a corresponding following strategy according to the relation between the target information and the following information;
step six, updating the following target information, and using the tracking information at the current moment as the target information at the next moment to realize continuous tracking;
the specific method of the second step is as follows:
(1) obtaining a human target detection frame using a human recognition algorithm
The robot issues a voice prompt and captures the area directly in front with a monocular camera; each captured frame is detected, and the posture information of each human body is extracted to construct a rectangular human body target detection frame, whose upper-left corner has coordinates (x, y) and whose width and height are w and h respectively;
(2) detection frame for rejecting errors
The width of an image shot by the monocular camera is W, the height of the image is H, wrong detection frames are removed according to the relative size of the detection frames in the shot image, and human body target detection frames are preliminarily screened;
the concrete method of the third step is as follows:
from the plurality of rectangular detection frames, a target detection frame with a larger area and a centre coordinate close to the image centre is preferred, and a score S is calculated for each rectangular detection frame from its area and the distance of its centre from the image centre, where σ2 is a self-defined coefficient;
the rectangular detection frame with the highest score S is selected as the target to be followed;
it is then detected whether the target to be followed is directly in front of the robot: when the centre of the selected frame deviates from the image centre by more than a set bound, a voice prompt is issued to remind the target to move to the central position, and the rectangular-frame information of the target to be tracked is recorded;
the concrete method of the step five is as follows:
(1) proximity detection
when the difference between the tracking-frame height h_r and the height of the rectangular frame initially detected by the human body detection algorithm is smaller than a set threshold, the robot state is set to being close to the following target, and the robot does not move forward;
(2) normal tracking
the rotational angular velocity rw of the robot's rotational motion is calculated from the horizontal position of the tracking frame relative to the image centre, where W is the width of the image, σ1 is a self-defined parameter, and rw_max is the preset maximum rotational angular velocity;
the linear velocity lv of the robot is calculated from the size of the tracking frame relative to the target frame, where W is the width of the image, σ2 is a self-defined parameter, and lv_max is the preset maximum linear velocity;
when tracking the height h of the framerWhen the height h between the robot and the target frame is smaller than a micro movement threshold value, only calculating the rotation angular velocity of the robot, and calculating the parameters of a PID controller;
when tracking the height h of the framerWhen the height h of the target frame is larger than the micro moving threshold, simultaneously calculating the rotation angular velocity and the displacement linear velocity of the robot, and calculating the parameters of a PID controller;
(3) lost finding
when the abscissa x_r of the top-left vertex of the tracking frame satisfies x_r < k_1, or the tracking frame correspondingly approaches the right border of the image, the current state is judged as lost; the motion state of the robot is then adjusted to pure rotation and the PID controller parameters are calculated: if the control quantity is greater than the direction-control threshold thr_p, the angular velocity is set to rotate counterclockwise; if it is less than the direction-control threshold thr_n, the angular velocity is set to rotate clockwise, whereupon the robot rotates to retrieve the following target.
2. The human body target following method based on monocular vision according to claim 1, wherein only those human body target detection frames whose width w is greater than a preset lower bound and less than a preset upper bound, both fixed fractions of the image width W, are kept.
3. The human body target following method based on the monocular vision as claimed in claim 1, wherein the specific method of the fourth step is as follows:
the human body tracking function is realized by combining a main tracker MEDIANFLOW with a stable anti-loss auxiliary tracker MOSSE;
(1) tracking state detection
Detecting a tracking target in the updated image frame, setting the tracking state as true when the tracking target is found in the updated image frame, and drawing a tracking frame;
(2) drawing tracking frame
Drawing a tracking frame r on a new image frame F, wherein a main tracker corresponds to a main tracking frame r _ m, and an auxiliary tracker corresponds to an auxiliary tracking frame r _ a; drawing only one tracking frame on the updated image frame, using a main tracking frame r _ m when the main tracker is available, and using an auxiliary tracking frame r _ a when the main tracker is unavailable;
(3) detecting incorrect tracking information and resetting the tracker
comparing the information of the target to be tracked at the previous moment with the current information calculated by the tracker, and judging the tracking state; the information at the previous moment is represented on the image as a target frame, and the information tracked at the current moment as a tracking frame; the top-left vertex of the target frame in the image has coordinates (x, y), its width is w and its height is h, so the centre of the target frame is (x + w/2, y + h/2); the tracking frame of the moving target drawn by the tracker has top-left vertex (x_r, y_r), width w_r and height h_r, so the centre of the tracking frame is (x_r + w_r/2, y_r + h_r/2); the distance D between the centre of the target frame and the centre of the tracking frame is calculated as
D = sqrt((x + w/2 - x_r - w_r/2)^2 + (y + h/2 - y_r - h_r/2)^2);
when D exceeds a set threshold, the tracker is considered to be in error and needs to be reset.
CN202010089860.2A 2020-02-13 2020-02-13 Human body target following method based on monocular vision Active CN111308993B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010089860.2A CN111308993B (en) 2020-02-13 2020-02-13 Human body target following method based on monocular vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010089860.2A CN111308993B (en) 2020-02-13 2020-02-13 Human body target following method based on monocular vision

Publications (2)

Publication Number Publication Date
CN111308993A CN111308993A (en) 2020-06-19
CN111308993B true CN111308993B (en) 2022-04-01

Family

ID=71148974

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010089860.2A Active CN111308993B (en) 2020-02-13 2020-02-13 Human body target following method based on monocular vision

Country Status (1)

Country Link
CN (1) CN111308993B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111862154B (en) * 2020-07-13 2024-03-01 中移(杭州)信息技术有限公司 Robot vision tracking method and device, robot and storage medium
CN111881827B (en) * 2020-07-28 2022-04-26 浙江商汤科技开发有限公司 Target detection method and device, electronic equipment and storage medium
CN112207821B (en) * 2020-09-21 2021-10-01 大连遨游智能科技有限公司 Target searching method of visual robot and robot
CN112132864B (en) * 2020-09-21 2024-04-09 大连遨游智能科技有限公司 Vision-based robot following method and following robot
CN112880557B (en) * 2021-01-08 2022-12-09 武汉中观自动化科技有限公司 Multi-mode tracker system
CN113221754A (en) * 2021-05-14 2021-08-06 深圳前海百递网络有限公司 Express waybill image detection method and device, computer equipment and storage medium


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010073432A1 (en) * 2008-12-24 2010-07-01 株式会社ソニー・コンピュータエンタテインメント Image processing device and image processing method
CN103942672A (en) * 2014-05-19 2014-07-23 北京玛施德利科技有限公司 Tracking device for logistics tracking system as well as application method of tracking device and logistics tracking system
WO2017147792A1 (en) * 2016-03-01 2017-09-08 SZ DJI Technology Co., Ltd. Methods and systems for target tracking
CN108875683A (en) * 2018-06-30 2018-11-23 北京宙心科技有限公司 Robot vision tracking method and system
CN108646761A (en) * 2018-07-12 2018-10-12 郑州大学 Robot indoor environment exploration, avoidance and method for tracking target based on ROS
CN109741369A (en) * 2019-01-03 2019-05-10 北京邮电大学 A kind of method and system for robotic tracking target pedestrian

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Navigation method for a reconnaissance robot based on visual target tracking; Bao Jiatong et al.; Journal of Southeast University (Natural Science Edition); 2012-05-20 (No. 03); full text *

Also Published As

Publication number Publication date
CN111308993A (en) 2020-06-19


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant