WO2017088720A1 - Method and apparatus for planning an optimal following path, and computer storage medium - Google Patents

Method and apparatus for planning an optimal following path, and computer storage medium

Info

Publication number
WO2017088720A1
Authority
WO
WIPO (PCT)
Prior art keywords
following
follow
followed
optimal
output value
Prior art date
Application number
PCT/CN2016/106689
Other languages
English (en)
French (fr)
Inventor
安宁
廖方波
Original Assignee
纳恩博(北京)科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 纳恩博(北京)科技有限公司
Publication of WO2017088720A1

Links

Images

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 — Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/20 — Instruments for performing navigational calculations
    • G — PHYSICS
    • G05 — CONTROLLING; REGULATING
    • G05D — SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 — Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D 1/02 — Control of position or course in two dimensions

Definitions

  • the present invention relates to the field of human-computer interaction technologies, and in particular, to a method, an apparatus, and a computer storage medium for planning an optimal following path.
  • robots are gradually moving from the structured environment of the factory into the environments of human daily life, such as hospitals, offices, homes, and other cluttered and uncontrolled environments. It is expected that in the future robots will not only be able to complete work on their own, but will also cooperate with people to complete tasks or complete tasks under human guidance. Such robots are collectively referred to as service robots. According to the application environment, service robots can be divided into robots that operate in special environments (such as counter-terrorism, disaster relief, and exploration and surveying) and robots that serve humans (such as medical rehabilitation, housekeeping services, and education and entertainment).
  • Service robot human-computer interaction involves multiple technical fields.
  • Target following is a hot topic in human-robot interaction research. It not only has broad application demand in the service robot field, for example robotic wheelchairs that help disabled people and patients move, or robots that carry luggage and heavy objects behind their owner, but also has application value for reconnaissance military robots.
  • While following a target, the robot may encounter obstacles, or the target may leave the tracking field of view, making it impossible to effectively follow the specified target.
  • an embodiment of the invention provides a method for planning an optimal following path, which includes:
  • when a following subject follows a followed object whose position changes dynamically, determining, according to following constraints, the different following output values with which the following subject can follow to the physical position of the followed object, and determining an optimal following output value among them;
  • generating a corresponding motion control sequence according to the optimal following output value, so as to plan the optimal following path.
  • in a method for planning an optimal following path, determining, according to the following constraints, the different following output values with which the following subject can follow to the physical position of the followed object includes: using the environmental constraints of the following subject's surroundings and the motion model of the following subject as the following constraints, determining the different following output values with which the following subject can follow to the physical position of the followed object.
  • in a method for planning an optimal following path, determining, according to the following constraints, the different following output values with which the following subject can follow to the followed object includes: determining, according to the following constraints, at least one of a following-distance cost, a following-time cost, and an attitude-change cost during following with which the following subject can follow to the physical position of the followed object as the following output value.
  • determining the different following output values with which the following subject can follow to the physical position of the followed object includes: determining those following output values based on the environmental constraints in the local map in which the followed object is located.
  • in a method for planning an optimal following path, generating a corresponding motion control sequence according to the optimal following output value to plan the optimal following path includes: generating the corresponding motion control sequence according to the optimal following output value so as to plan a following path that at least avoids interference with environmental obstacles.
  • in a method for planning an optimal following path, before determining, according to the following constraints, the different following output values with which the following subject can follow to the physical position of the followed object, the method further includes: setting a desired relative pose of the following subject to the followed object.
  • determining the different following output values with which the following subject can follow to the physical position of the followed object then includes: determining those following output values according to the following constraints and the difference between the pose of the following subject detected in real time and the desired relative pose of the following subject to the followed object.
  • when the followed object leaves the field of view of the following subject, the method further includes: determining the last position of the followed object before it was lost, and using that last position as the navigation target point;
  • determining, according to the following constraints, the different following output values with which the following subject can follow to the followed object, and determining the optimal following output value among them, then includes: determining, according to an established global map and under the constraints of that global map, the different following output values with which the following subject can follow to the navigation target point, and determining the optimal following output value among them.
  • determining in real time, according to a description of the followed object's features, the image region in which the followed object is located on an image includes: acquiring, through a depth camera, a depth image formed of the followed target, and determining in real time, according to the description of the followed object's features, the image region in which the followed object is located on the depth image.
  • tracking the physical position of the followed object according to the feature points of the followed object within the image region includes: tracking the physical position of the followed object according to the feature points of the followed object within the image region and the depth information of the depth image.
  • the method further includes: acquiring the different feature points with which the followed object is followed under different poses, and building a feature point set of the followed object accordingly, so as to maintain a high degree of recognition of the followed object.
  • an embodiment of the invention further provides a following device, which includes:
  • a processor configured to determine, according to following constraints, the different following output values with which the following subject can follow to the physical position of the followed object, and to determine an optimal following output value among them; and to generate a corresponding motion control sequence according to the optimal following output value so as to plan the optimal following path;
  • a controller configured to control following of the followed object according to the optimal following path planned by the processor.
  • the processor is configured to determine, using the environmental constraints of the following subject's surroundings and the motion model of the following subject as the following constraints, the different following output values with which the following subject can follow to the physical position of the followed object; wherein the following output value includes at least one of: a following-distance cost, a following-time cost, and an attitude-change cost during following.
  • the processor is configured to determine, based on the environmental constraints in the local map in which the followed object is located, the different following output values with which the following subject can follow to the physical position of the followed object.
  • the device further includes an image acquisition unit;
  • the image acquisition unit is configured to capture images while the followed object is being followed;
  • the processor is configured to determine in real time, according to a description of the followed object's features, the image region in which the followed object is located on the image, and to track the physical position of the followed object according to the feature points of the followed object within the image region; to determine, according to the following constraints, the different following output values to the physical position of the followed object and determine an optimal following output value among them; and to generate a corresponding motion control sequence according to the optimal following output value so as to plan the optimal following path;
  • the controller is configured to control following of the followed object according to the optimal following path planned by the processor.
  • the processor is configured to determine, based on the environmental constraints in the local map in which the following subject is located, the different following output values with which the following subject can follow to the physical position of the followed object.
  • the processor is configured to generate the motion control sequence from the motion control commands corresponding to the optimal following output value.
  • the processor is configured to preset a desired relative pose of the following subject to the followed object.
  • the processor is configured to determine the different following output values with which the following subject can follow to the physical position of the followed object according to the following constraints and the difference between the pose of the following subject detected in real time and the desired relative pose of the following subject to the followed object.
  • the processor is configured to, when the followed object leaves the field of view of the following subject, determine the last position of the followed object before it was lost and use that position as the navigation target point; and to determine, according to an established global map and under the constraints of that global map, the different following output values with which the following subject can follow to the navigation target point, and determine the optimal following output value among them.
  • the processor is configured to acquire, through a depth camera, a depth image formed of the followed target, and to determine in real time, according to the description of the followed object's features, the image region in which the followed object is located on the depth image.
  • the processor is configured to track the physical position of the followed object according to the feature points of the followed object within the image region and the depth information of the depth image.
  • the processor is configured to acquire the different feature points with which the followed object is followed under different poses, and to build a feature point set of the followed object accordingly, so as to maintain a high degree of recognition of the followed object.
  • an embodiment of the invention further provides a computer storage medium storing computer-executable instructions, the computer-executable instructions being configured to execute the method for planning an optimal following path according to the embodiments of the invention.
  • the technical solutions of the embodiments of the present invention have the following advantage: since the following output values with which the following subject can follow to the physical position of the followed object can be determined according to the following constraints, the optimal following output value determined among them, and a corresponding motion control sequence generated from it to plan the optimal following path, the specified target can be followed effectively.
  • FIG. 1 is a schematic flowchart of a method for planning an optimal following path according to Embodiment 1 of the present invention;
  • FIG. 2 is a schematic flowchart of a method for planning an optimal following path according to Embodiment 2 of the present invention;
  • FIG. 3 is a schematic flowchart of a method for planning an optimal following path according to Embodiment 3 of the present invention;
  • FIG. 4 is a schematic flowchart of a method for planning an optimal following path according to Embodiment 4 of the present invention;
  • FIG. 5 is a schematic flowchart of a method for planning an optimal following path according to Embodiment 5 of the present invention;
  • FIG. 6 is a schematic structural diagram of a following device according to Embodiment 6 of the present invention.
  • in the following embodiments, the present invention is schematically illustrated by taking a robot following a person as an example: the robot acts as the following subject, and the person acts as the followed object, i.e., the following target.
  • however, the present invention is not limited to a robot following a person; it can also be applied to a robot following other non-human targets. Inspired by the embodiments of the invention, those of ordinary skill in the art can implement such variants without creative effort, and the following embodiments will not describe this again.
  • FIG. 1 is a schematic flowchart of a method for planning an optimal following path according to Embodiment 1 of the present invention; as shown in FIG. 1, the embodiment may include at least:
  • determining, according to following constraints, the following output values with which the robot can follow to the physical position of the target person, which may specifically include: using the environmental constraints of the robot's surroundings and the motion model of the following subject as the following constraints, determining the following output values with which the robot can follow to the physical position of the target person.
  • different following output values can be used to represent the cost for the robot to follow to the physical position of the target person along different paths.
  • the environmental constraints of the robot's surroundings include those of the global map and of the local map in which the followed object is located, for example in a home environment; for an outdoor environment, a global map is difficult to construct, so the environment of the followed object is constrained by the local map only.
  • the motion model of the following subject may include non-omnidirectional motion models and omnidirectional motion models.
  • for a non-omnidirectional motion model, such as a differential wheel train, the path of the following subject can be based on the direction perpendicular to the wheel axle when determining the following output value; for an omnidirectional motion model, such as Mecanum wheels, motion is essentially omnidirectional, and paths of the following subject in any direction can be considered.
  • the environmental constraints of the robot may also include a thickened map, such as a thickened local map or global map; thickening may, for example, mean inflating the contours of objects, and during tracking the inflated contours serve as a reference for the safety distance.
  • using the environmental constraints of the target person's surroundings and the motion model of the robot as the following constraints, determining the different following output values with which the robot can follow to the physical position of the target person includes: determining those following output values based on the environmental constraints in the local map in which the robot is located and the motion model of the robot.
  • taking the differential drive model and the Mecanum motion model as examples: the former cannot move omnidirectionally while the latter can, so when determining the following output value the corresponding following speed and following angle may be completely different; in terms of smoothness, for example, the former's paths may be less smooth and the latter's smoother.
  • the following output value may include, but is not limited to, one of, or any combination of, a following-distance cost, a following-time cost, and an attitude-change cost during following.
  • the following-distance cost is used to judge how far the robot travels to reach the target;
  • the following-time cost is used to judge how long it takes to reach the target.
  • specifically, in step S101, determining, according to the following constraints, the following output values with which the robot can follow to the target person may include: determining, according to the following constraints, at least one of the following-distance cost, following-time cost, and attitude-change cost with which the robot can follow to the physical position of the target person as the following output value.
  • each cost can be assigned a different weight, and the final following output value is computed by combining the weights with the corresponding costs.
  • when determining the optimal following output value, a cost function may be defined; the outputs of the cost function may be the distance to reach the target, the time to reach the target, and the energy consumed to reach the target, while its inputs may be the environmental constraints of the robot's surroundings, the robot's motion model, or both.
  • the optimal following output value means the robot reaches the target person at minimal cost, and a motion control sequence is generated corresponding to it.
  • the optimal following output value corresponds to motion control commands; each motion control command corresponds to linear-velocity and angular-velocity control over time, and the corresponding motion control sequence is generated from this series of motion control commands.
  • FIG. 2 is a schematic flowchart of a method for planning an optimal following path according to Embodiment 2 of the present invention; as shown in FIG. 2, the embodiment may include at least:
  • the desired relative pose of the robot to the target person includes the pose of the robot relative to the target person when the robot has moved up to the target person, for example behind the target person, in front, or to the side, as well as the distance and angle to the target person.
  • determining, according to the following constraints, the following output values with which the robot can follow to the physical position of the target person includes: using the environmental constraints of the robot's surroundings and the motion model of the robot as the following constraints, determining the following output values with which the robot can follow to the physical position of the target person.
  • determining, according to the following constraints, the following output values with which the robot can follow to the target person includes: determining, according to the following constraints, at least one of the following-distance cost, following-time cost, and attitude-change cost with which the robot can follow to the physical position of the target person as the following output value.
  • using the environmental constraints of the robot's surroundings and the motion model of the robot as the following constraints, determining the different following output values with which the robot can follow to the physical position of the target person includes: determining those following output values based on the environmental constraints in the local map in which the robot is located.
  • the target person is removed before the local map is constructed; that is, the target person is not used as an element of the local map.
  • when constructing the local map, contour analysis, color-feature analysis, pixel analysis, and the like may be performed on images captured by the camera to determine the feature points of the environment.
  • generating a corresponding motion control sequence according to the optimal following output value to plan the optimal following path includes: generating the corresponding motion control sequence according to the optimal following output value so as to plan a following path that at least avoids interference with environmental obstacles.
  • probabilistic roadmap algorithms, the A* algorithm, and the like can be combined to plan paths that avoid interference with environmental obstacles.
  • FIG. 3 is a schematic flowchart of a method for planning an optimal following path according to Embodiment 3 of the present invention; as shown in FIG. 3, the embodiment may include at least:
  • S301: when the followed target person leaves the robot's field of view, determining the last position of the target person before the loss, and using that last position as the navigation target point.
  • while the robot follows the target, every position point is recorded; therefore, when the target person leaves the robot's field of view, the last position of the target person can be used as the navigation target point, and tracking similar to that of a static target can be performed.
  • a global map is constructed, for example in an indoor environment, mainly by having the robot move randomly indoors or by manually controlling the robot to move indoors, collecting the feature points of the whole indoor environment.
  • the global map can be updated based on the real-time local map; the updated global map can then be used as the constraint to determine the following output values with which the robot can follow to the navigation target point, and to determine the optimal following output value among them.
  • determining, according to the following constraints, the following output values with which the robot can follow to the physical position of the target person includes: using the environmental constraints of the robot's surroundings and the motion model of the robot as the following constraints, determining the following output values with which the robot can follow to the physical position of the target person.
  • in a method for planning an optimal following path according to an embodiment of the present invention, determining, according to the following constraints, the following output values with which the robot can follow to the target person includes: determining, according to the following constraints, at least one of the following-distance cost, following-time cost, and attitude-change cost with which the robot can follow to the physical position of the target person as the following output value.
  • using the environmental constraints of the robot's surroundings and the motion model of the robot as the following constraints, determining the different following output values with which the robot can follow to the physical position of the target person includes: determining those following output values based on the environmental constraints in the local map in which the robot is located.
  • generating a corresponding motion control sequence according to the optimal following output value to plan the optimal following path includes: generating the corresponding motion control sequence according to the optimal following output value so as to plan a following path that avoids interference with environmental obstacles.
  • FIG. 4 is a schematic flowchart of a method for planning an optimal following path according to Embodiment 4 of the present invention; as shown in FIG. 4, the embodiment may include at least:
  • S401: determining in real time, according to a description of the target person's features, the image region in which the target person is located on the image.
  • the description of the target person's features includes, but is not limited to, salient bodily features such as the face, the contour of the human body, the head, and the human skeleton.
  • when the image region has been determined, it may be localized with an obvious marker such as a rectangular frame.
  • this rectangular frame may enclose the target person's whole body or part of it, such as the face or the head.
  • in step S401, determining in real time, according to the description of the target person's features, the image region in which the target person is located on the image may include at least: acquiring, through a depth camera, a depth image formed of the followed target; and determining in real time, according to the description of the target person's features, the image region in which the target person is located on the depth image.
  • since subsequent tracking requires determining the distance between the robot and the target person, the depth data in the depth image information makes this distance determination possible, and the depth image also contains the red-green-blue (RGB) information from which the target person's image region can be determined. Therefore, when determining the image region of the target person, the depth image can be used directly, without separately using an ordinary two-dimensional image.
  • S402: tracking the physical position of the target person according to the feature points of the target person within the image region.
  • tracking the physical position of the target person according to the feature points of the target person within the image region includes: tracking the physical position of the target person according to the feature points of the target person within the image region and the depth information of the depth image.
  • the feature points of the target person may be, but are not limited to, facial features, human skeleton features, and the like.
  • these feature points effectively serve as the identification ID of the target person.
  • taking facial features as an example, since this embodiment is based on depth images, the facial features include both two-dimensional face data and depth information; they are therefore richer than traditional purely two-dimensional face data and more accurate for locating the target person's physical position.
  • the target person's feature points are maintained throughout the tracking process, ensuring the effectiveness of the tracking.
  • statistical processing may be performed on the point cloud density of the target region to obtain an average depth, which is the physical distance from the robot to the target person at the current moment; in other words, it determines the target person's physical position.
  • the average depth can be computed from all feature points of the target region, or from a selected subset of feature points with higher density.
  • steps S403 and S404 of this embodiment are not described again here; for details, refer to the embodiments shown in FIG. 1, FIG. 2, and FIG. 3.
  • FIG. 5 is a schematic flowchart of a method for planning an optimal following path according to Embodiment 5 of the present invention; as shown in FIG. 5, the embodiment may include at least:
  • S501: determining in real time, according to a description of the target person's features, the image region in which the target person is located on the image.
  • determining in real time, according to the description of the target person's features, the image region in which the target person is located on the image includes: acquiring, through a depth camera, a depth image formed of the followed target; and determining in real time, according to the description of the target person's features, the image region in which the target person is located on the depth image.
  • the feature points of the target person may be, but are not limited to, facial features, human skeleton features, and the like; these feature points effectively serve as the identification ID of the target person.
  • taking facial features as an example, the target person's facial features include two-dimensional face data and may also include depth information.
  • the feature point set of the target person is built from the different feature points obtained while following the target person under different poses.
  • criteria can be set to prevent the points added to the feature point set from over-expanding it; if adding a feature point to the set would provide only a small improvement, the point can be discarded rather than added, since the feature set should not grow too large.
  • tracking the physical position of the target person according to the feature points of the target person within the image region includes: tracking the physical position of the target person according to the feature points of the target person within the image region and the depth information of the depth image.
  • steps S504 and S505 of this embodiment are not described again here; for details, refer to the embodiments shown in FIG. 1, FIG. 2, and FIG. 3.
  • FIG. 6 is a schematic structural diagram of a following device according to Embodiment 6 of the present invention. As shown in FIG. 6, the device may include at least a processor 602 and a controller 603, the controller 603 being communicatively connected to the processor 602; wherein:
  • the processor 602 is configured to determine, according to following constraints, the different following output values with which the following subject can follow to the physical position of the followed object, and to determine an optimal following output value among them; and to generate a corresponding motion control sequence according to the optimal following output value so as to plan the optimal following path;
  • the controller 603 is configured to control following of the followed object according to the optimal following path planned by the processor 602.
  • the processor 602 is configured to determine, using the environmental constraints of the following subject's surroundings and the motion model of the following subject as the following constraints, the different following output values with which the following subject can follow to the physical position of the followed object; the following output value includes at least one of: a following-distance cost, a following-time cost, and an attitude-change cost during following.
  • the processor 602 is configured to determine, based on the environmental constraints in the local map in which the followed object is located, the different following output values with which the following subject can follow to the physical position of the followed object.
  • the following device further includes an image acquisition unit 601; the image acquisition unit 601 is communicatively connected to the processor 602 and is configured to capture images while the target person is being followed;
  • the image acquisition unit 601 may be, but is not limited to, a depth camera, as long as depth image information can be obtained.
  • the processor 602 is configured to determine in real time, according to a description of the target person's features, the image region in which the target person is located on the image, and to track the physical position of the target person according to the feature points of the target person within the image region; the processor 602 is further configured to determine, according to the following constraints, the following output values to the physical position of the target person and determine an optimal following output value among them; the processor 602 is further configured to generate a corresponding motion control sequence according to the optimal following output value so as to plan the optimal following path;
  • the controller 603 is configured to control following of the target person according to the optimal following path planned by the processor 602.
  • when determining the following output values to the physical position of the target person according to the following constraints and determining the optimal following output value among them, the processor 602 may be further configured to use the environmental constraints of the robot's surroundings and the motion model of the robot as the following constraints to determine the following output values with which the robot can follow to the physical position of the target person.
  • when determining the following output values to the physical position of the target person according to the following constraints and determining the optimal following output value among them, the processor 602 may be further configured to determine, according to the following constraints, one of, or any combination of, the following-distance cost, following-time cost, and attitude-change cost with which the robot can follow to the physical position of the target person.
  • when determining the following output values to the physical position of the target person according to the following constraints and determining the optimal following output value among them, the processor 602 may be further configured to determine, based on the environmental constraints in the local map in which the robot is located, the different following output values with which the robot can follow to the physical position of the target person.
  • when generating the corresponding motion control sequence according to the optimal following output value to plan the optimal following path, the processor 602 may be further configured to generate the corresponding motion control sequence according to the optimal following output value so as to plan a following path that at least avoids interference with environmental obstacles.
  • the processor 602 is further configured to preset a desired relative pose of the robot to the target person.
  • when determining the following output values to the physical position of the target person according to the following constraints and determining the optimal following output value among them, the processor 602 may be further configured to determine the following output values with which the robot can follow to the physical position of the target person according to the following constraints and the difference between the pose of the robot detected in real time and the desired relative pose of the robot to the target person.
  • the processor 602 is further configured to determine, when the followed target person leaves the robot's field of view, the last position of the target person before the loss and to use that position as the navigation target point; and may be further configured to determine, according to an established global map and under the constraints of that global map, the following output values with which the robot can follow to the navigation target point, and to determine the optimal following output value among them.
  • when determining in real time the image region in which the target person is located on the image according to the description of the target person's features, the processor 602 may be further configured to acquire, through a depth camera, a depth image formed of the followed target, and to determine in real time, according to the description of the target person's features, the image region in which the target person is located on the depth image.
  • when tracking the physical position of the target person according to the feature points of the target person within the image region, the processor 602 may be further configured to track the physical position of the target person according to the feature points of the target person within the image region and the depth information of the depth image.
  • when tracking the physical position of the target person according to the feature points of the target person within the image region, the processor 602 may be further configured to acquire the different feature points with which the target person is followed under different poses, and to build a feature point set of the target person accordingly, so as to maintain a high degree of recognition of the target person.
  • in practical applications, the following device may be implemented as a following machine; the processor 602 may be implemented by a central processing unit (CPU), a digital signal processor (DSP), a microcontroller unit (MCU), or a field-programmable gate array (FPGA); the image acquisition unit 601 may be implemented by a camera (for example, a depth camera); the controller 603 may be implemented by a mobile platform: the mobile platform of a ground following device may be a motion chassis, and the mobile platform of an aerial following device (such as a drone) may be rotors, and so on.
  • the device embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e., they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solutions of the embodiments, which those of ordinary skill in the art can understand and implement without creative effort.
  • since the technical solutions of the embodiments of the present invention can determine, according to the following constraints, the following output values with which the following subject can follow to the physical position of the followed object, determine the optimal one among them, and generate a corresponding motion control sequence from the optimal following output value to plan the optimal following path, the specified target can be followed effectively.

Abstract

A method and apparatus for planning an optimal following path, and a computer storage medium. The method includes: when a following subject follows a followed object whose position changes dynamically, determining, according to following constraints, the following output values with which the following subject can follow to the physical position of the followed object, and determining an optimal following output value among them; and generating a corresponding motion control sequence according to the optimal following output value, so as to plan the optimal following path.

Description

Method and apparatus for planning an optimal following path, and computer storage medium — Technical Field
The present invention relates to the field of human-computer interaction technologies, and in particular to a method and apparatus for planning an optimal following path, and a computer storage medium.
Background
With the rapid development of science and technology, the continuous improvement of people's living standards, the growing respect for life, the increasingly severe problems of aging societies, and the deepening of the new military transformation, it has become an inevitable trend for robots to move into industry, society, the home, and the battlefield to serve humanity.
Robots are gradually moving from the structured environment of the factory into the environments of human daily life, such as hospitals, offices, homes, and other cluttered and uncontrolled environments. It is expected that future robots will not only complete work autonomously but will also cooperate with people to complete tasks or complete tasks under human guidance; such robots are collectively referred to as service robots. By application environment, service robots can further be divided into robots operating in special environments (e.g., counter-terrorism and riot control, rescue and disaster relief, exploration and surveying) and robots serving humans (e.g., medical rehabilitation, housekeeping services, education and entertainment).
Compared with traditional industrial robots, service robots have a wider range of applications, but the environments they face are more complex and highly uncertain, posing great challenges to the intelligence, adaptability, and flexibility of robots.
Human-robot interaction for service robots involves multiple technical fields. Target following is a hot topic in human-robot interaction research; it not only has broad application demand in the service robot field, e.g., robotic wheelchairs that help disabled people and patients move and robots that carry luggage and heavy objects behind their owner, but also has application value for reconnaissance military robots. While following a target, a robot may encounter obstacles, or the target may leave the tracking field of view, making it impossible to effectively follow the specified target.
Summary of the Invention
The purpose of the embodiments of the present invention is to provide a method and apparatus for planning an optimal following path, and a computer storage medium, so as to solve or mitigate one or all of the above problems in the prior art.
The technical solutions adopted by the embodiments of the present invention are as follows:
An embodiment of the present invention provides a method for planning an optimal following path, including:
when a following subject follows a followed object whose position changes dynamically, determining, according to following constraints, the different following output values with which the following subject can follow to the physical position of the followed object, and determining an optimal following output value among them;
generating a corresponding motion control sequence according to the optimal following output value, so as to plan the optimal following path.
In a method for planning an optimal following path according to an embodiment of the present invention, determining, according to the following constraints, the different following output values with which the following subject can follow to the physical position of the followed object includes:
using the environmental constraints of the following subject's surroundings and the motion model of the following subject as the following constraints, determining the different following output values with which the following subject can follow to the physical position of the followed object.
In a method for planning an optimal following path according to an embodiment of the present invention, determining, according to the following constraints, the different following output values with which the following subject can follow to the followed object includes:
determining, according to the following constraints, at least one of a following-distance cost, a following-time cost, and an attitude-change cost during following with which the following subject can follow to the physical position of the followed object as the following output value.
In a method for planning an optimal following path according to an embodiment of the present invention, using the environmental constraints of the following subject's surroundings and the motion model of the following subject as the following constraints, determining the different following output values with which the following subject can follow to the physical position of the followed object includes: determining, based on the environmental constraints in the local map in which the followed object is located, the different following output values with which the following subject can follow to the physical position of the followed object.
In a method for planning an optimal following path according to an embodiment of the present invention, generating a corresponding motion control sequence according to the optimal following output value to plan the optimal following path includes: generating the corresponding motion control sequence according to the optimal following output value so as to plan a following path that at least avoids interference with environmental obstacles.
In a method for planning an optimal following path according to an embodiment of the present invention, before determining, according to the following constraints, the different following outputs with which the following subject can follow to the physical position of the followed object, the method further includes: setting a desired relative pose of the following subject to the followed object.
In a method for planning an optimal following path according to an embodiment of the present invention, determining the different following output values with which the following subject can follow to the physical position of the followed object includes: determining those following output values according to the following constraints and the difference between the pose of the following subject detected in real time and the desired relative pose of the following subject to the followed object.
In a method for planning an optimal following path according to an embodiment of the present invention, when the followed object leaves the field of view of the following subject, the method further includes: determining the last position of the followed object before it was lost, and using that last position as the navigation target point;
determining, according to the following constraints, the different following output values with which the following subject can follow to the followed object, and determining the optimal following output value among them, then includes: determining, according to an established global map and under the constraints of that global map, the different following output values with which the following subject can follow to the navigation target point, and determining the optimal following output value among them.
A method for planning an optimal following path according to an embodiment of the present invention further includes:
determining in real time, according to a description of the followed object's features, the image region in which the followed object is located on an image;
tracking the physical position of the followed object according to the feature points of the followed object within the image region.
In a method for planning an optimal following path according to an embodiment of the present invention, determining in real time, according to the description of the followed object's features, the image region in which the followed object is located on the image includes:
acquiring, through a depth camera, a depth image formed of the followed target;
determining in real time, according to the description of the followed object's features, the image region in which the followed object is located on the depth image.
In a method for planning an optimal following path according to an embodiment of the present invention, tracking the physical position of the followed object according to the feature points of the followed object within the image region includes: tracking the physical position of the followed object according to the feature points of the followed object within the image region and the depth information of the depth image.
In a method for planning an optimal following path according to an embodiment of the present invention, the method further includes: acquiring the different feature points with which the followed object is followed under different poses, and building a feature point set of the followed object accordingly, so as to maintain a high degree of recognition of the followed object.
An embodiment of the present invention further provides a following device, including:
a processor configured to determine, according to following constraints, the different following output values with which the following subject can follow to the physical position of the followed object, and to determine an optimal following output value among them; and to generate a corresponding motion control sequence according to the optimal following output value so as to plan the optimal following path;
a controller configured to control following of the followed object according to the optimal following path planned by the processor.
In a following device according to an embodiment of the present invention, the processor is configured to determine, using the environmental constraints of the following subject's surroundings and the motion model of the following subject as the following constraints, the different following output values with which the following subject can follow to the physical position of the followed object; the following output value includes at least one of: a following-distance cost, a following-time cost, and an attitude-change cost during following.
In a following device according to an embodiment of the present invention, the processor is configured to determine, based on the environmental constraints in the local map in which the followed object is located, the different following output values with which the following subject can follow to the physical position of the followed object.
In a following device according to an embodiment of the present invention, the device further includes an image acquisition unit;
the image acquisition unit is configured to capture images while the followed object is being followed;
the processor is configured to determine in real time, according to a description of the followed object's features, the image region in which the followed object is located on the image, and to track the physical position of the followed object according to the feature points of the followed object within the image region; to determine, according to the following constraints, the different following output values to the physical position of the followed object and determine an optimal following output value among them; and to generate a corresponding motion control sequence according to the optimal following output value so as to plan the optimal following path;
the controller is configured to control following of the followed object according to the optimal following path planned by the processor.
In a following device according to an embodiment of the present invention, the processor is configured to determine, based on the environmental constraints in the local map in which the following subject is located, the different following output values with which the following subject can follow to the physical position of the followed object.
In a following device according to an embodiment of the present invention, the processor is configured to generate the motion control sequence from the motion control commands corresponding to the optimal following output value.
In a following device according to an embodiment of the present invention, the processor is configured to preset a desired relative pose of the following subject to the followed object.
In a following device according to an embodiment of the present invention, the processor is configured to determine the different following output values with which the following subject can follow to the physical position of the followed object according to the following constraints and the difference between the pose of the following subject detected in real time and the desired relative pose of the following subject to the followed object.
In a following device according to an embodiment of the present invention, the processor is configured to, when the followed object leaves the field of view of the following subject, determine the last position of the followed object before it was lost and use that position as the navigation target point; and to determine, according to an established global map and under the constraints of that global map, the different following output values with which the following subject can follow to the navigation target point, and to determine the optimal following output value among them.
In a following device according to an embodiment of the present invention, the processor is configured to acquire, through a depth camera, a depth image formed of the followed target, and to determine in real time, according to the description of the followed object's features, the image region in which the followed object is located on the depth image.
In a following device according to an embodiment of the present invention, the processor is configured to track the physical position of the followed object according to the feature points of the followed object within the image region and the depth information of the depth image.
In a following device according to an embodiment of the present invention, the processor is configured to acquire the different feature points with which the followed object is followed under different poses, and to build a feature point set of the followed object accordingly, so as to maintain a high degree of recognition of the followed object.
An embodiment of the present invention further provides a computer storage medium storing computer-executable instructions, the computer-executable instructions being configured to execute the method for planning an optimal following path according to the embodiments of the present invention.
The technical solutions of the embodiments of the present invention have the following advantage: since the following output values with which the following subject can follow to the physical position of the followed object can be determined according to the following constraints, the optimal following output value determined among them, and a corresponding motion control sequence generated from it to plan the optimal following path, the specified target can be followed effectively.
Brief Description of the Drawings
To explain the technical solutions of the embodiments of the present invention or of the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below illustrate some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of a method for planning an optimal following path according to Embodiment 1 of the present invention;
FIG. 2 is a schematic flowchart of a method for planning an optimal following path according to Embodiment 2 of the present invention;
FIG. 3 is a schematic flowchart of a method for planning an optimal following path according to Embodiment 3 of the present invention;
FIG. 4 is a schematic flowchart of a method for planning an optimal following path according to Embodiment 4 of the present invention;
FIG. 5 is a schematic flowchart of a method for planning an optimal following path according to Embodiment 5 of the present invention;
FIG. 6 is a schematic structural diagram of a following device according to Embodiment 6 of the present invention.
Detailed Description
To make the purposes, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
In the following embodiments of the present invention, the invention is schematically illustrated by taking a robot following a person as an example: the robot acts as the following subject, and the person acts as the followed object, i.e., the following target. However, the present invention is not limited to a robot following a person; it can also be applied to a robot following other non-human targets, which those of ordinary skill in the art can implement without creative effort under the inspiration of the embodiments; the following embodiments will not describe this again.
In the embodiments of the present invention, since the different following output values with which the following subject can follow to the physical position of the followed object can be determined according to the following constraints, the optimal following output value determined among them, and a corresponding motion control sequence generated from it to plan the optimal following path, the specified target can be followed effectively.
FIG. 1 is a schematic flowchart of a method for planning an optimal following path according to Embodiment 1 of the present invention; as shown in FIG. 1, this embodiment may include at least:
S101: determining, according to following constraints, the different following output values with which the robot can follow to the physical position of the target person, and determining an optimal following output value among them.
As one implementation, in this embodiment, determining in step S101, according to the following constraints, the following output values with which the robot can follow to the physical position of the target person may specifically include: using the environmental constraints of the robot's surroundings and the motion model of the following subject as the following constraints, determining the following output values with which the robot can follow to the physical position of the target person. Different following output values can be used to represent the cost for the robot to follow to the physical position of the target person along different paths.
In this embodiment, the environmental constraints of the robot's surroundings include the environmental constraints of the global map and of the local map in which the followed object is located, for example in a home environment; for an outdoor environment, constructing a global map is difficult, so the environment of the followed object is constrained by the local map only.
In this embodiment, the motion model of the following subject may include non-omnidirectional motion models and omnidirectional motion models. For a non-omnidirectional motion model such as a differential wheel train, when determining the following output value, the path of the following subject can be based on the direction perpendicular to the wheel axle. An omnidirectional motion model such as Mecanum wheels is essentially omnidirectional, and paths of the following subject in any direction can be considered; a sketch of the two models' velocity constraints is given below.
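As an illustration of this distinction, the following is a minimal sketch (not part of the patent text) of how the two motion models constrain the instantaneous body-frame velocities a planner might sample; the class names and velocity limits are hypothetical.

```python
import math

class DifferentialModel:
    """Non-omnidirectional: translates only along the heading axis
    (perpendicular to the wheel axle) while rotating."""
    def __init__(self, v_max=1.0, w_max=2.0):  # hypothetical limits (m/s, rad/s)
        self.v_max, self.w_max = v_max, w_max

    def sample_velocities(self, n=5):
        # Candidate (vx, vy, w): vy is always 0 for a differential drive.
        return [(self.v_max * i / n, 0.0, w)
                for i in range(n + 1)
                for w in (-self.w_max, 0.0, self.w_max)]

class MecanumModel:
    """Omnidirectional: may translate in any direction in the plane."""
    def __init__(self, v_max=1.0, w_max=2.0):
        self.v_max, self.w_max = v_max, w_max

    def sample_velocities(self, n=8):
        # Candidate (vx, vy, w): translation direction is unconstrained.
        return [(self.v_max * math.cos(2 * math.pi * k / n),
                 self.v_max * math.sin(2 * math.pi * k / n), w)
                for k in range(n)
                for w in (-self.w_max, 0.0, self.w_max)]
```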
In this embodiment, the environmental constraints of the robot may also include a thickened map, such as a thickened local map or global map. Thickening may, for example, mean inflating the contours of objects; during tracking, the inflated object contours serve as a reference for the safety distance, as sketched below.
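The following is a minimal sketch of the kind of contour thickening described above, under the assumption that the map is a binary occupancy grid; the grid representation and inflation radius are illustrative, not prescribed by the patent.

```python
def inflate_obstacles(grid, radius):
    """Mark every free cell within `radius` cells of an obstacle as occupied,
    so that planned paths keep a safety margin from object contours."""
    rows, cols = len(grid), len(grid[0])
    inflated = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1:  # obstacle cell
                for dr in range(-radius, radius + 1):
                    for dc in range(-radius, radius + 1):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < rows and 0 <= cc < cols:
                            inflated[rr][cc] = 1
    return inflated
```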
As one implementation, in a method for planning an optimal following path according to an embodiment of the present invention, using the environmental constraints of the target person's surroundings and the motion model of the robot as the following constraints, determining the different following output values with which the robot can follow to the physical position of the target person includes: determining, based on the environmental constraints in the local map in which the robot is located and the motion model of the robot, the different following output values with which the robot can follow to the physical position of the target person.
Taking an indoor environment, and the differential drive model and Mecanum motion model, as examples: since the former cannot move omnidirectionally while the latter can, the corresponding following speed and following angle may be completely different when determining the following output value; in terms of smoothness, for example, the former's paths may be less smooth and the latter's smoother.
In this embodiment, the following output value may include, but is not limited to, one of, or any combination of, a following-distance cost, a following-time cost, and an attitude-change cost during following. The following-distance cost is used to judge how far the robot travels to reach the target, and the following-time cost is used to judge how long it takes to reach the target.
Therefore, specifically, determining in step S101, according to the following constraints, the following output values with which the robot can follow to the target person may specifically include: determining, according to the following constraints, at least one of the following-distance cost, following-time cost, and attitude-change cost with which the robot can follow to the physical position of the target person as the following output value. Here, each cost can be assigned a different weight, and the final following output value is computed by combining the weights with the corresponding costs.
In this embodiment, when specifically determining the optimal following output value, a cost function may be defined; the outputs of the cost function may be the distance to reach the target, the time to reach the target, and the energy consumed to reach the target, while its inputs may be the environmental constraints of the robot's surroundings, the robot's motion model, or both. A weighted-cost sketch is given below.
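As a minimal sketch of the weighted cost combination just described, assuming candidate paths have already been generated under the environment and motion-model constraints (the weights and per-path cost fields are hypothetical):

```python
def following_output_value(path, w_dist=0.5, w_time=0.3, w_pose=0.2):
    """Combine the distance, time, and attitude-change costs of one
    candidate path into a single following output value."""
    return (w_dist * path["distance_cost"]
            + w_time * path["time_cost"]
            + w_pose * path["attitude_change_cost"])

def optimal_following_output(candidate_paths):
    """The optimal following output value is the one with minimal cost."""
    return min(candidate_paths, key=following_output_value)

# Usage: two hypothetical candidate paths toward the target person.
paths = [
    {"distance_cost": 3.2, "time_cost": 4.0, "attitude_change_cost": 0.4},
    {"distance_cost": 3.6, "time_cost": 3.1, "attitude_change_cost": 0.2},
]
best = optimal_following_output(paths)
```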
S102: generating a corresponding motion control sequence according to the optimal following output value, so as to plan the optimal following path.
The optimal following output value means that the robot reaches the target person at minimal cost, and a motion control sequence is generated corresponding to that optimal following output value. Specifically, since the optimal following output value corresponds to motion control commands, and each motion control command corresponds to linear-velocity and angular-velocity control over time, the corresponding motion control sequence is generated from this series of motion control commands, as sketched below.
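The following is a minimal sketch of turning a planned path into a timed sequence of linear/angular-velocity commands; the proportional steering used here is one common choice rather than the patent's prescribed method, and the gains are hypothetical.

```python
import math

def motion_control_sequence(robot_pose, waypoints, dt=0.1, v=0.5, k_w=1.5,
                            max_steps=2000):
    """Generate (linear velocity, angular velocity) commands over time that
    drive a unicycle-model robot through the planned waypoints."""
    x, y, theta = robot_pose
    commands = []
    for wx, wy in waypoints:
        steps = 0
        while math.hypot(wx - x, wy - y) > 0.05 and steps < max_steps:
            heading_error = math.atan2(wy - y, wx - x) - theta
            heading_error = math.atan2(math.sin(heading_error),
                                       math.cos(heading_error))
            w = k_w * heading_error          # turn toward the waypoint
            commands.append((v, w))
            theta += w * dt                  # integrate the unicycle model
            x += v * math.cos(theta) * dt
            y += v * math.sin(theta) * dt
            steps += 1
    return commands
```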
FIG. 2 is a schematic flowchart of a method for planning an optimal following path according to Embodiment 2 of the present invention; as shown in FIG. 2, this embodiment may include at least:
S201: setting a desired relative pose of the robot to the target person.
In this embodiment, the desired relative pose of the robot to the target person includes the pose of the robot relative to the target person when the robot has moved up to the target person, for example behind the target person, in front, or to the side, as well as the distance and angle to the target person.
S202: determining, according to the following constraints and the difference between the pose of the robot detected in real time and the desired relative pose of the robot to the target person, the following output values with which the robot can follow to the physical position of the target person.
In this embodiment, as mentioned above, the differential motion model is not omnidirectional, whereas the Mecanum motion model is; therefore, a robot using the Mecanum motion model has richer options in attitude control than one using the differential motion model. A sketch of the relative-pose difference computation follows.
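As a minimal sketch of the pose difference used in S202, assuming poses are given as (x, y, heading) in a common world frame; the representation and the example offset are illustrative assumptions.

```python
import math

def relative_pose_error(robot_pose, person_pose, desired_offset):
    """Compare the robot's current pose to the desired pose relative to the
    target person; `desired_offset` = (dx, dy, dtheta) in the person's frame,
    e.g. (-1.0, 0.0, 0.0) for 'one metre behind, facing the same way'."""
    px, py, ptheta = person_pose
    dx, dy, dtheta = desired_offset
    # Desired robot pose expressed in the world frame.
    gx = px + dx * math.cos(ptheta) - dy * math.sin(ptheta)
    gy = py + dx * math.sin(ptheta) + dy * math.cos(ptheta)
    gtheta = ptheta + dtheta
    rx, ry, rtheta = robot_pose
    err_theta = math.atan2(math.sin(gtheta - rtheta),
                           math.cos(gtheta - rtheta))
    return gx - rx, gy - ry, err_theta
```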
As one implementation, in a method for planning an optimal following path according to an embodiment of the present invention, determining, according to the following constraints, the following output values with which the robot can follow to the physical position of the target person includes:
using the environmental constraints of the robot's surroundings and the motion model of the robot as the following constraints, determining the following output values with which the robot can follow to the physical position of the target person.
As one implementation, determining, according to the following constraints, the following output values with which the robot can follow to the target person includes:
determining, according to the following constraints, at least one of the following-distance cost, following-time cost, and attitude-change cost with which the robot can follow to the physical position of the target person as the following output value.
As one implementation, using the environmental constraints of the robot's surroundings and the motion model of the robot as the following constraints, determining the different following output values with which the robot can follow to the physical position of the target person includes: determining, based on the environmental constraints in the local map in which the robot is located, the different following output values with which the robot can follow to the physical position of the target person.
In this embodiment, the target person is removed before the local map is constructed; that is, the target person is not used as an element of the local map. When constructing the local map, contour analysis, color-feature analysis, pixel analysis, and the like may be performed on images captured by the camera to determine the feature points of the environment; see the masking sketch below.
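The following is a minimal sketch of excluding the target person from local-map construction by masking the tracked image region before extracting environment feature points; OpenCV's ORB detector is used only as an illustrative feature extractor, not one named by the patent.

```python
import cv2
import numpy as np

def environment_features(image, person_box):
    """Detect environment feature points while masking out the target
    person's image region, so the person is not mapped as an obstacle."""
    x, y, w, h = person_box            # rectangle around the tracked person
    mask = np.full(image.shape[:2], 255, dtype=np.uint8)
    mask[y:y + h, x:x + w] = 0         # exclude the person's region
    orb = cv2.ORB_create()
    return orb.detect(image, mask)     # keypoints on the environment only
```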
S203: generating a corresponding motion control sequence according to the optimal following output value, so as to plan the optimal following path.
As one implementation, in a method for planning an optimal following path according to an embodiment of the present invention, generating a corresponding motion control sequence according to the optimal following output value to plan the optimal following path includes: generating the corresponding motion control sequence according to the optimal following output value so as to plan a following path that at least avoids interference with environmental obstacles.
In this embodiment, probabilistic roadmap algorithms, the A* algorithm, and the like can be combined to avoid interference with environmental obstacles; a minimal A* sketch follows.
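As an illustration of the A* option mentioned above, here is a minimal grid-based A* sketch; the binary occupancy grid and 4-connected neighborhood are simplifying assumptions, not requirements of the patent.

```python
import heapq

def astar(grid, start, goal):
    """A* on a binary occupancy grid (0 = free, 1 = obstacle) with a
    Manhattan-distance heuristic; returns a list of cells or None."""
    h = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
    open_set = [(h(start, goal), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cell, parent = heapq.heappop(open_set)
        if cell in came_from:              # already expanded
            continue
        came_from[cell] = parent
        if cell == goal:                   # reconstruct the path
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0
                    and g + 1 < g_cost.get((nr, nc), float("inf"))):
                g_cost[(nr, nc)] = g + 1
                heapq.heappush(open_set,
                               (g + 1 + h((nr, nc), goal), g + 1, (nr, nc), cell))
    return None
```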
FIG. 3 is a schematic flowchart of a method for planning an optimal following path according to Embodiment 3 of the present invention; as shown in FIG. 3, this embodiment may include at least:
S301: when the followed target person leaves the robot's field of view, determining the last position of the target person before the loss, and using that last position as the navigation target point.
In this embodiment, while the robot follows the target, every position point is recorded; therefore, when the target person leaves the robot's field of view, the last position of the target person can be used as the navigation target point, and tracking similar to that of a static target can be performed, as sketched below.
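A minimal sketch of this fallback behavior, assuming the tracker reports the target's position whenever the target is visible; the data structure is illustrative.

```python
class FollowTargetMonitor:
    """Keep the most recent observed target position; when the target leaves
    the field of view, fall back to it as a static navigation goal."""
    def __init__(self):
        self.last_seen = None

    def update(self, target_position):
        if target_position is not None:   # target visible this frame
            self.last_seen = target_position

    def navigation_goal(self, target_position):
        # While visible, follow the target itself; once lost, navigate to
        # the last position observed before the loss.
        return target_position if target_position is not None else self.last_seen
```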
S302: determining, according to an established global map and under the constraints of that global map, the following output values with which the robot can follow to the navigation target point, and determining an optimal following output value among them.
In this embodiment, the global map, for example a global map of an indoor environment, can be constructed mainly by having the robot move randomly indoors or by manually controlling the robot to move indoors, collecting the feature points of the whole indoor environment.
While tracking the navigation target point, recognition of the environment may differ between global and local conditions; for example, the environment in the local map may have changed relative to when the global map was built. The global map can therefore be updated based on the real-time local map, and the updated global map can be used as the constraint to determine the following output values with which the robot can follow to the navigation target point and to determine the optimal following output value among them.
As one implementation, in a method for planning an optimal following path according to an embodiment of the present invention, determining, according to the following constraints, the following output values with which the robot can follow to the physical position of the target person includes: using the environmental constraints of the robot's surroundings and the motion model of the robot as the following constraints, determining the following output values with which the robot can follow to the physical position of the target person.
As one implementation, determining, according to the following constraints, the following output values with which the robot can follow to the target person includes:
determining, according to the following constraints, at least one of the following-distance cost, following-time cost, and attitude-change cost with which the robot can follow to the physical position of the target person as the following output value.
As one implementation, using the environmental constraints of the robot's surroundings and the motion model of the robot as the following constraints, determining the different following output values with which the robot can follow to the physical position of the target person includes: determining, based on the environmental constraints in the local map in which the robot is located, the different following output values with which the robot can follow to the physical position of the target person.
S303: generating a corresponding motion control sequence according to the optimal following output value, so as to plan the optimal following path.
As one implementation, generating a corresponding motion control sequence according to the optimal following output value to plan the optimal following path includes: generating the corresponding motion control sequence according to the optimal following output value so as to plan a following path that avoids interference with environmental obstacles.
FIG. 4 is a schematic flowchart of a method for planning an optimal following path according to Embodiment 4 of the present invention; as shown in FIG. 4, this embodiment may include at least:
S401: determining in real time, according to a description of the target person's features, the image region in which the target person is located on the image.
In this embodiment, the description of the target person's features includes, but is not limited to, salient bodily features such as the face, the contour of the human body, the head, and the human skeleton.
In this embodiment, when the image region in which the target person is located on the image has been determined, it may be localized with an obvious marker such as a rectangular frame. This rectangular frame may enclose the target person's whole body or part of it, such as the face or the head.
As one implementation, in a method for planning an optimal following path according to an embodiment of the present invention, determining in real time in step S401, according to the description of the target person's features, the image region in which the target person is located on the image may include at least:
acquiring, through a depth camera, a depth image formed of the followed target;
determining in real time, according to the description of the target person's features, the image region in which the target person is located on the depth image.
Since subsequent tracking of the target person requires determining the distance between the robot and the target person, the depth data in the depth image information makes this distance determination possible, and the depth image also contains the red-green-blue (RGB) information from which the target person's image region can be determined. Therefore, when determining the image region of the target person, the depth image can be used directly, without separately using an ordinary two-dimensional image.
S402: tracking the physical position of the target person according to the feature points of the target person within the image region.
As one implementation, tracking the physical position of the target person according to the feature points of the target person within the image region includes: tracking the physical position of the target person according to the feature points of the target person within the image region and the depth information of the depth image.
In this embodiment, the feature points of the target person may be, but are not limited to, facial features, human skeleton features, and the like. These feature points effectively serve as the identification ID of the target person. Taking facial features as an example, since this embodiment is based on depth images, the facial features include both two-dimensional face data and depth information; they are therefore richer than traditional purely two-dimensional face data and more accurate for locating the target person's physical position. Moreover, the target person's feature points are maintained throughout the tracking process, ensuring the effectiveness of the tracking.
Specifically, some statistical processing can be performed on the point cloud density of the target region, i.e., the density of the feature points, to obtain an average depth; this depth is the physical distance from the robot to the target person at the current moment, which in other words determines the target person's physical position. The average depth can be computed from all feature points of the target region, or from a selected subset of feature points with higher density; see the sketch below.
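A minimal sketch of the average-depth computation, assuming the depth image is a 2D array of metres and the feature points are pixel coordinates inside the tracked region (NumPy is used for brevity):

```python
import numpy as np

def target_distance(depth_image, feature_points, min_valid=0.1):
    """Estimate the robot-to-person distance as the average depth over the
    feature points of the tracked region, ignoring invalid (near-zero)
    depth readings."""
    depths = np.array([depth_image[v, u] for (u, v) in feature_points])
    depths = depths[depths > min_valid]    # drop sensor dropouts
    return float(depths.mean()) if depths.size else None
```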
S403: determining, according to the following constraints, the following output values to the physical position of the target person, and determining an optimal following output value among them.
S404: generating a corresponding motion control sequence according to the optimal following output value, so as to plan the optimal following path.
Steps S403 and S404 of this embodiment are not described again here; for details, refer to the embodiments shown in FIG. 1, FIG. 2, and FIG. 3 above.
FIG. 5 is a schematic flowchart of a method for planning an optimal following path according to Embodiment 5 of the present invention; as shown in FIG. 5, this embodiment may include at least:
S501: determining in real time, according to a description of the target person's features, the image region in which the target person is located on the image.
As one implementation, in a method for planning an optimal following path according to an embodiment of the present invention, determining in real time, according to the description of the target person's features, the image region in which the target person is located on the image includes: acquiring, through a depth camera, a depth image formed of the followed target; and determining in real time, according to the description of the target person's features, the image region in which the target person is located on the depth image.
S502: acquiring the different feature points with which the target person is followed under different poses, and building a feature point set of the target person accordingly, so as to maintain a high degree of recognition of the target person.
In this embodiment, the feature points of the target person may be, but are not limited to, facial features, human skeleton features, and the like; these feature points effectively serve as the identification ID of the target person. Taking facial features as an example, the target person's facial features include two-dimensional face data and may also include depth information. The feature point set of the target person is built from the different feature points obtained while following the target person under different poses. When building the feature point set, criteria can be set to prevent the points added to the set from over-expanding it; if adding a feature point to the set would provide only a small improvement, the point can be discarded rather than added, since the feature set should not grow too large. A sketch of such gated insertion follows.
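A minimal sketch of the gated feature-set maintenance described above; the improvement measure and thresholds are hypothetical stand-ins for whatever recognition-gain criterion an implementation chooses.

```python
class FeaturePointSet:
    """Grow the target's feature set only when a new point improves
    recognition enough, keeping the set from over-expanding."""
    def __init__(self, max_size=500, min_gain=0.01):
        self.points, self.max_size, self.min_gain = [], max_size, min_gain

    def recognition_gain(self, point):
        # Hypothetical gain: novelty of the point w.r.t. the existing set,
        # here the distance to its nearest stored descriptor.
        if not self.points:
            return 1.0
        return min(sum((a - b) ** 2 for a, b in zip(point, p)) ** 0.5
                   for p in self.points)

    def maybe_add(self, point):
        # Discard points that add little, or that would overflow the set.
        if (len(self.points) < self.max_size
                and self.recognition_gain(point) >= self.min_gain):
            self.points.append(point)
            return True
        return False
```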
S503: tracking the physical position of the target person according to the feature points of the target person within the image region.
As one implementation, tracking the physical position of the target person according to the feature points of the target person within the image region includes: tracking the physical position of the target person according to the feature points of the target person within the image region and the depth information of the depth image.
S504: determining, according to the following constraints, the following output values to the physical position of the target person, and determining an optimal following output value among them.
S505: generating a corresponding motion control sequence according to the optimal following output value, so as to plan the optimal following path.
Steps S504 and S505 of this embodiment are not described again here; for details, refer to the embodiments shown in FIG. 1, FIG. 2, and FIG. 3 above.
FIG. 6 is a schematic structural diagram of a following device according to Embodiment 6 of the present invention; as shown in FIG. 6, the device may include at least a processor 602 and a controller 603, the controller 603 being communicatively connected to the processor 602; wherein:
the processor 602 is configured to determine, according to following constraints, the different following output values with which the following subject can follow to the physical position of the followed object, and to determine an optimal following output value among them; and to generate a corresponding motion control sequence according to the optimal following output value so as to plan the optimal following path;
the controller 603 is configured to control following of the followed object according to the optimal following path planned by the processor 602.
As one implementation, the processor 602 is configured to determine, using the environmental constraints of the following subject's surroundings and the motion model of the following subject as the following constraints, the different following output values with which the following subject can follow to the physical position of the followed object; the following output value includes at least one of: a following-distance cost, a following-time cost, and an attitude-change cost during following.
As one implementation, the processor 602 is configured to determine, based on the environmental constraints in the local map in which the followed object is located, the different following output values with which the following subject can follow to the physical position of the followed object.
In this embodiment, the following device further includes an image acquisition unit 601; the image acquisition unit 601 is communicatively connected to the processor 602 and is configured to capture images while the target person is being followed.
In this embodiment, the image acquisition unit 601 may be, but is not limited to, a depth camera, as long as depth image information can be obtained.
The processor 602 is configured to determine in real time, according to a description of the target person's features, the image region in which the target person is located on the image, and to track the physical position of the target person according to the feature points of the target person within the image region; the processor 602 is further configured to determine, according to the following constraints, the following output values to the physical position of the target person and determine an optimal following output value among them; the processor 602 is further configured to generate a corresponding motion control sequence according to the optimal following output value so as to plan the optimal following path.
The controller 603 is configured to control following of the target person according to the optimal following path planned by the processor 602.
As one implementation, when determining the following output values to the physical position of the target person according to the following constraints and determining the optimal following output value among them, the processor 602 may be further configured to use the environmental constraints of the robot's surroundings and the motion model of the robot as the following constraints to determine the following output values with which the robot can follow to the physical position of the target person.
As one implementation, when determining the following output values to the physical position of the target person according to the following constraints and determining the optimal following output value among them, the processor 602 may be further configured to determine, according to the following constraints, one of, or any combination of, the following-distance cost, following-time cost, and attitude-change cost with which the robot can follow to the physical position of the target person.
As one implementation, when determining the following output values to the physical position of the target person according to the following constraints and determining the optimal following output value among them, the processor 602 may be further configured to determine, based on the environmental constraints in the local map in which the robot is located, the different following output values with which the robot can follow to the physical position of the target person.
As one implementation, when generating the corresponding motion control sequence according to the optimal following output value to plan the optimal following path, the processor 602 may be further configured to generate the corresponding motion control sequence according to the optimal following output value so as to plan a following path that at least avoids interference with environmental obstacles.
As one implementation, the processor 602 is further configured to preset a desired relative pose of the robot to the target person.
As one implementation, when determining the following output values to the physical position of the target person according to the following constraints and determining the optimal following output value among them, the processor 602 may be further configured to determine the following output values with which the robot can follow to the physical position of the target person according to the following constraints and the difference between the pose of the robot detected in real time and the desired relative pose of the robot to the target person.
As one implementation, the processor 602 is further configured to determine, when the followed target person leaves the robot's field of view, the last position of the target person before the loss and to use that position as the navigation target point; when determining the following output values to the physical position of the target person according to the following constraints and determining the optimal following output value among them, the processor 602 may be further configured to determine, according to an established global map and under the constraints of that global map, the following output values with which the robot can follow to the navigation target point, and to determine the optimal following output value among them.
As one implementation, when determining in real time the image region in which the target person is located on the image according to the description of the target person's features, the processor 602 may be further configured to acquire, through a depth camera, a depth image formed of the followed target, and to determine in real time, according to the description of the target person's features, the image region in which the target person is located on the depth image.
As one implementation, when tracking the physical position of the target person according to the feature points of the target person within the image region, the processor 602 may be further configured to track the physical position of the target person according to the feature points of the target person within the image region and the depth information of the depth image.
As one implementation, when tracking the physical position of the target person according to the feature points of the target person within the image region, the processor 602 may be further configured to acquire the different feature points with which the target person is followed under different poses, and to build a feature point set of the target person accordingly, so as to maintain a high degree of recognition of the target person.
In practical applications of the embodiments of the present invention, the following device may be implemented as a following machine; the processor 602 may be implemented by a central processing unit (CPU), a digital signal processor (DSP), a microcontroller unit (MCU), or a field-programmable gate array (FPGA); the image acquisition unit 601 may be implemented by a camera (for example, a depth camera); the controller 603 may be implemented by a mobile platform: the mobile platform of a ground following device may be a motion chassis, and the mobile platform of an aerial following device (such as a drone) may be rotors, and so on.
It should be noted that the technical solutions of the above embodiments of the present invention can be applied to soccer robots, household robots, and service robots. The specific form of the robot is not particularly limited; for example, self-balancing two-wheeled vehicles similar to the Segway currently on the market are also possible.
The device embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e., they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solutions of the embodiments, which those of ordinary skill in the art can understand and implement without creative effort.
From the description of the above implementations, those skilled in the art can clearly understand that the implementations can be realized by means of software plus a necessary general-purpose hardware platform, or of course by hardware. Based on this understanding, the above technical solutions, in essence or in the part contributing to the prior art, can be embodied in the form of a software product; the computer software product can be stored in a computer-readable storage medium, such as a ROM/RAM, magnetic disk, or optical disc, and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the methods described in the embodiments or in certain parts of the embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them; although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions recorded in the foregoing embodiments or make equivalent replacements of some technical features therein, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Industrial Applicability
Since the technical solutions of the embodiments of the present invention can determine, according to the following constraints, the following output values with which the following subject can follow to the physical position of the followed object, determine the optimal one among them, and generate a corresponding motion control sequence from the optimal following output value to plan the optimal following path, the specified target can be followed effectively.

Claims (17)

  1. A method for planning an optimal following path, the method comprising:
    when a following subject follows a followed object whose position changes dynamically, determining, according to following constraints, the different following output values with which the following subject can follow to the physical position of the followed object, and determining an optimal following output value among them;
    generating a corresponding motion control sequence according to the optimal following output value, so as to plan the optimal following path.
  2. The method according to claim 1, wherein determining, according to the following constraints, the different following output values with which the following subject can follow to the physical position of the followed object comprises:
    using the environmental constraints of the following subject's surroundings and the motion model of the following subject as the following constraints, determining the different following output values with which the following subject can follow to the physical position of the followed object.
  3. The method according to claim 1, wherein determining, according to the following constraints, the different following output values with which the following subject can follow to the followed object comprises:
    determining, according to the following constraints, at least one of a following-distance cost, a following-time cost, and an attitude-change cost during following with which the following subject can follow to the physical position of the followed object as the following output value.
  4. The method according to claim 2, wherein using the environmental constraints of the following subject's surroundings and the motion model of the following subject as the following constraints, determining the different following output values with which the following subject can follow to the physical position of the followed object comprises: determining, based on the environmental constraints in the local map in which the followed object is located, the different following output values with which the following subject can follow to the physical position of the followed object.
  5. The method according to claim 1, wherein generating a corresponding motion control sequence according to the optimal following output value to plan the optimal following path comprises: generating the corresponding motion control sequence from the motion control commands corresponding to the optimal following output value.
  6. The method according to claim 1, wherein before determining, according to the following constraints, the different following outputs with which the following subject can follow to the physical position of the followed object, the method further comprises: setting a desired relative pose of the following subject to the followed object.
  7. The method according to claim 6, wherein determining the different following output values with which the following subject can follow to the physical position of the followed object comprises: determining, according to the following constraints and the difference between the pose of the following subject detected in real time and the desired relative pose of the following subject to the followed object, the different following output values with which the following subject can follow to the physical position of the followed object.
  8. The method according to claim 1, wherein when the followed object leaves the field of view of the following subject, the method further comprises: determining the last position of the followed object before it was lost, and using that last position as the navigation target point;
    determining, according to the following constraints, the following output values with which the following subject can follow to the followed object, and determining the optimal following output value among them, comprises: determining, according to an established global map and under the constraints of that global map, the different following output values with which the following subject can follow to the navigation target point, and determining the optimal following output value among them.
  9. The method according to claim 1, wherein the method further comprises:
    determining in real time, according to a description of the followed object's features, the image region in which the followed object is located on an image;
    tracking the physical position of the followed object according to the feature points of the followed object within the image region.
  10. The method according to claim 9, wherein determining in real time, according to the description of the followed object's features, the image region in which the followed object is located on the image comprises:
    acquiring, through a depth camera, a depth image formed of the followed target;
    determining in real time, according to the description of the followed object's features, the image region in which the followed object is located on the depth image.
  11. The method according to claim 10, wherein tracking the physical position of the followed object according to the feature points of the followed object within the image region comprises: tracking the physical position of the followed object according to the feature points of the followed object within the image region and the depth information of the depth image.
  12. The method according to claim 9, wherein the method further comprises: acquiring the different feature points with which the followed object is followed under different poses, and building a feature point set of the followed object accordingly, so as to maintain a high degree of recognition of the followed object.
  13. A following device, comprising:
    a processor configured to determine, according to following constraints, the different following output values with which the following subject can follow to the physical position of the followed object, and to determine an optimal following output value among them; and to generate a corresponding motion control sequence according to the optimal following output value so as to plan the optimal following path;
    a controller configured to control following of the followed object according to the optimal following path planned by the processor.
  14. The following device according to claim 13, wherein
    the processor is configured to determine, using the environmental constraints of the following subject's surroundings and the motion model of the following subject as the following constraints, the different following output values with which the following subject can follow to the physical position of the followed object; wherein the following output value comprises at least one of: a following-distance cost, a following-time cost, and an attitude-change cost during following.
  15. The following device according to claim 14, wherein the processor is configured to determine, based on the environmental constraints in the local map in which the followed object is located, the different following output values with which the following subject can follow to the physical position of the followed object.
  16. The following device according to claim 13, wherein the device further comprises an image acquisition unit;
    the image acquisition unit is configured to capture images while the followed object is being followed;
    the processor is configured to determine in real time, according to a description of the followed object's features, the image region in which the followed object is located on the image, and to track the physical position of the followed object according to the feature points of the followed object within the image region; to determine, according to the following constraints, the different following output values to the physical position of the followed object and determine an optimal following output value among them; and to generate a corresponding motion control sequence according to the optimal following output value so as to plan the optimal following path;
    the controller is configured to control following of the followed object according to the optimal following path planned by the processor.
  17. A computer storage medium storing computer-executable instructions, the computer-executable instructions being configured to execute the method for planning an optimal following path according to any one of claims 1 to 12.
PCT/CN2016/106689 2015-11-26 2016-11-21 Method and apparatus for planning an optimal following path, and computer storage medium WO2017088720A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510846422.5A CN105425795B (zh) 2015-11-26 2015-11-26 Method and apparatus for planning an optimal following path
CN201510846422.5 2015-11-26

Publications (1)

Publication Number Publication Date
WO2017088720A1 (zh)

Family

ID=55504064

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/106689 WO2017088720A1 (zh) 2015-11-26 2016-11-21 Method and apparatus for planning an optimal following path, and computer storage medium

Country Status (2)

Country Link
CN (1) CN105425795B (zh)
WO (1) WO2017088720A1 (zh)


Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105425795B (zh) 2015-11-26 2020-04-14 纳恩博(北京)科技有限公司 Method and apparatus for planning an optimal following path
CN105843225B (zh) 2016-03-31 2022-01-25 纳恩博(北京)科技有限公司 Data processing method and device
CN106096573A (zh) 2016-06-23 2016-11-09 乐视控股(北京)有限公司 Target tracking method, apparatus and system, and remote monitoring system
CN106094875B (zh) 2016-06-27 2019-01-22 南京邮电大学 Target following control method for a mobile robot
CN106774301B (zh) 2016-10-25 2020-04-24 纳恩博(北京)科技有限公司 Obstacle-avoiding following method and electronic device
CN107097256B (zh) 2017-04-21 2019-05-10 河海大学常州校区 Model-free target tracking method for a vision-based nonholonomic robot in polar coordinates
CN108107913A (zh) 2017-10-31 2018-06-01 深圳市博鑫创科科技有限公司 Front-mounted tracking method and system for a self-balancing vehicle
CN107943072B (zh) 2017-11-13 2021-04-09 深圳大学 UAV flight path generation method, apparatus, storage medium and device
CN108107884A (zh) 2017-11-20 2018-06-01 北京理工华汇智能科技有限公司 Data processing method for robot following navigation and intelligent device therefor
CN108170166A (zh) 2017-11-20 2018-06-15 北京理工华汇智能科技有限公司 Following control method for a robot and intelligent device therefor
CN107807652A (zh) 2017-12-08 2018-03-16 灵动科技(北京)有限公司 Logistics robot, method and controller therefor, and computer-readable medium
CN109447337B (zh) 2018-10-23 2022-04-15 重庆扬升信息技术有限公司 Path optimization method for a smart cloud conference data sharing and exchange platform
CN110488874A (zh) 2019-08-29 2019-11-22 五邑大学 Educational assistance robot and control method therefor
CN111290406B (zh) 2020-03-30 2023-03-17 达闼机器人股份有限公司 Path planning method, robot, and storage medium
CN112788238A (zh) 2021-01-05 2021-05-11 中国工商银行股份有限公司 Control method and apparatus for robot following

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010015194A (ja) 2008-06-30 2010-01-21 Ihi Corp Autonomous mobile robot device and control method for an autonomous mobile robot device
CN102411371A (zh) 2011-11-18 2012-04-11 浙江大学 Multi-sensor-based service robot following system and method
CN102411368A (zh) 2011-07-22 2012-04-11 北京大学 Active-vision face tracking method and tracking system for a robot
CN102895092A (zh) 2011-12-13 2013-01-30 冷春涛 Three-dimensional environment recognition system for a walking-aid robot based on multi-sensor fusion
CN103558856A (zh) 2013-11-21 2014-02-05 东南大学 Navigation method for a service mobile robot in dynamic environments
CN103901895A (zh) 2014-04-18 2014-07-02 江苏久祥汽车电器集团有限公司 Target positioning method based on the unscented FastSLAM algorithm and matching optimization, and robot
CN104637198A (zh) 2013-11-14 2015-05-20 复旦大学 Intelligent checkout shopping-cart system with autonomous tracking function
CN104732222A (zh) 2015-04-07 2015-06-24 中国科学技术大学 Multi-feature human body recognition method based on a depth camera
US20150205301A1 (en) 2013-03-15 2015-07-23 Ashley A. Gilmore Digital tethering for tracking with autonomous aerial robot
CN104834309A (zh) 2015-04-10 2015-08-12 浙江工业大学 Optimal patrol control method for a single mobile robot based on a target-tracking control strategy
CN104950887A (zh) 2015-06-19 2015-09-30 重庆大学 Transport apparatus based on a robot vision system and autonomous tracking system
CN105425795A (zh) 2015-11-26 2016-03-23 纳恩博(北京)科技有限公司 Method and apparatus for planning an optimal following path

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102207736B (zh) 2010-03-31 2013-01-02 中国科学院自动化研究所 Robot path planning method and apparatus based on Bézier curves
CN102895093A (zh) 2011-12-13 2013-01-30 冷春涛 Walking-aid robot tracking system and method based on an RGB-D sensor
CN103268616B (zh) 2013-04-18 2015-11-25 北京工业大学 Multi-feature, multi-sensor method for a mobile robot to track a moving human body
CN103471592A (zh) 2013-06-08 2013-12-25 哈尔滨工程大学 Multi-UAV trajectory planning method based on a bee-colony cooperative foraging algorithm
CN105043376B (zh) 2015-06-04 2018-02-13 上海物景智能科技有限公司 Intelligent navigation method and system for non-omnidirectional mobile vehicles
CN104960522B (zh) 2015-06-18 2018-09-21 奇瑞汽车股份有限公司 Automatic vehicle-following system and control method therefor


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
李飞龙 et al. (LI, Feilong et al.): "Design of an Automatic Following Robot Based on Kinect" (一种基于Kinect的自动跟随机器人设计), 电脑知识与技术 (Computer Knowledge and Technology) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111982134A (zh) 2020-08-10 2020-11-24 Path-following control method and apparatus adapted to unknown dynamic spaces, and storage medium
CN111982134B (zh) 2020-08-10 2022-08-05 Path-following control method and apparatus adapted to unknown dynamic spaces, and storage medium
CN115565057A (zh) 2021-07-02 2023-01-03 Map generation method and apparatus, legged robot, and storage medium

Also Published As

Publication number Publication date
CN105425795A (zh) 2016-03-23
CN105425795B (zh) 2020-04-14

Similar Documents

Publication Publication Date Title
WO2017088720A1 (zh) Method and apparatus for planning an optimal following path, and computer storage medium
Pradeep et al. Robot vision for the visually impaired
Engel et al. Scale-aware navigation of a low-cost quadrocopter with a monocular camera
US10197399B2 (en) Method for localizing a robot in a localization plane
Pradeep et al. A wearable system for the visually impaired
More et al. Visual odometry using optic flow for unmanned aerial vehicles
Sun et al. Autonomous state estimation and mapping in unknown environments with onboard stereo camera for micro aerial vehicles
CN103743394A (zh) 一种基于光流的移动机器人避障方法
Hoffmann et al. Autonomous indoor exploration with an event-based visual SLAM system
Kato A remote navigation system for a simple tele-presence robot with virtual reality
Flores et al. A vision and GPS-based real-time trajectory planning for MAV in unknown urban environments
Arola et al. UAV pursuit-evasion using deep learning and search area proposal
Ochiai et al. Remote control system for multiple mobile robots using touch panel interface and autonomous mobility
Atsuzawa et al. Robot navigation in outdoor environments using odometry and convolutional neural network
Narayanan et al. On equitably approaching and joining a group of interacting humans
Klaser et al. Simulation of an autonomous vehicle with a vision-based navigation system in unstructured terrains using OctoMap
Fu et al. A robust pose estimation method for multicopters using off-board multiple cameras
Lee et al. Vision-based perimeter defense via multiview pose estimation
Williams et al. Scalable distributed collaborative tracking and mapping with micro aerial vehicles
Kodagoda et al. Simultaneous people tracking and motion pattern learning
Dam et al. Person following mobile robot using pedestrian dead-reckoning with inertial data of smartphones
Ciuccarelli et al. Cooperative robots architecture for an assistive scenario
Jung et al. Visual cooperation based on LOS for self-organization of swarm robots
Kurdi et al. Trajectory and Motion for Agricultural Robot
Tomari et al. Wide field of view Kinect undistortion for social navigation implementation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16867943

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16867943

Country of ref document: EP

Kind code of ref document: A1