WO2018108087A1 - Method for following a physical target, robot, and computer-readable storage medium (实体目标的跟随方法、机器人和计算机可读存储介质) - Google Patents

Method for following a physical target, robot, and computer-readable storage medium

Info

Publication number
WO2018108087A1
WO2018108087A1 (PCT/CN2017/115772)
Authority
WO
WIPO (PCT)
Prior art keywords
target
entity
physical
real
forward direction
Prior art date
Application number
PCT/CN2017/115772
Other languages
English (en)
French (fr)
Inventor
李一鹏
Original Assignee
深圳天轮科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳天轮科技有限公司
Publication of WO2018108087A1

Links

Images

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 … with means for defining a desired trajectory
    • G05D1/0214 … in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G05D1/0231 … using optical position detecting means
    • G05D1/0238 … using obstacle or wall sensors
    • G05D1/024 … using obstacle or wall sensors in combination with a laser
    • G05D1/0246 … using a video camera in combination with image processing means
    • G05D1/0253 … extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
    • G05D1/0255 … using acoustic signals, e.g. ultra-sonic signals
    • G05D1/0257 … using a radar
    • G05D1/12 Target-seeking control

Definitions

  • the present application relates to the field of robots, and in particular, to a method for following a physical target, a robot, and a computer readable storage medium.
  • In an ordinary follow function, the physical target is in front and the robot follows behind in the target's direction of travel, maintaining a certain distance to complete the following action.
  • Although this tracking method is technically easy to implement, it is not optimal in terms of user experience.
  • In particular, when the item being automatically followed is something valuable and quiet, such as a self-balancing vehicle or a suitcase, and it always stays behind the user, outside the user's field of view, it causes the user psychological insecurity and practical risks such as accidental collision and loss.
  • Similarly, a filming tool such as a drone can only capture the user from behind and cannot produce the frontal shots seen in films, which reduces the value of the footage.
  • a method of following a physical object, a robot, and a computer readable storage medium are provided.
  • A method for following a physical target includes: detecting the position of a physical target and a plurality of feature points on the physical target; predicting the forward direction of the physical target according to the plurality of feature points; and, according to the position of the physical target, obtaining a virtual target as an arbitrarily specified point in a relative coordinate system oriented along the forward direction of the physical target, and following the virtual target.
  • A robot includes a wheel and/or a rotor, and further includes a first detector, a memory, and a processor. The first detector is configured to detect the position of the physical target and a plurality of feature points on the physical target.
  • The memory stores a computer program that, when executed by the processor, causes the processor to: predict the forward direction of the physical target according to the plurality of feature points; and, according to the position of the physical target, obtain a virtual target as an arbitrarily specified point in a relative coordinate system oriented along the forward direction of the physical target, and control the robot to follow the virtual target.
  • One or more non-transitory computer-readable storage media store a computer program that, when executed by one or more processors, causes the one or more processors to: detect the position of a physical target and a plurality of feature points on the physical target; predict the forward direction of the physical target according to the plurality of feature points; and, according to the position of the physical target, obtain a virtual target as an arbitrarily specified point in a relative coordinate system oriented along the forward direction of the physical target, and follow the virtual target.
  • A method for following a physical target includes: detecting the real-time position of a physical target; predicting the forward direction of the physical target according to changes in its real-time position; and, according to the real-time position of the physical target, obtaining a virtual target as an arbitrarily specified point in a relative coordinate system oriented along the forward direction of the physical target, and following the virtual target.
  • A robot includes a wheel and/or a rotor, and further includes a second detector, a memory, and a processor. The second detector is configured to detect the real-time position of the physical target.
  • The memory stores a computer program that, when executed by the processor, causes the processor to: predict the forward direction of the physical target according to changes in its real-time position; and, according to the real-time position of the physical target, obtain a virtual target as an arbitrarily specified point in a relative coordinate system oriented along the forward direction of the physical target, and control the robot to follow the virtual target.
  • One or more non-transitory computer-readable storage media store a computer program that, when executed by one or more processors, causes the one or more processors to: detect the real-time position of a physical target; predict the forward direction of the physical target according to changes in its real-time position; and, according to the real-time position of the physical target, obtain a virtual target as an arbitrarily specified point in a relative coordinate system oriented along the forward direction of the physical target, and follow the virtual target.
  • FIG. 1 is a flow chart of a method of following a physical target according to an embodiment of the present application
  • FIG. 2 is a schematic diagram showing an initial position of a following method of a physical target according to an embodiment of the present application
  • FIG. 3 is a schematic diagram of two feature points determining the front of a target according to an embodiment of the present application;
  • FIG. 4 is a schematic diagram of a physical target turning right and translating laterally according to an embodiment of the present application;
  • Figure 5 is a structural view of a robot according to an embodiment of the present application.
  • FIG. 6 is a flowchart of a method of following a physical target according to another embodiment of the present application.
  • FIG. 7 is a schematic diagram of determining the front of a target from incremental position points according to an embodiment of the present application;
  • FIG. 8 is a structural diagram of a robot according to another embodiment of the present application.
  • FIG. 9 is a structural block diagram of a robot according to an embodiment of the present application.
  • FIG. 1 is a flow chart of a method of following a physical target in accordance with an embodiment of the present application.
  • a method for following a physical target includes the following steps:
  • S110: detect the position of the physical target and a plurality of feature points on the physical target.
  • A feature point is a feature on the physical target that is easy to recognize, such as visual color information, a surface that reflects light or ultrasound, or a device that actively emits an identification signal.
  • The tracking device can be equipped with a sensor capable of detecting multiple feature points of the physical target, such as a lidar, a vision sensor, or an ultrasonic sensor; the sensor can be mounted facing the physical target.
  • the tracking device can also detect multiple feature points of the physical target through UWB (Ultra Wideband) radio frequency communication technology.
  • UWB radio frequency communication technology is a carrierless communication technology, which can transmit data by using non-sinusoidal narrow pulses of nanosecond to microsecond.
  • Ultra-wideband systems feature strong penetration, low power consumption, good multipath resistance, high security, low system complexity, and high positioning accuracy; therefore, UWB RF communication technology can accurately detect multiple feature points of the physical target.
  • S120: predict the forward direction of the physical target according to the multiple feature points on the physical target.
  • As shown in FIG. 3, the forward direction of the physical target 400 is obtained from the positional relationship of multiple feature points on the target: feature point 1 and feature point 2 are detected, and the heading is calculated from the positional relationship between feature point 1 and feature point 2.
  • The direction obtained in this way is both real-time and reliable.
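The two-feature-point geometry described above can be sketched as follows. The assumption that the two points are mounted symmetrically on the target's left and right, and all names below, are illustrative; the patent only states that the heading is computed from the points' positional relationship.

```python
import math

def heading_from_feature_points(p_left, p_right):
    """Estimate the target's forward direction from two feature points.

    Assumes p_left and p_right sit symmetrically on the left and right
    sides of the target (illustrative assumption), so the heading is
    perpendicular to the segment joining them.
    Returns a unit vector (x, y).
    """
    dx = p_right[0] - p_left[0]
    dy = p_right[1] - p_left[1]
    # Rotate the left->right vector 90 degrees counterclockwise so it
    # points out of the target's front face.
    fx, fy = -dy, dx
    norm = math.hypot(fx, fy)
    if norm == 0:
        raise ValueError("feature points coincide")
    return fx / norm, fy / norm

# Left marker west of the right marker: the target faces +y.
print(heading_from_feature_points((-1, 0), (1, 0)))  # (0.0, 1.0)
```

With more than two feature points, the same idea extends to fitting a line or frame through all detected points before taking the perpendicular.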
  • The forward direction predicted from the positions of the multiple feature points is then used, together with the position of the physical target, to obtain a virtual target as an arbitrarily specified point in a relative coordinate system oriented along the forward direction, and the virtual target is followed. The virtual target may be a point on the extension line of the forward direction, or it may be set at any position in any specified direction, such as the left/right side, front-right, or rear-right.
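One way to realize "an arbitrarily specified point in a forward-relative coordinate system" is to rotate an offset into the target's frame. The (forward, left) convention and the function name below are assumptions for illustration.

```python
def virtual_target(target_pos, forward_dir, offset):
    """Obtain the virtual target from an arbitrarily specified point in
    the target's forward-relative coordinate frame.

    target_pos  -- (x, y) position of the physical target
    forward_dir -- unit vector of the predicted forward direction
    offset      -- (forward, left) coordinates of the specified point;
                   (d, 0) is a point a distance d straight ahead as in
                   FIG. 2, and a negative "left" value places the point
                   on the right side.
    """
    fx, fy = forward_dir
    lx, ly = -fy, fx  # "left" axis: forward rotated 90 deg counterclockwise
    fwd, left = offset
    return (target_pos[0] + fwd * fx + left * lx,
            target_pos[1] + fwd * fy + left * ly)

# Target at the origin heading +y: a point 2 m straight ahead.
print(virtual_target((0, 0), (0.0, 1.0), (2, 0)))  # (0.0, 2.0)
```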
  • As shown in FIG. 2, when the physical target 400 moves forward/backward, the following robot 300 obtains the virtual target 200 by extending a distance along the forward direction; the robot 300 follows the virtual target 200 forward/backward while keeping a certain distance from the physical target 400.
  • The left side of FIG. 4 shows the physical target 400 turning right: the robot 300 needs to rotate its body and perform a large lateral displacement to follow the virtual target 200 at a certain distance.
  • The right side of FIG. 4 shows the physical target 400 translating laterally left and right: the robot 300 needs to rotate its body to follow the lateral translation of the virtual target 200 at a certain distance.
  • In this way, the forward direction is predicted, a virtual target is obtained ahead along that direction, and the virtual target in front is followed, so that the follower stays within the user's field of view.
  • FIG. 5 is a structural diagram of a robot in accordance with one embodiment of the present application.
  • A robot 500, including a wheel and/or a rotor, further includes a first detection module 510, a first prediction module 520, and a first control module 530.
  • the first detecting module 510 is configured to detect a location of the physical target and a plurality of feature points in the physical target.
  • the first prediction module 520 is configured to predict a forward direction of the entity target according to the plurality of feature points in the entity target.
  • The first control module 530 is configured to obtain, according to the position of the physical target, a virtual target as an arbitrarily specified point in a relative coordinate system oriented along the forward direction of the physical target, and to control the robot to follow the virtual target.
  • In this way, the forward direction is predicted, a virtual target is created ahead along that direction, and the robot follows the virtual target in front while positioned between the physical target and the virtual target, so that the follower stays within the user's field of view, increasing the user's sense of security.
  • the first prediction module is configured to: obtain a direction of advancement of the entity target according to a positional relationship of the plurality of feature points in the entity target.
  • the first detection module includes at least one of a lidar, a vision sensor, an ultrasonic sensor, and a UWB detector.
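The patent does not specify how the control module steers toward the virtual target; a minimal proportional-control sketch for a differential-drive robot might look like the following, where the gains and the kinematic model are assumptions.

```python
import math

def follow_step(robot_pos, robot_heading, virtual_pos, k_lin=0.5, k_ang=1.0):
    """One proportional control step toward the virtual target.

    robot_pos     -- (x, y) position of the robot
    robot_heading -- robot heading in radians
    virtual_pos   -- (x, y) position of the virtual target
    Returns (linear_velocity, angular_velocity) commands; the gains
    k_lin and k_ang are illustrative, not from the patent.
    """
    dx = virtual_pos[0] - robot_pos[0]
    dy = virtual_pos[1] - robot_pos[1]
    distance = math.hypot(dx, dy)
    bearing = math.atan2(dy, dx)
    # Smallest signed heading error, wrapped into [-pi, pi).
    err = (bearing - robot_heading + math.pi) % (2 * math.pi) - math.pi
    return k_lin * distance, k_ang * err

# Virtual target 2 m straight ahead of the robot: drive forward, no turn.
print(follow_step((0, 0), 0.0, (2, 0)))  # (1.0, 0.0)
```

A real controller would also cap velocities and stop inside a dead-band around the virtual target to keep the prescribed following distance.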
  • The robot can also mount a camera on its body, installed on a single-axis or multi-axis gimbal, so that the lens can always face the physical target, effectively avoiding the angular disturbance caused by the robot's own rotation.
  • FIG. 6 is a flow chart of a method of following a physical object in accordance with another embodiment of the present invention.
  • a method for following a physical target includes the following steps:
  • The tracking device can be equipped with a sensor capable of detecting the real-time position of the physical target, with the sensor mounted facing the physical target, so that the device obtains the real-time position of the physical target transmitted by the sensor.
  • the tracking device can also detect the real-time location of the physical target through UWB radio frequency communication technology.
  • S620: predict the forward direction of the physical target according to changes in its real-time position.
  • S630: according to the real-time position of the physical target, obtain a virtual target as an arbitrarily specified point in a relative coordinate system oriented along the forward direction of the physical target, and follow the virtual target.
  • The forward direction predicted from the physical target's positions is then used, together with the real-time position of the physical target, to obtain a virtual target as an arbitrarily specified point in a relative coordinate system oriented along the forward direction, and the virtual target is followed. The virtual target may be a point on the extension line of the forward direction, or it may be set at any position in any specified direction, such as the left/right side, front-right, or rear-right.
  • As shown in FIG. 2, when the physical target 400 moves forward/backward, the following robot 300 obtains the virtual target 200 by extending a distance along the forward direction; the robot 300 follows the virtual target 200 forward/backward while keeping a certain distance from the physical target 400.
  • The left side of FIG. 4 shows the physical target 400 turning right: the robot 300 needs to rotate its body and perform a large lateral displacement to follow the virtual target 200 at a certain distance. The right side of FIG. 4 shows the physical target 400 translating laterally left and right: the robot 300 needs to rotate its body to follow the lateral translation of the virtual target 200 at a certain distance.
  • In this way, the forward direction is predicted from changes in the physical target's position, a virtual target is created ahead along that direction, and the virtual target in front is followed, so that the follower stays within the user's field of view, increasing the user's sense of security.
  • Figure 8 is a structural view of a robot in accordance with another embodiment of the present invention.
  • a robot 800 including a wheel and/or a rotor, according to an embodiment of the present invention, further includes a second detection module 810, a second prediction module 820, and a second control module 830.
  • the second detecting module 810 is configured to detect a real-time location of the physical target.
  • The second prediction module 820 is configured to predict the forward direction of the physical target according to changes in its real-time position; further, it may predict the forward direction according to the incremental relationship between successive real-time positions of the physical target.
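A minimal sketch of predicting the heading from the incremental relationship of successive positions follows. Using the displacement between the newest sample and the most recent sufficiently separated earlier sample is an illustrative choice; the patent does not prescribe a particular filter.

```python
import math

def heading_from_positions(positions, min_step=1e-6):
    """Predict the forward direction from successive real-time positions.

    positions -- sequence of (x, y) samples, oldest first.
    Uses the displacement between the newest sample and the most recent
    earlier sample at least min_step away (an assumption for
    illustration). Returns a unit vector (x, y), or None if the target
    has not moved.
    """
    last = positions[-1]
    for prev in reversed(positions[:-1]):
        dx = last[0] - prev[0]
        dy = last[1] - prev[1]
        norm = math.hypot(dx, dy)
        if norm > min_step:
            return dx / norm, dy / norm
    return None

# Target moving steadily along +x.
print(heading_from_positions([(0, 0), (1, 0), (2, 0)]))  # (1.0, 0.0)
```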
  • The second control module 830 is configured to obtain, according to the real-time position of the physical target, a virtual target as an arbitrarily specified point in a relative coordinate system oriented along the forward direction of the physical target, and to control the robot to follow the virtual target.
  • In this robot, the forward direction is predicted from changes in the physical target's position, a virtual target is created ahead along that direction, and the virtual target in front is followed, with the robot positioned between the physical target and the virtual target, so that the follower stays within the user's field of view, increasing the user's sense of security.
  • The robot can also mount a camera on its body, installed on a single-axis or multi-axis gimbal, so that the lens can always face the physical target, effectively avoiding the angular disturbance caused by the robot's own rotation.
  • Each of the above-described robot 500 or robot 800 may be implemented in whole or in part by software, hardware, and combinations thereof.
  • The above modules may be embedded in, or independent of, the processor of the robot in hardware form, or stored in the robot's memory in software form so that the processor can invoke them to perform the corresponding operations.
  • module and the like are intended to mean a computer-related entity, which may be hardware, a combination of hardware and software, software, or software in execution.
  • a module can be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer.
  • both the application running on the robot and the robot can be modules.
  • One or more modules can reside within a process and/or a thread of execution, and a module can be located in a computer and/or distributed between two or more computers.
  • In one embodiment, a robot is provided whose internal structure may be as shown in FIG. 9.
  • The robot includes a processor, a memory, a detector, and a network interface connected by a system bus. The processor of the robot provides computation and control capabilities.
  • the memory of the robot includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system and a computer program.
  • the internal memory provides an environment for operation of an operating system and computer programs in a non-volatile storage medium.
  • The detector of the robot is configured to detect the position of the physical target and a plurality of feature points on the physical target, or to detect the real-time position of the physical target.
  • the robot's network interface is used to communicate with external terminals via a network connection.
  • the computer program is executed by the processor to implement a follow-up method of a physical object.
  • FIG. 9 is only a block diagram of part of the structure related to the solution of the present application and does not limit the robot to which the solution is applied; a specific robot may include more or fewer components than shown in the figure, combine some components, or have a different arrangement of components.
  • A robot includes a wheel and/or a rotor, and further includes a first detector, a memory, and a processor. The first detector is configured to detect the position of the physical target and a plurality of feature points on the physical target.
  • The memory stores a computer program that, when executed by the processor, causes the processor to: predict the forward direction of the physical target according to the plurality of feature points; and, according to the position of the physical target, obtain a virtual target as an arbitrarily specified point in a relative coordinate system oriented along the forward direction of the physical target, and control the robot to follow the virtual target.
  • When performing the step of predicting the forward direction of the physical target according to the plurality of feature points, the computer program causes the processor to obtain the forward direction of the physical target according to the positional relationship of the plurality of feature points.
  • The first detector includes at least one of a lidar, a vision sensor, an ultrasonic sensor, or a UWB detector.
  • Based on the feature-point information of the physical target, this robot predicts the forward direction, obtains a virtual target along that direction, and follows the virtual target in front, so that the follower stays within the user's field of view, increasing the user's sense of security.
  • A robot includes a wheel and/or a rotor, and further includes a second detector, a memory, and a processor. The second detector is configured to detect the real-time position of the physical target. The memory stores a computer program.
  • When executed by the processor, the computer program causes the processor to: predict the forward direction of the physical target according to changes in its real-time position; and, according to the real-time position of the physical target, obtain a virtual target as an arbitrarily specified point in a relative coordinate system oriented along the forward direction of the physical target, and control the robot to follow the virtual target.
  • When performing the step of predicting the forward direction according to changes in the real-time position, the computer program causes the processor to specifically predict the forward direction of the physical target according to the incremental relationship between successive real-time positions.
  • Based on changes in the physical target's position, this robot predicts the forward direction, creates a virtual target along that direction, and follows the virtual target in front, so that the follower stays within the user's field of view, increasing the user's sense of security.
  • One or more non-transitory computer-readable storage media store a computer program that, when executed by one or more processors, causes the one or more processors to: detect the position of a physical target and a plurality of feature points on the physical target; predict the forward direction of the physical target according to the plurality of feature points; and, according to the position of the physical target, obtain a virtual target as an arbitrarily specified point in a relative coordinate system oriented along the forward direction of the physical target, and follow the virtual target.
  • When performing the step of predicting the forward direction of the physical target according to the plurality of feature points, the computer program causes the processor to obtain the forward direction of the physical target according to the positional relationship of the plurality of feature points.
  • multiple feature points in the physical target are detected by lidar, visual sensors, ultrasonic sensors, or UWB radio frequency communication techniques.
  • Based on the feature-point information of the physical target, the computer-readable storage medium predicts the forward direction, obtains a virtual target along that direction, and follows the virtual target in front, so that the follower stays within the user's field of view, increasing the user's sense of security.
  • One or more non-transitory computer-readable storage media store a computer program that, when executed by one or more processors, causes the one or more processors to: detect the real-time position of a physical target; predict the forward direction of the physical target according to changes in its real-time position; and, according to the real-time position of the physical target, obtain a virtual target as an arbitrarily specified point in a relative coordinate system oriented along the forward direction of the physical target, and follow the virtual target.
  • When performing the step of predicting the forward direction according to changes in the real-time position, the computer program causes the processor to specifically predict the forward direction of the physical target according to the incremental relationship between successive real-time positions.
  • Based on changes in the physical target's position, the computer-readable storage medium predicts the forward direction, creates a virtual target along that direction, and follows the virtual target in front, so that the follower stays within the user's field of view, increasing the user's sense of security.
  • Non-volatile memory can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory can include random access memory (RAM) or external cache memory.
  • RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
  • first and second are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated.
  • features defining “first” or “second” may include at least one of the features, either explicitly or implicitly.
  • the meaning of "a plurality” is at least two, such as two, three, etc., unless specifically defined otherwise.
  • Unless otherwise explicitly specified and limited, the terms "installation", "connected", "coupled", "fixed" and the like shall be understood broadly: a connection may be fixed, detachable, or integral; mechanical or electrical; direct, or indirect through an intermediate medium; and it may be an internal communication between two elements or an interaction between two elements.
  • the specific meanings of the above terms in the present invention can be understood on a case-by-case basis.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Acoustics & Sound (AREA)
  • Optics & Photonics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)

Abstract

A method for following a physical target, comprising the following steps: detecting the position of a physical target and a plurality of feature points in the physical target (S110); predicting the forward direction of the physical target according to the plurality of feature points in the physical target (S120); and, according to the position of the physical target, obtaining a virtual target at an arbitrarily designated point in a relative coordinate system whose front is the forward direction of the physical target, and following the virtual target (S130).

Description

Method for following a physical target, robot, and computer-readable storage medium
This application claims priority to Chinese Patent Application No. 201611155557.8, entitled "Method for following a physical target and robot", filed with the China Patent Office on December 14, 2016, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of robotics, and in particular to a method for following a physical target, a robot, and a computer-readable storage medium.
Background
In an ordinary following function, the physical target is in front and the robot follows behind it in the target's direction while keeping a certain distance, thereby completing the following action. Although this tracking method is technically easy to implement, the user experience is not optimal. In particular, when the object doing the automatic following is valuable and quiet, such as a self-balancing scooter or a suitcase, it is always behind the user and never appears in the user's field of view, which causes the user psychological insecurity and practical dangers such as accidental collision and loss. Likewise, a shooting tool such as a drone can only capture the user from behind and cannot achieve frontal shots like those in films, lowering the value of the finished footage.
Summary
According to various embodiments of the present application, a method for following a physical target, a robot, and a computer-readable storage medium are provided.
A method for following a physical target includes: detecting the position of a physical target and a plurality of feature points in the physical target; predicting the forward direction of the physical target according to the plurality of feature points in the physical target; and, according to the position of the physical target, obtaining a virtual target at an arbitrarily designated point in a relative coordinate system whose front is the forward direction of the physical target, and following the virtual target.
A robot includes wheels and/or rotors, and further includes a first detector, a memory, and a processor. The first detector is configured to detect the position of a physical target and a plurality of feature points in the physical target. The memory stores a computer program which, when executed by the processor, causes the processor to perform the following steps: predicting the forward direction of the physical target according to the plurality of feature points in the physical target; and, according to the position of the physical target, obtaining a virtual target at an arbitrarily designated point in a relative coordinate system whose front is the forward direction of the physical target, and controlling the robot to follow the virtual target.
One or more non-volatile computer-readable storage media store a computer program which, when executed by one or more processors, causes the one or more processors to perform the following steps: detecting the position of a physical target and a plurality of feature points in the physical target; predicting the forward direction of the physical target according to the plurality of feature points in the physical target; and, according to the position of the physical target, obtaining a virtual target at an arbitrarily designated point in a relative coordinate system whose front is the forward direction of the physical target, and following the virtual target.
A method for following a physical target includes: detecting the real-time position of a physical target; predicting the forward direction of the physical target according to changes in the real-time position of the physical target; and, according to the real-time position of the physical target, obtaining a virtual target at an arbitrarily designated point in a relative coordinate system whose front is the forward direction of the physical target, and following the virtual target.
A robot includes wheels and/or rotors, and further includes a second detector, a memory, and a processor. The second detector is configured to detect the real-time position of a physical target. The memory stores a computer program which, when executed by the processor, causes the processor to perform the following steps: predicting the forward direction of the physical target according to changes in the real-time position of the physical target; and, according to the real-time position of the physical target, obtaining a virtual target at an arbitrarily designated point in a relative coordinate system whose front is the forward direction of the physical target, and controlling the robot to follow the virtual target.
One or more non-volatile computer-readable storage media store a computer program which, when executed by one or more processors, causes the one or more processors to perform the following steps: detecting the real-time position of a physical target; predicting the forward direction of the physical target according to changes in the real-time position of the physical target; and, according to the real-time position of the physical target, obtaining a virtual target at an arbitrarily designated point in a relative coordinate system whose front is the forward direction of the physical target, and following the virtual target.
Details of one or more embodiments of the present application are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the present application will become apparent from the description, the drawings, and the claims.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the accompanying drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are merely some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a flowchart of a method for following a physical target according to an embodiment of the present application;
FIG. 2 is a schematic diagram of initial positions in a method for following a physical target according to an embodiment of the present application;
FIG. 3 is a schematic diagram of determining the front of a target from two feature points according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a physical target turning right and translating right according to an embodiment of the present application;
FIG. 5 is a structural diagram of a robot according to an embodiment of the present application;
FIG. 6 is a flowchart of a method for following a physical target according to another embodiment of the present application;
FIG. 7 is a schematic diagram of determining the front of a target from incremental position points according to an embodiment of the present application;
FIG. 8 is a structural diagram of a robot according to another embodiment of the present application; and
FIG. 9 is a structural block diagram of a robot according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which identical or similar reference numerals throughout denote identical or similar elements or elements having identical or similar functions. The embodiments described below with reference to the drawings are exemplary; they are intended only to explain the present application and should not be construed as limiting it.
A method for following a physical target, a robot, and a computer-readable storage medium according to embodiments of the present application are described below with reference to the accompanying drawings.
FIG. 1 is a flowchart of a method for following a physical target according to an embodiment of the present application.
As shown in FIG. 1, a method for following a physical target according to an embodiment of the present invention includes the following steps:
S110: Detect the position of a physical target and a plurality of feature points in the physical target.
Specifically, to make the position and feature points of the physical target easier to detect, the physical target may carry features that are convenient to recognize, such as visual color information or a surface that reflects light, ultrasound, and the like, or it may carry a device that actively emits an identification signal. Meanwhile, the tracking device may be equipped with sensors that can detect the plurality of feature points of the physical target, such as a lidar, a vision sensor, or an ultrasonic sensor, and the sensors may be mounted facing the physical target.
In one embodiment, the tracking device may also detect the plurality of feature points of the physical target using UWB (Ultra-Wideband) radio-frequency communication technology. UWB is a carrier-free communication technology that transmits data using non-sinusoidal narrow pulses on the nanosecond-to-microsecond scale. An ultra-wideband system features strong penetration, low power consumption, good multipath resistance, high security, low system complexity, and precise positioning accuracy, so UWB radio-frequency communication can accurately detect the plurality of feature points in the physical target.
S120: Predict the forward direction of the physical target according to the plurality of feature points in the physical target.
As shown in FIG. 3, when the robot tracks the physical target 400, the forward direction of the physical target 400 is obtained from the positional relationship among the plurality of feature points in the physical target 400; that is, feature point 1 and feature point 2 are detected, and the forward direction is computed from the positional relationship between feature point 1 and feature point 2. A direction obtained this way is not only real-time but also reliable.
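The positional-relationship computation above is not spelled out in the text. A minimal sketch, under the assumption that the two feature points are laterally placed markers (e.g. left and right) on a 2-D plane and that "forward" is the perpendicular of the segment joining them, might look like this:

```python
import math

def forward_direction(p1, p2):
    """Estimate a target's forward direction from two feature points.

    Assumes p1 and p2 are the (x, y) positions of two laterally placed
    markers; forward is taken as the unit vector perpendicular to the
    segment p1 -> p2. Which of the two perpendiculars counts as
    "forward" depends on the marker convention; here we rotate the
    vector p1 -> p2 by -90 degrees, i.e. (dx, dy) -> (dy, -dx).
    """
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    norm = math.hypot(dx, dy)
    if norm == 0:
        raise ValueError("feature points coincide")
    return (dy / norm, -dx / norm)
```

With markers at (0, 0) and (2, 0), the segment points along +x, so this convention yields a forward direction of (0, -1); swapping the marker order flips the direction, which is exactly the convention choice flagged above.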
S130: According to the position of the physical target, obtain a virtual target at an arbitrarily designated point in a relative coordinate system whose front is the forward direction of the physical target, and follow the virtual target.
Specifically, using the forward direction predicted from the positions of the plurality of feature points, a virtual target is obtained, according to the position of the physical target, at an arbitrarily designated point in a relative coordinate system whose front is the forward direction of the physical target, and that virtual target is followed. The virtual target may be a position on the extension line of the straight-ahead direction, or it may be placed at any position, for example in any designated direction such as the right/left side, right-front, or right-rear, thereby achieving following at any position in any direction.
As can be seen from FIG. 2, when the physical target 400 moves forward/backward, the tracking robot 300 obtains a virtual target 200 at a certain distance along the extension of the forward direction, follows this virtual target 200 forward/backward, and keeps a certain distance from the physical target 400. The left side of FIG. 4 shows the physical target 400 turning right: the robot 300 must rotate its body laterally and make a large lateral displacement to follow the virtual target 200 at a certain distance. The right side of FIG. 4 shows the physical target 400 translating laterally left/right: the robot 300 must rotate its body laterally to follow the lateral translation of the virtual target 200 at a certain distance.
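The "arbitrarily designated point in the relative coordinate system" could be realized as follows. This is an illustrative 2-D sketch; `offset_forward` and `offset_right` are hypothetical free parameters of my own naming, since the text leaves the exact parameterization open:

```python
def virtual_target(position, forward, offset_forward, offset_right=0.0):
    """Place a virtual target in the physical target's relative frame.

    position: (x, y) of the physical target.
    forward:  unit vector of its predicted forward direction.
    offset_forward > 0 puts the virtual target ahead of the physical
    target (straight-ahead following); a nonzero offset_right shifts it
    sideways, enabling following at the right/left flank, right-front,
    right-rear, and so on.
    """
    fx, fy = forward
    # The "right" axis is the forward vector rotated by -90 degrees.
    rx, ry = fy, -fx
    return (position[0] + offset_forward * fx + offset_right * rx,
            position[1] + offset_forward * fy + offset_right * ry)
```

For a target at the origin facing (0, 1), an `offset_forward` of 2 yields the straight-ahead point (0, 2); adding `offset_right=1` moves the designated point to the target's right-front, matching the "any designated direction" behavior described above.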
According to the method for following a physical target of the present invention, the forward direction is predicted based on information about the plurality of feature points of the physical target, a virtual target is obtained in the forward direction, and this virtual target ahead is followed, so that the tracking stays within the user's field of view, increasing the user's sense of security.
FIG. 5 is a structural diagram of a robot according to an embodiment of the present invention.
As shown in FIG. 5, a robot 500 according to an embodiment of the present invention includes wheels and/or rotors, and further includes a first detection module 510, a first prediction module 520, and a first control module 530.
The first detection module 510 is configured to detect the position of a physical target and a plurality of feature points in the physical target. The first prediction module 520 is configured to predict the forward direction of the physical target according to the plurality of feature points in the physical target. The first control module 530 is configured to obtain, according to the position of the physical target, a virtual target at an arbitrarily designated point in a relative coordinate system whose front is the forward direction of the physical target, and to control the robot to follow the virtual target.
According to the robot of the present invention, the forward direction is predicted based on information about the plurality of feature points of the physical target, a virtual target is created in the forward direction, and this virtual target ahead is followed; the robot sits between the physical target and the virtual target, so that the tracking stays within the user's field of view, increasing the user's sense of security.
In some embodiments, the first prediction module is configured to obtain the forward direction of the physical target from the positional relationship among the plurality of feature points in the physical target.
In some embodiments, the first detection module includes at least one of a lidar, a vision sensor, an ultrasonic sensor, or a UWB detector.
For shooting purposes, in some embodiments the robot may also carry a camera on its body, mounted on a single-axis or multi-axis gimbal, so that the lens always faces the physical target, effectively avoiding angle disturbances caused by the robot's own rotation.
It should be noted that the specific implementation of the robot in the embodiments of the present invention is similar to that of the method for following a physical target in the embodiments of the present invention; for details, see the description of the method, which is not repeated here to reduce redundancy.
In addition, other components and functions of the robot according to the embodiments of the present invention are known to those of ordinary skill in the art and are not described here to reduce redundancy.
FIG. 6 is a flowchart of a method for following a physical target according to another embodiment of the present invention.
As shown in FIG. 6, a method for following a physical target according to another embodiment of the present invention includes the following steps:
S610: Detect the real-time position of a physical target.
Specifically, the tracking device may be equipped with a sensor that can detect the position of the physical target, so that it receives the real-time position of the physical target transmitted by the sensor; the sensor may be mounted facing the physical target.
In one embodiment, the tracking device may be equipped with sensors that can detect the real-time position of the physical target, such as a lidar, a vision sensor, or an ultrasonic sensor, mounted facing the physical target. Alternatively, the tracking device may detect the real-time position of the physical target using UWB radio-frequency communication technology.
S620: Predict the forward direction of the physical target according to changes in its real-time position.
As shown in FIG. 7, by detecting the position of the physical target 400 and computing position increments, the forward direction is predicted to a certain extent, so that the robot 300 advances along the predicted position points 1-3. Since only the position of the physical target needs to be recognized, lower-specification sensors can be used, reducing the cost of tracking.
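The incremental computation above might be sketched as follows, under the assumption that the heading is the normalized sum of the recent position increments; the text only states that position increments are used, so the exact estimator is an illustration:

```python
import math

def predict_direction(positions):
    """Predict forward direction from a history of real-time positions.

    positions: chronological list of (x, y) samples. Summing the
    increments between consecutive samples (equivalent to last minus
    first) and normalizing gives a heading estimate; using several
    samples rather than only the latest pair damps measurement noise.
    """
    if len(positions) < 2:
        raise ValueError("need at least two position samples")
    dx = sum(b[0] - a[0] for a, b in zip(positions, positions[1:]))
    dy = sum(b[1] - a[1] for a, b in zip(positions, positions[1:]))
    norm = math.hypot(dx, dy)
    if norm == 0:
        raise ValueError("target has not moved")
    return (dx / norm, dy / norm)
```

A target sampled at (0, 0), (1, 0), (2, 0) yields the heading (1, 0), i.e. straight along +x; only position samples are consumed, which is why this variant tolerates the lower-specification sensors mentioned above.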
S630: According to the real-time position of the physical target, obtain a virtual target at an arbitrarily designated point in a relative coordinate system whose front is the forward direction of the physical target, and follow the virtual target.
Specifically, using the forward direction predicted from the position of the physical target, a virtual target is obtained, according to the position of the physical target, at an arbitrarily designated point in a relative coordinate system whose front is the forward direction of the physical target, and that virtual target is followed. The virtual target may be a position on the extension line of the straight-ahead direction, or it may be placed at any position, for example in any designated direction such as the right/left side, right-front, or right-rear, thereby achieving following at any position in any direction.
As can be seen from FIG. 2, when the physical target 400 moves forward/backward, the tracking robot 300 obtains a virtual target 200 at a certain distance along the extension of the forward direction, follows this virtual target 200 forward/backward, and keeps a certain distance from the physical target 400. The left side of FIG. 4 shows the physical target 400 turning right: the robot 300 must rotate its body laterally and make a large lateral displacement to follow the virtual target 200 at a certain distance. The right side of FIG. 4 shows the physical target 400 translating laterally left/right: the robot 300 must rotate its body laterally to follow the lateral translation of the virtual target 200 at a certain distance.
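The following behavior in the two figures could be approximated by a simple step controller. The proportional step and the `stop_radius` stand-off below are my own illustrative assumptions; the text does not specify a control law:

```python
import math

def follow_step(robot_pos, virtual_pos, gain=0.5, stop_radius=0.1):
    """One control step of following the virtual target.

    Moves the robot a fraction `gain` of the remaining distance toward
    the virtual target, and holds position once inside `stop_radius`,
    so the robot keeps a stand-off rather than driving into the point
    it is chasing.
    """
    dx = virtual_pos[0] - robot_pos[0]
    dy = virtual_pos[1] - robot_pos[1]
    dist = math.hypot(dx, dy)
    if dist <= stop_radius:
        return robot_pos          # close enough: hold position
    return (robot_pos[0] + gain * dx, robot_pos[1] + gain * dy)
```

Iterating `follow_step` moves the robot exponentially toward the virtual target; because the virtual target itself sits a fixed offset from the physical target, the distance to the physical target is maintained as described in FIG. 2.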
According to the method for following a physical target of the present invention, the forward direction is predicted based on changes in the position of the physical target, a virtual target is created in the forward direction, and this virtual target ahead is followed, so that the tracking stays within the user's field of view, increasing the user's sense of security.
FIG. 8 is a structural diagram of a robot according to another embodiment of the present invention.
As shown in FIG. 8, a robot 800 according to an embodiment of the present invention includes wheels and/or rotors, and further includes a second detection module 810, a second prediction module 820, and a second control module 830.
The second detection module 810 is configured to detect the real-time position of a physical target. The second prediction module 820 is configured to predict the forward direction of the physical target according to changes in its real-time position and, further, according to the incremental relationship of its real-time position. The second control module 830 is configured to obtain, according to the real-time position of the physical target, a virtual target at an arbitrarily designated point in a relative coordinate system whose front is the forward direction of the physical target, and to control the robot to follow the virtual target.
According to the robot of the present invention, the forward direction is predicted based on changes in the position of the physical target, a virtual target is created in the forward direction, and this virtual target ahead is followed; the robot sits between the physical target and the virtual target, so that the tracking stays within the user's field of view, increasing the user's sense of security.
For shooting purposes, in some embodiments the robot may also carry a camera on its body, mounted on a single-axis or multi-axis gimbal, so that the lens always faces the physical target, effectively avoiding angle disturbances caused by the robot's own rotation.
It should be noted that the specific implementation of the robot in the embodiments of the present invention is similar to that of the method for following a physical target in the embodiments of the present invention; for details, see the description of the method, which is not repeated here to reduce redundancy.
In addition, other components and functions of the robot according to the embodiments of the present invention are known to those of ordinary skill in the art and are not described here to reduce redundancy.
Each of the modules in the robot 500 or robot 800 above may be implemented wholly or partly in software, hardware, or a combination thereof. The modules may be embedded in, or independent of, a processor in the robot in hardware form, or stored in software form in a memory in the robot, so that the processor can invoke and execute the operations corresponding to each module.
As used in this application, the term "module" and the like is intended to denote a computer-related entity, which may be hardware, a combination of hardware and software, software, or software in execution. For example, a module may be, but is not limited to, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a robot and the robot itself may be modules. One or more modules may reside within a process and/or a thread of execution, and a module may be localized on one computer and/or distributed between two or more computers.
In one embodiment, a robot is provided whose internal structure may be as shown in the block diagram of FIG. 9. The robot includes a processor, a memory, a detector, and a network interface connected by a system bus. The processor of the robot provides computing and control capability. The memory of the robot includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program; the internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The detector of the robot is configured to detect the position of a physical target and a plurality of feature points in the physical target, or to detect the real-time position of the physical target. The network interface of the robot communicates with external terminals over a network connection. The computer program, when executed by the processor, implements a method for following a physical target. Those skilled in the art will understand that the structure shown in FIG. 9 is only a block diagram of part of the structure relevant to the solution of the present application and does not constitute a limitation on the robot to which the solution is applied; a specific robot may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a robot is provided that includes wheels and/or rotors, and further includes a first detector, a memory, and a processor. The first detector is configured to detect the position of a physical target and a plurality of feature points in the physical target. The memory stores a computer program which, when executed by the processor, causes the processor to perform the following steps: predicting the forward direction of the physical target according to the plurality of feature points in the physical target; and, according to the position of the physical target, obtaining a virtual target at an arbitrarily designated point in a relative coordinate system whose front is the forward direction of the physical target, and controlling the robot to follow the virtual target.
In one embodiment, when performing the step of predicting the forward direction of the physical target according to the plurality of feature points in the physical target, the computer program causes the processor specifically to perform the following step: obtaining the forward direction of the physical target from the positional relationship among the plurality of feature points in the physical target.
In one embodiment, the first detector includes at least one of a lidar, a vision sensor, an ultrasonic sensor, or a UWB detector.
The above robot predicts the forward direction based on information about the plurality of feature points of the physical target, obtains a virtual target in the forward direction, and follows this virtual target ahead, so that the tracking stays within the user's field of view, increasing the user's sense of security.
In one embodiment, a robot is provided that includes wheels and/or rotors, and further includes a second detector, a memory, and a processor. The second detector is configured to detect the real-time position of a physical target. The memory stores a computer program which, when executed by the processor, causes the processor to perform the following steps: predicting the forward direction of the physical target according to changes in its real-time position; and, according to the real-time position of the physical target, obtaining a virtual target at an arbitrarily designated point in a relative coordinate system whose front is the forward direction of the physical target, and controlling the robot to follow the virtual target.
In one embodiment, when performing the step of predicting the forward direction of the physical target according to changes in its real-time position, the computer program causes the processor specifically to perform the following step: predicting the forward direction of the physical target according to the incremental relationship of its real-time position.
The above robot predicts the forward direction based on changes in the position of the physical target, creates a virtual target in the forward direction, and follows this virtual target ahead, so that the tracking stays within the user's field of view, increasing the user's sense of security.
In one embodiment, one or more non-volatile computer-readable storage media store a computer program which, when executed by one or more processors, causes the one or more processors to perform the following steps: detecting the position of a physical target and a plurality of feature points in the physical target; predicting the forward direction of the physical target according to the plurality of feature points in the physical target; and, according to the position of the physical target, obtaining a virtual target at an arbitrarily designated point in a relative coordinate system whose front is the forward direction of the physical target, and following the virtual target.
In one embodiment, when performing the step of predicting the forward direction of the physical target according to the plurality of feature points in the physical target, the computer program causes the processor specifically to perform the following step: obtaining the forward direction of the physical target from the positional relationship among the plurality of feature points in the physical target.
In one embodiment, the plurality of feature points in the physical target are detected by a lidar, a vision sensor, an ultrasonic sensor, or UWB radio-frequency communication technology.
The above computer-readable storage medium predicts the forward direction based on information about the plurality of feature points of the physical target, obtains a virtual target in the forward direction, and follows this virtual target ahead, so that the tracking stays within the user's field of view, increasing the user's sense of security.
In one embodiment, one or more non-volatile computer-readable storage media store a computer program which, when executed by one or more processors, causes the one or more processors to perform the following steps: detecting the real-time position of a physical target; predicting the forward direction of the physical target according to changes in its real-time position; and, according to the real-time position of the physical target, obtaining a virtual target at an arbitrarily designated point in a relative coordinate system whose front is the forward direction of the physical target, and following the virtual target.
In one embodiment, when performing the step of predicting the forward direction of the physical target according to changes in its real-time position, the computer program causes the processor specifically to perform the following step: predicting the forward direction of the physical target according to the incremental relationship of its real-time position.
The above computer-readable storage medium predicts the forward direction based on changes in the position of the physical target, creates a virtual target in the forward direction, and follows this virtual target ahead, so that the tracking stays within the user's field of view, increasing the user's sense of security.
Those of ordinary skill in the art will understand that all or part of the processes in the methods of the above embodiments can be accomplished by a computer program instructing relevant hardware. The program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database, or other media used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as such combinations contain no contradiction, they should all be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be understood as limiting the patent scope of the present application. It should be noted that a person of ordinary skill in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.
In addition, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, features defined with "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, for example two, three, etc., unless otherwise expressly and specifically limited.
In the present invention, unless otherwise expressly specified and limited, terms such as "installed", "connected", "coupled", and "fixed" shall be understood broadly; for example, a connection may be fixed, detachable, or integral; mechanical or electrical; direct, or indirect through an intermediate medium; and it may be internal communication between two elements or an interaction between two elements, unless otherwise expressly limited. For a person of ordinary skill in the art, the specific meanings of the above terms in the present invention can be understood on a case-by-case basis.
In the description of this specification, references to the terms "one embodiment", "some embodiments", "example", "specific example", "some examples", and the like mean that a specific feature, structure, material, or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic references to these terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples. In addition, where no contradiction arises, a person skilled in the art may combine different embodiments or examples described in this specification and the features of different embodiments or examples.
Although embodiments of the present invention have been shown and described above, it should be understood that the above embodiments are exemplary and should not be construed as limiting the present application; a person of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present application.

Claims (15)

  1. A method for following a physical target, comprising:
    detecting the position of a physical target and a plurality of feature points in the physical target;
    predicting the forward direction of the physical target according to the plurality of feature points in the physical target; and
    according to the position of the physical target, obtaining a virtual target at an arbitrarily designated point in a relative coordinate system whose front is the forward direction of the physical target, and following the virtual target.
  2. The method according to claim 1, wherein the step of predicting the forward direction of the physical target according to the plurality of feature points in the physical target comprises:
    obtaining the forward direction of the physical target from the positional relationship among the plurality of feature points in the physical target.
  3. The method according to claim 2, wherein the plurality of feature points in the physical target are detected by a lidar, a vision sensor, an ultrasonic sensor, or UWB radio-frequency communication technology.
  4. A robot, comprising wheels and/or rotors, and further comprising a first detector, a memory, and a processor, wherein the first detector is configured to detect the position of a physical target and a plurality of feature points in the physical target, and the memory stores a computer program which, when executed by the processor, causes the processor to perform the following steps:
    predicting the forward direction of the physical target according to the plurality of feature points in the physical target; and
    according to the position of the physical target, obtaining a virtual target at an arbitrarily designated point in a relative coordinate system whose front is the forward direction of the physical target, and controlling the robot to follow the virtual target.
  5. The robot according to claim 4, wherein, when performing the step of predicting the forward direction of the physical target according to the plurality of feature points in the physical target, the computer program causes the processor specifically to perform the following step: obtaining the forward direction of the physical target from the positional relationship among the plurality of feature points in the physical target.
  6. The robot according to claim 5, wherein the first detector includes at least one of a lidar, a vision sensor, an ultrasonic sensor, or a UWB detector.
  7. One or more non-volatile computer-readable storage media storing a computer program which, when executed by one or more processors, causes the one or more processors to perform the following steps:
    detecting the position of a physical target and a plurality of feature points in the physical target;
    predicting the forward direction of the physical target according to the plurality of feature points in the physical target; and
    according to the position of the physical target, obtaining a virtual target at an arbitrarily designated point in a relative coordinate system whose front is the forward direction of the physical target, and following the virtual target.
  8. The computer-readable storage medium according to claim 7, wherein, when performing the step of predicting the forward direction of the physical target according to the plurality of feature points in the physical target, the computer program causes the processor specifically to perform the following step: obtaining the forward direction of the physical target from the positional relationship among the plurality of feature points in the physical target.
  9. The computer-readable storage medium according to claim 8, wherein the plurality of feature points in the physical target are detected by a lidar, a vision sensor, an ultrasonic sensor, or UWB radio-frequency communication technology.
  10. A method for following a physical target, comprising:
    detecting the real-time position of a physical target;
    predicting the forward direction of the physical target according to changes in the real-time position of the physical target; and
    according to the real-time position of the physical target, obtaining a virtual target at an arbitrarily designated point in a relative coordinate system whose front is the forward direction of the physical target, and following the virtual target.
  11. The method according to claim 10, wherein the step of predicting the forward direction of the physical target according to changes in the real-time position of the physical target comprises:
    predicting the forward direction of the physical target according to the incremental relationship of the real-time position of the physical target.
  12. A robot, comprising wheels and/or rotors, and further comprising a second detector, a memory, and a processor, wherein the second detector is configured to detect the real-time position of a physical target, and the memory stores a computer program which, when executed by the processor, causes the processor to perform the following steps:
    predicting the forward direction of the physical target according to changes in the real-time position of the physical target; and
    according to the real-time position of the physical target, obtaining a virtual target at an arbitrarily designated point in a relative coordinate system whose front is the forward direction of the physical target, and controlling the robot to follow the virtual target.
  13. The robot according to claim 12, wherein, when performing the step of predicting the forward direction of the physical target according to changes in its real-time position, the computer program causes the processor specifically to perform the following step: predicting the forward direction of the physical target according to the incremental relationship of its real-time position.
  14. One or more non-volatile computer-readable storage media storing a computer program which, when executed by one or more processors, causes the one or more processors to perform the following steps:
    detecting the real-time position of a physical target;
    predicting the forward direction of the physical target according to changes in the real-time position of the physical target; and
    according to the real-time position of the physical target, obtaining a virtual target at an arbitrarily designated point in a relative coordinate system whose front is the forward direction of the physical target, and following the virtual target.
  15. The computer-readable storage medium according to claim 14, wherein, when performing the step of predicting the forward direction of the physical target according to changes in its real-time position, the computer program causes the processor specifically to perform the following step: predicting the forward direction of the physical target according to the incremental relationship of its real-time position.
PCT/CN2017/115772 2016-12-14 2017-12-13 Method for following physical target, robot and computer-readable storage medium WO2018108087A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611155557.8 2016-12-14
CN201611155557.8A CN108021128B (zh) 2016-12-14 2016-12-14 Method for following physical target and robot

Publications (1)

Publication Number Publication Date
WO2018108087A1 true WO2018108087A1 (zh) 2018-06-21

Family

ID=62083839

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/115772 WO2018108087A1 (zh) 2016-12-14 2017-12-13 Method for following physical target, robot and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN108021128B (zh)
WO (1) WO2018108087A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111067354A (zh) * 2018-10-19 2020-04-28 佛山市顺德区美的饮水机制造有限公司 Water dispenser and moving method and device thereof

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108829137A (zh) * 2018-05-23 2018-11-16 中国科学院深圳先进技术研究院 Obstacle avoidance method and apparatus for robot target tracking
CN109191182A (zh) * 2018-08-08 2019-01-11 深圳市科脉技术股份有限公司 Commodity navigation method and apparatus based on unmanned supermarket
CN109557944B (zh) * 2018-11-30 2021-08-20 南通大学 Moving target position detection method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140277841A1 (en) * 2013-03-15 2014-09-18 Elizabeth Klicpera Motorized Luggage or Luggage Platform with Wired or Wireless Guidance and Distance Control
CN104075710A (zh) * 2014-04-28 2014-10-01 中国科学院光电技术研究所 Real-time estimation method for the axial attitude of a maneuvering extended target based on track prediction
CN106127117A (zh) * 2016-06-16 2016-11-16 哈尔滨工程大学 Automatic following suitcase based on fast and highly robust binocular-vision recognition and positioning

Also Published As

Publication number Publication date
CN108021128A (zh) 2018-05-11
CN108021128B (zh) 2020-01-10

Similar Documents

Publication Publication Date Title
WO2018108087A1 (zh) Method for following physical target, robot and computer-readable storage medium
EP3633539A3 (en) Method for position detection, device, and storage medium
CN108828527B (zh) Multi-sensor data fusion method and apparatus, vehicle-mounted device, and storage medium
CN107976999A (zh) Mobile robot and obstacle avoidance and path planning method and system therefor
US10775797B2 (en) Method and device for mobile robot to move in proximity to obstacle
US9251705B2 (en) Apparatus and method for detecting moving-object around vehicle
CN107223200B (zh) Navigation method, apparatus, and terminal device
CN103557796B (zh) Three-dimensional positioning system and positioning method based on laser ranging and computer vision
TWI481980B (zh) Electronic device and navigation method thereof
JP5843948B1 (ja) Parking assistance device and parking assistance method
KR20150010318A (ko) Apparatus and method for correcting the offset of a yaw-rate sensor, and vehicle speed control system including the apparatus
CN108287562B (zh) Self-stabilizing multi-sensor obstacle-avoidance ranging system and method for an unmanned aerial vehicle
WO2018094863A1 (zh) Positioning method and device, and computer storage medium
EP3830604B1 (en) Lidar system design to mitigate lidar crosstalk
US11892536B2 (en) Object-detecting device
CN111188549A (zh) Anti-collision method and apparatus applied to a vehicle
US20140267772A1 (en) Robotic total station with image-based target re-acquisition
US9519053B2 (en) Distance measuring apparatus and distance measuring method
WO2022179227A1 (zh) Recharging alignment method and apparatus for a sweeping robot, and sweeping robot
JP2017075881A (ja) Object recognition integration device and object recognition integration method
JP2007255982A (ja) Target track correlation device and method for determining correlation of target tracks
CN108776333B (zh) Secondary cascade data fusion method and system, vehicle-mounted device, and storage medium
JP6391256B2 (ja) Electric door opening/closing control device for a vehicle
KR20190142505A (ko) Real-time object tracking apparatus
CN112415516B (zh) Method and apparatus for sensing an obstacle area in front of a vehicle

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17881581

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17881581

Country of ref document: EP

Kind code of ref document: A1