CN111381587A - Following method and device for following robot - Google Patents

Following method and device for following robot

Info

Publication number
CN111381587A
CN111381587A (application CN201811512154.3A)
Authority
CN
China
Prior art keywords
following
followed object
information
moving pictures
determining
Prior art date
Legal status
Granted
Application number
CN201811512154.3A
Other languages
Chinese (zh)
Other versions
CN111381587B (en)
Inventor
哈融厚
吴迪
黄玉玺
董秋伟
张金凤
张鹏
王鹏飞
王鹏翔
Current Assignee
Beijing Jingbangda Trade Co Ltd
Beijing Jingdong Qianshi Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd and Beijing Jingdong Shangke Information Technology Co Ltd
Priority to CN201811512154.3A
Publication of CN111381587A
Application granted
Publication of CN111381587B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0221 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • G05D1/0223 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0242 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using non-visible light signals, e.g. IR or UV signals
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0253 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
    • G05D1/0255 Control of position or course in two dimensions specially adapted to land vehicles using acoustic signals, e.g. ultra-sonic signals
    • G05D1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • G05D1/0285 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle using signals transmitted via a public communication network, e.g. GSM network

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Acoustics & Sound (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

Embodiments of the application disclose a following method and device for a following robot. One embodiment of the method comprises: acquiring feature point information of a followed object, and acquiring at least one following image captured during a target time period while following the followed object, where the target time period runs from a preset time before the current time up to the current time; determining whether the followed object is lost based on the feature point information and the at least one following image; in response to determining that the followed object is lost, acquiring positioning information for locating the followed object; and determining movement information of the followed object based on the positioning information and re-following the followed object using the movement information. The embodiment keeps the following robot's following of the followed object smooth.

Description

Following method and device for following robot
Technical Field
Embodiments of the present application relate to the field of computer technology, and in particular to a following method and a following device for a following robot.
Background
A following robot is a robot that recognizes a target object and determines its position in order to perform a following function for it. Existing following methods mainly follow a target object as follows: all or some of the target object's feature points are recorded by a deep learning method, or a preset special image or characteristic image information of the target object is recognized directly; after this information about the target object has been obtained and recorded, the captured images are searched to find the target object, and the following robot is controlled according to the relative position between the target object and itself, thereby realizing the following function.
Disclosure of Invention
Embodiments of the present application provide a following method and a following device for a following robot.
In a first aspect, an embodiment of the present application provides a following method for a following robot, including: acquiring feature point information of a followed object, and acquiring at least one following image captured during a target time period while following the followed object, where the target time period runs from a preset time before the current time to the current time; determining whether the followed object is lost based on the feature point information and the at least one following image; in response to determining that the followed object is lost, acquiring positioning information for locating the followed object; and determining movement information of the followed object based on the positioning information, and re-following the followed object using the movement information.
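As a quick orientation, the four steps of the first aspect can be read as a control loop. The sketch below (Python) is only an illustration; every robot.* helper is a hypothetical stand-in for the operations described above, not an interface defined by this application.

    # Hypothetical sketch of the four-step method of the first aspect.
    # All robot.* helpers are assumed stand-ins, not APIs from this application.
    def follow_loop(robot, followed_object, feature_point_info, preset_seconds=30):
        while True:
            # Step 1: following images captured from `preset_seconds` before
            # the current time up to the current time (the target time period).
            images = robot.capture_following_images(preset_seconds)
            # Step 2: decide from the feature point information and the images
            # whether the followed object is lost.
            if not robot.is_lost(feature_point_info, images):
                continue
            # Step 3: on loss, acquire positioning information for the object.
            positioning = robot.get_positioning_info(followed_object)
            # Step 4: derive movement information and re-follow.
            movement = robot.derive_movement(positioning)
            robot.refollow(movement)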
In some embodiments, the positioning information comprises sensor information, the sensor information comprising current position information of the followed object and at least one of: a reporting frequency at which the position of the followed object is reported within the target time period, or at least two pieces of position information continuously reported for the followed object. Determining the movement information of the followed object based on the positioning information and re-following the followed object using the movement information includes: determining whether the current position information is available based on the reporting frequency and/or the at least two pieces of position information; and in response to determining that the current position information is available, generating a following path for following the followed object based on the current position indicated by the current position information and the current position of the following robot, and re-following the followed object along the following path.
In some embodiments, determining whether the current position information is available based on the reporting frequency and/or the at least two pieces of position information comprises: determining whether the difference between the reporting frequency and a preset reporting frequency is smaller than a preset reporting frequency difference threshold; and in response to determining that the difference is smaller than the threshold, determining that the current position information is available.
In some embodiments, determining whether the current position information is available based on the reporting frequency and/or the at least two pieces of position information comprises: for each two continuously reported pieces among the at least two pieces of position information, determining the distance between the positions they respectively indicate as a first distance; determining whether any of the determined first distances is greater than a preset first distance threshold; and in response to determining that none of the first distances is greater than the preset first distance threshold, determining that the current position information is available.
In some embodiments, the positioning information further includes at least two moving pictures of the followed object captured continuously within a preset time period before the followed object was lost; and after determining whether the current position information is available based on the reporting frequency and/or the at least two pieces of position information, the method further comprises: in response to determining that the current position information is unavailable, acquiring a preset road network graph of the area in which the followed object appears in the at least two moving pictures, the road network graph representing the passable roads in that area; determining whether the at least two moving pictures are available based on the number of the at least two moving pictures and/or the positions of the followed object indicated by them; and in response to determining that the at least two moving pictures are available, predicting the moving direction of the followed object as a first moving direction using a Kalman filtering algorithm based on the at least two moving pictures and the road network graph, and re-following the followed object along the first moving direction.
In some embodiments, determining whether the at least two moving pictures are available based on the number of the at least two moving pictures and/or the position of the followed object indicated by the at least two moving pictures comprises: determining whether the number of the at least two moving pictures is greater than a preset number threshold; in response to determining that the number of the at least two moving pictures is greater than a preset number threshold, determining that the at least two moving pictures are available.
In some embodiments, determining whether the at least two moving pictures are available based on the number of the at least two moving pictures and/or the positions of the followed object indicated by them comprises: for each two continuously captured pictures among the at least two moving pictures, determining the distance between the positions of the followed object they respectively indicate as a second distance; determining whether any of the determined second distances is greater than a preset second distance threshold; and in response to determining that none is, determining that the at least two moving pictures are available.
In some embodiments, the sensor information further includes relative direction information of the followed object with respect to the following robot; and after determining whether the at least two moving pictures are available, the method further comprises: in response to determining that the at least two moving pictures are unavailable, predicting the moving direction of the followed object as a second moving direction based on the relative direction information and the road network graph, and re-following the followed object along the second moving direction.
In a second aspect, an embodiment of the present application provides a following device for a following robot, including: a first acquisition unit configured to acquire feature point information of a followed object and to acquire at least one following image captured during a target time period while following the followed object, where the target time period runs from a preset time before the current time to the current time; a determination unit configured to determine whether the followed object is lost based on the feature point information and the at least one following image; a second acquisition unit configured to acquire positioning information for locating the followed object in response to determining that the followed object is lost; and a following unit configured to determine movement information of the followed object based on the positioning information and to re-follow the followed object using the movement information.
In some embodiments, the positioning information comprises sensor information, the sensor information comprising current position information of the followed object and at least one of: a reporting frequency at which the position of the followed object is reported within the target time period, or at least two pieces of position information continuously reported for the followed object. The following unit is further configured to determine the movement information of the followed object based on the positioning information and to re-follow the followed object using the movement information as follows: determining whether the current position information is available based on the reporting frequency and/or the at least two pieces of position information; and in response to determining that the current position information is available, generating a following path for following the followed object based on the current position indicated by the current position information and the current position of the following robot, and re-following the followed object along the following path.
In some embodiments, the following unit is further configured to determine whether the current position information is available based on the reporting frequency and/or the at least two pieces of position information as follows: determining whether the difference between the reporting frequency and a preset reporting frequency is smaller than a preset reporting frequency difference threshold; and in response to determining that the difference is smaller than the threshold, determining that the current position information is available.
In some embodiments, the following unit is further configured to determine whether the current position information is available based on the reporting frequency and/or the at least two pieces of position information as follows: for each two continuously reported pieces among the at least two pieces of position information, determining the distance between the positions they respectively indicate as a first distance; determining whether any of the determined first distances is greater than a preset first distance threshold; and in response to determining that none is, determining that the current position information is available.
In some embodiments, the positioning information further includes at least two moving pictures of the followed object captured continuously within a preset time period before the followed object was lost, and the following unit is further configured to: in response to determining that the current position information is unavailable, acquire a preset road network graph of the area in which the followed object appears in the at least two moving pictures, the road network graph representing the passable roads in that area; determine whether the at least two moving pictures are available based on their number and/or the positions of the followed object they indicate; and in response to determining that the at least two moving pictures are available, predict the moving direction of the followed object as a first moving direction using a Kalman filtering algorithm based on the at least two moving pictures and the road network graph, and re-follow the followed object along the first moving direction.
In some embodiments, the following unit is further configured to determine whether the at least two moving pictures are available based on the number of the at least two moving pictures and/or the position of the followed object indicated by the at least two moving pictures as follows: determining whether the number of the at least two moving pictures is greater than a preset number threshold; in response to determining that the number of the at least two moving pictures is greater than a preset number threshold, determining that the at least two moving pictures are available.
In some embodiments, the following unit is further configured to determine whether the at least two moving pictures are available based on the number of the at least two moving pictures and/or the positions of the followed object indicated by them as follows: for each two continuously captured pictures among the at least two moving pictures, determining the distance between the positions of the followed object they respectively indicate as a second distance; determining whether any of the determined second distances is greater than a preset second distance threshold; and in response to determining that none is, determining that the at least two moving pictures are available.
In some embodiments, the sensor information further includes relative direction information of the followed object with respect to the following robot, and the following unit is further configured to: in response to determining that the at least two moving pictures are unavailable, predict the moving direction of the followed object as a second moving direction based on the relative direction information and the road network graph, and re-follow the followed object along the second moving direction.
In a third aspect, an embodiment of the present application provides a following robot, including: a controller comprising one or more processors; a camera; a moving device; and a storage device storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of any embodiment of the following method for a following robot.
In a fourth aspect, embodiments of the present application provide a computer-readable medium, on which a computer program is stored, which when executed by a processor, implements the method as in any of the embodiments of the following method for following a robot.
According to the following method and device for a following robot provided by embodiments of the application, feature point information of the followed object is first acquired, together with at least one following image captured during the target time period while following the followed object. Whether the followed object is lost is then determined based on the feature point information and the at least one following image. If the followed object is determined to be lost, positioning information for locating it is acquired. Finally, movement information of the followed object is determined based on the positioning information, and the followed object is re-followed using the movement information. This avoids the need for a person to restore following manually after an interruption and keeps the robot's following of the followed object smooth.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of a following method for a following robot according to the present application;
FIG. 3 is a schematic diagram of an application scenario of a following method for a following robot according to the present application;
FIG. 4 is a flow diagram of yet another embodiment of a following method for a following robot according to the present application;
FIG. 5 is a schematic structural diagram of one embodiment of a following device for a following robot according to the present application;
FIG. 6 is a schematic diagram of a computer system suitable for implementing a following robot according to embodiments of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 shows an exemplary system architecture 100 to which the following method for a following robot or the following device for a following robot of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include a following robot 101, a network 102, and a server 103. The network 102 serves as the medium providing a communication link between the following robot 101 and the server 103, and may include various connection types, such as wireless communication links, global positioning systems, or fiber optic cables.
The following robot 101 may interact with the server 103 through the network 102 to receive or send messages (e.g., to obtain from the server 103 a preset road network graph of the area where the followed object is located).
The following robot 101 may be hardware or software. When it is hardware, it may be a robot having a camera and a moving device. When it is software, it may be installed in such a robot and may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module. No specific limitation is made here.
The following robot 101 may recognize the followed object and determine its position so as to re-follow it. For example, the following robot 101 may first acquire feature point information of the followed object and acquire at least one following image captured during the target time period while following the followed object. The following robot 101 may then determine whether the followed object is lost based on the acquired feature point information and the at least one following image. If the followed object is determined to be lost, positioning information for locating it is acquired. Finally, the following robot 101 may determine the movement information of the followed object based on the positioning information and use it to re-follow the followed object.
The server 103 may be a server providing various services, e.g., a server that stores the road network graphs of multiple areas and provides the following robot 101 with the road network graph of the area where the followed object is located.
The server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster formed by multiple servers, or may be implemented as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (e.g., multiple pieces of software or software modules used to provide distributed services), or as a single piece of software or software module. And is not particularly limited herein.
It should be noted that the following method for the following robot provided in the embodiment of the present application is generally performed by the following robot 101, and accordingly, a following device for the following robot is generally provided in the following robot 101.
It should be noted that the following robot 101 may also store the road network graph of the area where the followed object is located and obtain it directly from local storage; in this case, the exemplary system architecture 100 may contain no network 102 and no server 103.
It should be understood that the numbers of following robots, networks, and servers in fig. 1 are merely illustrative. There may be any number of following robots, networks, and servers, as required by the implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a following method for a following robot according to the present application is shown. The following method for a following robot includes the steps of:
Step 201, acquiring feature point information of the followed object, and acquiring at least one following image captured during the target time period while following the followed object.
In this embodiment, the executing body of the following method for a following robot (e.g., the following robot 101 shown in fig. 1) may acquire feature point information of the followed object. The following robot may also be called a visual-recognition following robot: it recognizes the followed object to determine its position and moves so as to follow it, generally avoiding obstacles along the way. Following robots generally provide the following functions: locating the followed object, recognizing and avoiding obstacles, dynamic path planning, and moving. In image processing, a feature point is a point where the image grayscale value changes drastically, or a point of large curvature on an image edge (i.e., the intersection of two edges). Image feature points reflect essential characteristics of the image and can identify the target object in it; matching images can be reduced to matching their feature points. The feature point information describes the feature points of the followed object.
In this embodiment, the executing body may acquire at least one following image captured during the target time period while following the followed object. The at least one following image generally consists of consecutive frames. The target time period is the period from a preset time before the current time up to the current time. As an example, if the current time is 8 o'clock, the at least one following image may be the consecutive frames captured from 7:59:30 to 8:00.
In this embodiment, the executing body may acquire the feature point information of the followed object from another electronic device on which it is stored; if the feature point information is stored locally, the executing body may read it locally. To obtain the feature point information in the first place, the executing body generally needs to recognize the followed object and extract its feature points, which can be done with an existing recognition-and-tracking algorithm, for example KCF (Kernelized Correlation Filter). KCF is a discriminative tracking method: it trains a target detector during tracking, uses the detector to check whether the predicted position in the next frame contains the target object, and then uses the new detection result to update the training set and, in turn, the detector.
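For concreteness, a KCF tracker is available in OpenCV's contrib build; the sketch below shows how the followed object could be recognized and tracked frame by frame. This only illustrates the KCF technique named above and is not the code of this application; depending on the OpenCV version the factory is cv2.TrackerKCF_create() or cv2.legacy.TrackerKCF_create().

    import cv2  # requires an opencv-contrib build

    # Minimal KCF tracking sketch; use cv2.legacy.TrackerKCF_create() on newer builds.
    tracker = cv2.TrackerKCF_create()

    video = cv2.VideoCapture(0)              # the following robot's camera
    ok, frame = video.read()
    bbox = cv2.selectROI("init", frame)      # initial box around the followed object
    tracker.init(frame, bbox)

    while True:
        ok, frame = video.read()
        if not ok:
            break
        found, bbox = tracker.update(frame)  # look for the object at the predicted position
        if found:
            x, y, w, h = map(int, bbox)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("following", frame)
        if cv2.waitKey(1) == 27:             # Esc stops the demo
            break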
Step 202, determining whether the followed object is lost based on the feature point information and the at least one following image.
In this embodiment, based on the feature point information and the at least one following image acquired in step 201, the executing body may determine whether the followed object is lost; losing the followed object can also be regarded as an interruption of the robot's following of it. Specifically, for each image of the at least one following image, the executing body may first determine whether the feature points indicated by the feature point information can be recognized in it. If some following image contains no recognizable feature points, the executing body may count the number of consecutive following images in which the feature points are not recognized, and then determine whether that count is greater than a preset image number threshold; if it is, the executing body may determine that the followed object is lost. As an example, with a preset image number threshold of 10, the followed object may be determined lost when the feature points go unrecognized in more than 10 consecutive following images.
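The loss test above amounts to counting the run of consecutive frames in which the feature points go unmatched. A minimal sketch, assuming a caller-supplied match_features(feature_info, image) predicate (hypothetical, standing in for the feature point matching step):

    def is_followed_object_lost(feature_info, following_images, match_features,
                                image_number_threshold=10):
        """True when the feature points go unrecognized in more than
        `image_number_threshold` consecutive following images."""
        consecutive_misses = 0
        for image in following_images:     # the images are consecutive frames
            if match_features(feature_info, image):
                consecutive_misses = 0     # object recognized again: reset the run
            else:
                consecutive_misses += 1
                if consecutive_misses > image_number_threshold:
                    return True
        return False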
Step 203, in response to determining that the followed object is lost, obtaining positioning information for positioning the followed object.
In this embodiment, if it is determined in step 202 that the followed object is lost, the executing body may acquire positioning information for locating the followed object. As an example, the followed object may carry an electronic device with a positioning function (e.g., a mobile phone or a smart watch), in which case the executing body may acquire the positioning information of the followed object, e.g., its latitude and longitude, from that device.
Step 204, determining the movement information of the followed object based on the positioning information, and re-following the followed object using the movement information.
In this embodiment, the executing body may determine the movement information of the followed object based on the positioning information acquired in step 203. The movement information may include the moving direction, moving trajectory, moving speed, and the like of the followed object. The executing body may acquire the current latitude and longitude of the following robot; it may then use that latitude and longitude together with the latitude and longitude of the followed object to determine the relative direction and relative distance of the followed object with respect to the robot, and thereby the direction in which the robot should move. The executing body may then control the moving device of the following robot (e.g., wheels, crawler tracks, or legs) with a basic motion control algorithm to move in the determined direction and so re-follow the followed object. While re-following, the executing body generally needs to recognize and avoid obstacles, for example via depth camera recognition, ultrasonic ranging, or infrared ranging.
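The relative direction and distance between the two latitude/longitude fixes can be computed with the standard initial-bearing and haversine formulas; a sketch assuming both positions are given as latitude/longitude in degrees:

    import math

    def bearing_deg(lat1, lon1, lat2, lon2):
        """Initial compass bearing (degrees clockwise from north) from the
        following robot at (lat1, lon1) toward the followed object at (lat2, lon2)."""
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dlon = math.radians(lon2 - lon1)
        y = math.sin(dlon) * math.cos(phi2)
        x = (math.cos(phi1) * math.sin(phi2)
             - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
        return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

    def distance_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in meters between the two fixes (haversine)."""
        r = 6371000.0  # mean Earth radius in meters
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlon = math.radians(lon2 - lon1)
        a = (math.sin(dphi / 2) ** 2
             + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2)
        return 2 * r * math.asin(math.sqrt(a))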
With continued reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of the following method of this embodiment. In the scenario of fig. 3, the following robot 301 may first acquire the feature point information 303 of the followed object 302, together with at least one following image 304 captured while the robot followed the object during the period from 20 seconds ago to the current time. The following robot 301 may then determine, in step 305, whether the number of consecutive following images in the at least one following image 304 in which the feature points indicated by the feature point information 303 are not recognized exceeds a preset image number threshold, and thus whether the followed object 302 is lost. As an example, if the feature points go unrecognized in 8 consecutive images and the preset image number threshold is 5, then 8 is greater than 5 and the followed object 302 can be determined to be lost. Having determined that the followed object 302 is lost, the following robot 301 may acquire positioning information 306 for locating it; for example, the positioning information 306 may include the latitude and longitude of the current position of the followed object 302. The following robot 301 may then acquire its own current latitude and longitude and use the two to determine that the followed object 302 lies to the northeast at a relative distance of 8 meters, yielding movement information 307 in which the robot's moving direction is northeast. Finally, the following robot 301 may control its legs with a basic motion control algorithm to move northeast and so re-follow the followed object 302.
The method provided by the above embodiment of the present application acquires the current positioning information of the followed object once the object is determined to be lost, and uses it to determine the object's movement information and re-follow it. This removes the need for a person to restore following manually when following is interrupted, and keeps the robot's following of the followed object smooth.
With further reference to fig. 4, a flow 400 of yet another embodiment of a following method for a following robot is shown. The flow 400 of the following method for a following robot includes the steps of:
Step 401, acquiring feature point information of the followed object, and acquiring at least one following image captured during the target time period while following the followed object.
Step 402, determining whether the followed object is lost based on the feature point information and the at least one following image.
In this embodiment, the operations in step 401 to step 402 are substantially the same as the operations in step 201 to step 202, and are not described herein again.
Step 403, in response to determining that the followed object is lost, acquiring positioning information for locating the followed object.
In this embodiment, if it is determined in step 402 that the followed object is lost, the executing body may acquire positioning information for locating the followed object.
In this embodiment, the positioning information may include sensor information. In this case the followed object wears a positioning sensor, and the worn sensor exchanges information with a base station on the executing body to produce the sensor information, for example the current position information of the followed object. The sensor information may include the current position information of the followed object and may further include at least one of: a reporting frequency at which the position of the followed object is reported within the target time period, or at least two pieces of position information continuously reported for the followed object. The reporting frequency generally refers to the number of times the worn positioning sensor reports information per unit time (e.g., 1 second, 3 seconds, etc.). The position information generally refers to the position of the followed object relative to the following robot and may include the distance between the followed object and the robot and the relative direction of the followed object with respect to the robot.
In this embodiment, the positioning information may further include at least two moving pictures of the followed object captured continuously within a preset time period (e.g., 5 seconds, 10 seconds, etc.) before the followed object is lost.
In this embodiment, the positioning information may further include relative direction information of the followed object with respect to the following robot. For example, the followed object may be located to the northeast of the following robot; or, taking the direction the following robot's face points as the front, the followed object may be located ahead of the robot and to its left.
Step 404, determining whether the current position information is available based on the reporting frequency and/or the at least two pieces of position information.
In this embodiment, the executing body may determine whether the current position information is available based on the reporting frequency alone, based on the at least two pieces of position information alone, or based on both.
In some optional implementations of this embodiment, the executing body may determine whether the difference between the reporting frequency and a preset reporting frequency is smaller than a preset reporting frequency difference threshold; if it is, the executing body may determine that the current position information is available. The preset reporting frequency and the difference threshold are usually set manually and serve to detect whether the reporting frequency of the positioning sensor worn by the followed object deviates too far from the preset value; too large a deviation suggests the sensor information is inaccurate. As an example, with a preset reporting frequency of 5 times per second and a difference threshold of 3 times per second, an observed reporting frequency of 4 times per second gives a difference of 1 time per second, which is below the threshold, so the current position information can be determined to be available.
In some optional implementations of this embodiment, since the positioning sensor worn by the followed object generally reports position information to the following robot's base station at a predefined frequency, the sensor information may be considered inaccurate if the distance between the positions indicated by two consecutively reported pieces of position information is too large. For each two consecutively reported pieces among the at least two pieces of position information, the executing body may determine the distance between the positions they respectively indicate as a first distance; it may then determine whether any of the determined first distances exceeds a preset first distance threshold (e.g., 2 meters); finally, if none does, the executing body may determine that the current position information is available. As an example, suppose 5 pieces of position information are continuously reported for the followed object, and the first distances between consecutive reports are 1.2 meters (first to second), 0.9 meters (second to third), 1.6 meters (third to fourth), and 1.5 meters (fourth to fifth). With a preset first distance threshold of 2 meters, none of the 4 first distances exceeds 2 meters, so the executing body may determine that the current position information is available.
In this embodiment, the executing body may also combine the checks. It may determine whether the difference between the reporting frequency and the preset reporting frequency is smaller than the preset reporting frequency difference threshold; if so, for each two consecutively reported pieces among the at least two pieces of position information, it may determine the distance between the positions they respectively indicate as a third distance. It may then determine whether any of the determined third distances exceeds a preset third distance threshold and, if some do, count them. If the ratio of that count to the total number of third distances is smaller than a preset first ratio threshold, the executing body may determine that the current position information is available.
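Taken together, the availability test on the sensor information looks roughly as follows. This sketches the combined variant just described (frequency check first, then tolerating a small fraction of over-threshold jumps); the frequency and distance values are the example figures from the preceding paragraphs, and the ratio threshold is an assumed placeholder.

    import math

    def reporting_frequency_ok(observed_freq, preset_freq=5.0, max_diff=3.0):
        """Frequency check: the reporting frequency difference must stay
        below the preset threshold (example values: 5/s preset, 3/s diff)."""
        return abs(observed_freq - preset_freq) < max_diff

    def current_position_available(observed_freq, positions,
                                   distance_threshold=2.0,
                                   ratio_threshold=0.2):  # assumed placeholder
        """Combined availability check on the sensor information;
        `positions` holds the continuously reported positions as (x, y) in meters."""
        if not reporting_frequency_ok(observed_freq):
            return False
        # Distances between consecutively reported positions (third distances).
        dists = [math.dist(a, b) for a, b in zip(positions, positions[1:])]
        if not dists:
            return False
        too_far = sum(d > distance_threshold for d in dists)
        return too_far / len(dists) < ratio_threshold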
Step 405, in response to determining that the current position information is available, generating a following path for following the followed object based on the current position indicated by the current position information of the followed object and the current position of the following robot, and re-following the followed object according to the following path.
In this embodiment, in response to determining in step 404 that the current position information is available, the executing body may generate a following path for following the followed object based on the current position indicated by the current position information of the followed object and the current position of the following robot, and then re-follow the followed object along that path. The generated following path may be the shortest following path or a straight-line following path. The executing body may control the moving device of the following robot with a basic motion control algorithm so that it moves along the following path and thereby re-follows the followed object. While re-following, the executing body generally needs to recognize and avoid obstacles, for example via depth camera recognition, ultrasonic ranging, or infrared ranging.
In this embodiment, the executing body may first determine its own (the following robot's) current position. Since the current position indicated by the current position information of the followed object is generally a position relative to the following robot, the current position information may include relative distance information and relative direction information with respect to the robot; the executing body can therefore determine the current position of the followed object from the robot's current position together with the relative distance and relative direction.
Step 406, in response to determining that the current position information is unavailable, acquiring a preset road network graph of the area where the followed object is located in the at least two moving pictures.
In this embodiment, in response to determining in step 404 that the current position information is unavailable, the executing body may acquire a preset road network graph of the area in which the followed object appears in the at least two moving pictures. The road network graph represents the passable roads in that area. The executing body may recognize the area where the followed object is located from a moving picture and then acquire the road network graph of that area.
In this embodiment, the road network graph may be built as follows: a robot capable of reporting its motion trajectory feeds back the trajectory it has traveled, and that trajectory is then approximated to a regular road network. In some cases, the information in a building construction drawing can be converted directly to obtain the path information of an area, completing the road network graph.
Step 407, determining whether at least two moving pictures are available based on the number of the at least two moving pictures and/or the position of the followed object indicated by the at least two moving pictures.
In this embodiment, the executing body may determine whether the at least two moving pictures are available based on their number alone, based on the positions of the followed object they indicate alone, or based on both.
In some optional implementations of this embodiment, the executing body may determine whether the number of the at least two moving pictures is greater than a preset number threshold (e.g., 5); if it is, the executing body may determine that the at least two moving pictures are available.
In some optional implementations of this embodiment, since the followed object is usually photographed at a preset shooting frequency, the moving pictures may be considered unavailable if the distance between the positions of the followed object indicated by two consecutively captured pictures is too large. For each two consecutively captured pictures among the at least two moving pictures, the executing body may determine the distance between the positions of the followed object they respectively indicate as a second distance; it may then determine whether any of the determined second distances exceeds a preset second distance threshold (e.g., 3 meters); finally, if none does, the executing body may determine that the at least two moving pictures are available. As an example, suppose 5 moving pictures of the followed object are captured continuously within the 5 seconds before it is lost, and the second distances between consecutive pictures are 1.9 meters (first to second), 2.3 meters (second to third), 1.6 meters (third to fourth), and 2.6 meters (fourth to fifth). With a preset second distance threshold of 3 meters, none of the 4 second distances exceeds 3 meters, so the executing body may determine that the at least two moving pictures are available.
In this embodiment, the executing body may also combine the checks. It may determine whether the number of the at least two moving pictures is greater than the preset number threshold; if so, for each two consecutively captured pictures among them, it may determine the distance between the positions of the followed object they respectively indicate as a fourth distance. It may then determine whether any of the determined fourth distances exceeds a preset fourth distance threshold and, if some do, count them. If the ratio of that count to the total number of fourth distances is smaller than a preset second ratio threshold, the executing body may determine that the at least two moving pictures are available.
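The availability test for the moving pictures mirrors the one for the sensor information. A minimal sketch of the count check plus the strict second-distance check from the optional implementations above, with the example thresholds (5 pictures, 3 meters) as assumed defaults:

    import math

    def moving_pictures_available(picture_positions, number_threshold=5,
                                  second_distance_threshold=3.0):
        """`picture_positions` holds the followed-object positions indicated
        by the consecutively captured moving pictures, as (x, y) in meters."""
        # Count check: enough moving pictures were captured.
        if len(picture_positions) <= number_threshold:
            return False
        # Distance check: no jump between consecutive pictures over the threshold.
        for a, b in zip(picture_positions, picture_positions[1:]):
            if math.dist(a, b) > second_distance_threshold:
                return False
        return True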
Step 408, in response to determining that the at least two moving pictures are available, predicting the moving direction of the followed object as a first moving direction using a Kalman filtering algorithm based on the at least two moving pictures and the road network graph, and re-following the followed object along the first moving direction.
In this embodiment, if it is determined in step 407 that the at least two moving pictures are available, the executing body may predict the moving direction of the followed object as a first moving direction using a Kalman filtering algorithm based on the at least two moving pictures and the road network graph, and re-follow the followed object along the first moving direction. The Kalman filtering algorithm uses a linear system state equation and observed input/output data to produce an optimal estimate of the system state; because the observed data include the effects of noise and interference, the optimal estimation can also be seen as a filtering process.
In this embodiment, the executing body may first determine the moving trend of the followed object by using the at least two moving pictures. The executing body may then locate, in the road network map, the position of the followed object shown in the at least two moving pictures before the loss, so as to determine the directions in which the followed object could have moved after the loss. Based on the moving trend, the executing body may then use a Kalman filtering algorithm to predict, from among these movable directions, the moving direction of the followed object as the first moving direction. Finally, the executing body may control the moving device of the following robot with a basic motion control algorithm so that it moves in the first moving direction and re-follows the followed object. In the process of re-following the followed object, the executing body generally needs to identify and avoid obstacles. As an example, obstacles may be recognized and avoided using methods such as depth camera recognition, ultrasonic ranging, and infrared ranging.
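A minimal sketch of this prediction step is given below, assuming a constant-velocity motion model, hand-picked noise covariances, and headings in radians; these choices, like the function names, are illustrative assumptions rather than the estimator prescribed by the disclosure, which only requires a Kalman filtering algorithm over the observed positions together with the movable directions taken from the road network map.

    import numpy as np

    def predict_first_direction(positions, movable_headings, dt=1.0):
        """Filter the last observed 2-D positions with a constant-velocity
        Kalman filter, then snap the predicted heading onto the nearest
        direction allowed by the road network map."""
        F = np.array([[1, 0, dt, 0],   # state is [x, y, vx, vy]
                      [0, 1, 0, dt],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]], dtype=float)
        H = np.array([[1, 0, 0, 0],    # only the position is observed
                      [0, 1, 0, 0]], dtype=float)
        Q = np.eye(4) * 0.01           # process noise (assumed)
        R = np.eye(2) * 0.25           # measurement noise (assumed)
        x = np.array([positions[0][0], positions[0][1], 0.0, 0.0])
        P = np.eye(4)
        for z in positions[1:]:
            x, P = F @ x, F @ P @ F.T + Q              # predict
            y = np.asarray(z, dtype=float) - H @ x     # innovation
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
            x, P = x + K @ y, (np.eye(4) - K @ H) @ P  # update
        heading = np.arctan2(x[3], x[2])  # heading of the estimated velocity
        wrap = lambda a: np.angle(np.exp(1j * a))      # wrap to (-pi, pi]
        return min(movable_headings, key=lambda h: abs(wrap(h - heading)))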
Step 409: in response to determining that the at least two moving pictures are not available, predict the moving direction of the followed object as a second moving direction based on the relative direction information and the road network map, and re-follow the followed object according to the second moving direction.
In this embodiment, if it is determined in step 407 that the at least two moving pictures are not available, the executing body may predict the moving direction of the followed object as a second moving direction based on the relative direction information and the road network map, and re-follow the followed object according to the second moving direction.
In this embodiment, the executing body may use the road network map to predict, as the second moving direction, the direction in which the following robot should move so as to head toward the relative direction indicated by the relative direction information, and may then control the moving device of the following robot with a basic motion control algorithm to move in the second moving direction and re-follow the followed object.
For example, if the followed object is located to the northeast of the following robot, the executing body may use the road network map to predict that the second moving direction is to first move toward the northeast and then drive east at the first corner.
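A toy sketch of this fallback follows; the compass-degree encoding, the function name, and the first-listed tie-breaking rule are assumptions introduced only for illustration.

    def second_moving_direction(relative_bearing, passable_bearings):
        """Pick the passable road direction (from the road network map)
        closest to the relative direction in which the followed object
        was last reported. Bearings are in degrees, east = 0."""
        gap = lambda a, b: abs((a - b + 180.0) % 360.0 - 180.0)
        return min(passable_bearings, key=lambda b: gap(b, relative_bearing))

    # The object was to the northeast (45 deg), but only east (0 deg) and
    # north (90 deg) roads are passable here; both are equally close, and
    # east wins because it is listed first.
    print(second_moving_direction(45.0, [0.0, 90.0]))  # 0.0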
As can be seen from fig. 4, compared with the embodiment corresponding to fig. 2, the flow 400 of the following method for a following robot in this embodiment adds the steps of determining whether the sensor information and the at least two moving pictures taken before the followed object was lost are available, and of determining the movement information of the followed object from the sensor information or from the at least two moving pictures accordingly, so as to re-follow the followed object. The solution described in this embodiment may thus first determine whether the current position information of the followed object is available and, if so, determine the movement information of the followed object from the current position information. Otherwise, if the at least two moving pictures are available, the movement information is determined from the at least two moving pictures; and if the at least two moving pictures are not available, the movement information is determined from the relative direction information. The followed object can therefore be accurately re-followed.
With further reference to fig. 5, as an implementation of the method shown in the above figures, the present application provides an embodiment of a following apparatus for a following robot, which corresponds to the method embodiment shown in fig. 2 and which is particularly applicable to various electronic devices.
As shown in fig. 5, the following apparatus 500 for a following robot of the present embodiment includes: a first acquisition unit 501, a determination unit 502, a second acquisition unit 503, and a following unit 504. The first acquisition unit 501 is configured to acquire feature point information of a followed object, and acquire at least one following image captured during a target period in the process of following the followed object, wherein the target period is a period formed from a preset time before a current time to the current time; the determination unit 502 is configured to determine whether the followed object is lost based on the feature point information and the at least one following image; the second acquisition unit 503 is configured to acquire positioning information for positioning the followed object in response to determining that the followed object is lost; the following unit 504 is configured to determine movement information of the followed object based on the positioning information, and to re-follow the followed object using the movement information.
In the present embodiment, for the specific processing of the first acquisition unit 501, the determination unit 502, the second acquisition unit 503, and the following unit 504 of the following apparatus 500 for a following robot, reference may be made to step 201, step 202, step 203, and step 204 in the embodiment corresponding to fig. 2.
In some optional implementations of this embodiment, the positioning information may include sensor information. In this case, the followed object wears a positioning sensor, and the worn positioning sensor exchanges information with a base station of the following robot to obtain the sensor information, for example, the current position information of the followed object. The sensor information may include the current position information of the followed object and may further include at least one of: a reporting frequency at which the position of the followed object is reported during the target period, and at least two pieces of position information continuously reported for the followed object. The reporting frequency generally refers to the number of times the worn positioning sensor reports information per unit time. The position information generally refers to the position of the followed object relative to the following robot and may include the distance between the followed object and the following robot and the relative direction of the followed object with respect to the following robot.
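For illustration, the fields described above might be organized as follows; the class and field names are hypothetical and not part of the disclosure.

    from dataclasses import dataclass
    from typing import List, Optional, Tuple

    @dataclass
    class SensorInfo:
        """Sensor information obtained through the interaction between the
        worn positioning sensor and the robot's base station."""
        current_position: Tuple[float, float]        # relative to the robot
        reporting_frequency: Optional[float] = None  # reports per unit time
        recent_positions: Optional[List[Tuple[float, float]]] = None
        relative_bearing: Optional[float] = None     # degrees from the robot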
In some optional implementations of the embodiment, the following unit 504 may be further configured to determine the movement information of the followed object based on the positioning information and re-follow the followed object using the movement information as follows: the following unit 504 may first determine whether the current position information is available, based on the reporting frequency, based on the at least two pieces of position information, or based on both. Then, if it is determined that the current position information is available, the following unit 504 may generate a following path for following the followed object based on the current position indicated by the current position information of the followed object and the current position of the following robot, and re-follow the followed object according to the following path. The generated following path may be the shortest following path or a straight following path. The following unit 504 may control the moving device of the following robot with a basic motion control algorithm so that it moves along the following path to re-follow the followed object. In the process of re-following the followed object, obstacles generally need to be identified and avoided. As an example, obstacles may be recognized and avoided using methods such as depth camera recognition, ultrasonic ranging, and infrared ranging.
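As a sketch of the simplest variant mentioned above, a straight following path can be generated as a sequence of evenly spaced waypoints. The function name, the waypoint spacing, and the free-space assumption are illustrative only; a real implementation would also route around obstacles detected by the depth camera or the ranging sensors.

    import math

    def straight_following_path(robot_pos, target_pos, step=0.5):
        """Evenly spaced waypoints on the straight line from the robot's
        current position to the followed object's current position."""
        n = max(1, int(math.dist(robot_pos, target_pos) // step))
        return [(robot_pos[0] + (target_pos[0] - robot_pos[0]) * i / n,
                 robot_pos[1] + (target_pos[1] - robot_pos[1]) * i / n)
                for i in range(1, n + 1)]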
In some optional implementations of the embodiment, the following unit 504 may be further configured to determine whether the current position information is available based on the reporting frequency and/or the at least two pieces of position information as follows: the following unit 504 may determine whether the difference between the reporting frequency and a preset reporting frequency is smaller than a preset reporting frequency difference threshold. If it is determined that the difference is smaller than the preset reporting frequency difference threshold, the following unit 504 may determine that the current position information is available. The preset reporting frequency and the preset reporting frequency difference threshold are usually set manually and serve to detect whether the reporting frequency of the positioning sensor worn by the followed object deviates excessively from the preset reporting frequency; if the deviation is excessive, the sensor information may be considered inaccurate.
In some optional implementations of the embodiment, the following unit 504 may also be configured to determine whether the current position information is available based on the reporting frequency and/or the at least two pieces of position information as follows: for two continuously reported pieces of position information among the at least two pieces of position information, the following unit 504 may determine the distance between the positions respectively indicated by the two pieces as a first distance; then, the following unit 504 may determine whether any of the determined at least one first distance is greater than a preset first distance threshold (e.g., 2 meters); finally, if no first distance greater than the preset first distance threshold exists in the at least one first distance, the following unit 504 may determine that the current position information is available.
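These two sensor-side checks can be sketched as follows, again with hypothetical names and example thresholds (the 10 Hz preset reporting frequency is an assumption, not a value given in this disclosure).

    import math

    def frequency_available(reporting_frequency, preset_frequency=10.0,
                            max_difference=2.0):
        """Reporting-frequency check: the observed rate must not deviate
        from the preset rate by more than the difference threshold."""
        return abs(reporting_frequency - preset_frequency) < max_difference

    def positions_available(recent_positions, first_distance_threshold=2.0):
        """First-distance check: no jump between consecutively reported
        positions may exceed the threshold."""
        return all(math.dist(a, b) <= first_distance_threshold
                   for a, b in zip(recent_positions, recent_positions[1:]))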
In some optional implementations of the embodiment, in response to determining that the current position information is not available, the following unit 504 may obtain a preset road network map of the area in which the followed object is located in the at least two moving pictures. The road network map can be used to represent the passable roads in that area. The following unit 504 may recognize the area where the followed object is located from a moving picture and then acquire the road network map of that area. The following unit 504 may then determine whether the at least two moving pictures are available based on their number, based on the positions of the followed object indicated by them, or based on both. If it is determined that the at least two moving pictures are available, the following unit 504 may predict the moving direction of the followed object as a first moving direction by using a Kalman filtering algorithm based on the at least two moving pictures and the road network map, and re-follow the followed object according to the first moving direction. The Kalman filtering algorithm uses a linear system state equation and the system's input and output observation data to optimally estimate the system state. Because the observation data include the effects of noise and interference in the system, the optimal estimation can also be seen as a filtering process.
In some optional implementations of the embodiment, the following unit 504 may be further configured to determine whether the at least two moving pictures are available based on the number of the at least two moving pictures and/or the position of the followed object indicated by the at least two moving pictures as follows: the following unit 504 may determine whether the number of the at least two moving pictures is greater than a preset number threshold (e.g., 5). If it is determined that the number of the at least two moving pictures is greater than the preset number threshold, the following unit 504 may determine that the at least two moving pictures are available.
In some optional implementations of the embodiment, the following unit 504 may be further configured to determine whether the at least two moving pictures are available based on the number of the at least two moving pictures and/or the position of the followed object indicated by the at least two moving pictures as follows: for two moving pictures consecutively photographed among the at least two moving pictures, the following unit 504 may determine a distance between positions of the followed object respectively indicated by the two consecutively photographed moving pictures as a second distance; then, the following unit 504 may determine whether there is a second distance greater than a preset second distance threshold (e.g., 3 meters) in the determined at least one second distance; finally, if it is determined that there is no second distance greater than the preset second distance threshold in the at least one second distance, the following unit 504 may determine that the at least two moving pictures are available.
In some optional implementations of this embodiment, if it is determined that the at least two moving pictures are not available, the following unit 504 may predict a moving direction of the followed object as a second moving direction based on the relative direction information and the road network map, and re-follow the followed object according to the second moving direction.
Referring now to FIG. 6, a block diagram of a computer system 600 suitable for use in a following robot implementing embodiments of the present application is shown. The following robot shown in fig. 6 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
As shown in fig. 6, the following robot 600 includes a Central Processing Unit (CPU) 601, a memory 602, an input unit 603, and a moving unit 604, where the CPU 601, the memory 602, the input unit 603, and the moving unit 604 are connected to one another through a bus 605. Here, the method according to the present application may be implemented as a computer program and stored in the memory 602. The CPU 601 in the following robot 600 implements the following function defined in the method of the present application by calling this computer program stored in the memory 602. In some implementations, the input unit 603 may be a camera for acquiring at least one following image, and the moving unit 604 may be wheels, tracks, legs, or the like usable for movement. Thus, when the CPU 601 calls the computer program to execute the following function, it can control the input unit 603 to acquire at least one following image from the outside and control the moving unit 604 to move, so as to re-follow the followed object.
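Schematically, the program the CPU 601 executes from the memory 602 is a loop that reads from the input unit and drives the moving unit. The sketch below is purely illustrative: the unit interfaces (capture, apply) and the planning callback are assumptions, not an API defined by this application.

    import time

    def follow_loop(input_unit, moving_unit, plan_motion, period_s=0.1):
        """Read a following image from the input unit (camera), compute a
        motion command with the supplied planning callback, and drive the
        moving unit (wheels, tracks, or legs) at a fixed control period."""
        while True:
            image = input_unit.capture()   # acquire a following image
            command = plan_motion(image)   # tracking / re-following logic
            moving_unit.apply(command)     # move toward the followed object
            time.sleep(period_s)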
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present invention may be implemented by software or hardware. The described units may also be provided in a processor, which may, for example, be described as: a processor including a first acquisition unit, a determination unit, a second acquisition unit, and a following unit. The names of these units do not, in some cases, constitute a limitation on the units themselves. For example, the determination unit may also be described as "a unit that determines whether the followed object is lost based on the feature point information and the at least one following image".
As another aspect, the present application also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments or may exist separately without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: obtain feature point information of a followed object, and obtain at least one following image shot during a target period in the process of following the followed object, where the target period is a period formed from a preset time before the current time to the current time; determine whether the followed object is lost based on the feature point information and the at least one following image; in response to determining that the followed object is lost, obtain positioning information for positioning the followed object; and determine movement information of the followed object based on the positioning information and re-follow the followed object using the movement information.
The foregoing description covers only the preferred embodiments of the invention and illustrates the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention is not limited to the specific combination of the above-mentioned features, but also encompasses other embodiments formed by any combination of the above-mentioned features or their equivalents without departing from the scope of the invention as defined by the appended claims, for example, technical solutions formed by replacing the above features with (but not limited to) features having similar functions disclosed in the present invention.

Claims (18)

1. A following method for a following robot, comprising:
obtaining feature point information of a followed object, and obtaining at least one following image shot during a target time period in the process of following the followed object, wherein the target time period is a time period formed from a preset time before the current time to the current time;
determining whether the followed object is lost based on the feature point information and the at least one following image;
in response to determining that the followed object is lost, obtaining positioning information for positioning the followed object;
based on the positioning information, determining movement information of the followed object, and re-following the followed object by using the movement information.
2. The method of claim 1, wherein the positioning information comprises sensor information, the sensor information including current position information of the followed object and at least one of: a reporting frequency at which the position of the followed object is reported in the target time period, and at least two pieces of position information continuously reported for the followed object; and
the determining movement information of the followed object based on the positioning information and re-following the followed object by using the movement information includes:
determining whether the current position information is available based on the reporting frequency and/or the at least two pieces of position information;
in response to determining that the current position information is available, generating a following path for following the followed object based on the current position indicated by the current position information of the followed object and the current position of the following robot, and re-following the followed object according to the following path.
3. The method of claim 2, wherein the determining whether the current position information is available based on the reporting frequency and/or the at least two pieces of position information comprises:
determining whether a difference between the reporting frequency and a preset reporting frequency is smaller than a preset reporting frequency difference threshold; and
determining that the current position information is available in response to determining that the difference is smaller than the preset reporting frequency difference threshold.
4. The method of claim 2, wherein the determining whether the current position information is available based on the reporting frequency and/or the at least two pieces of position information comprises:
determining, for two continuously reported pieces of position information among the at least two pieces of position information, a distance between positions respectively indicated by the two pieces of position information as a first distance;
determining whether a first distance greater than a preset first distance threshold exists in the determined at least one first distance; and
determining that the current position information is available in response to determining that no first distance greater than the preset first distance threshold exists in the at least one first distance.
5. The method according to claim 2, wherein the positioning information further comprises at least two moving pictures of the followed object captured continuously within a preset time period before the followed object is lost; and
after the determining whether the current position information is available based on the reporting frequency and/or the at least two pieces of position information, the method further comprises:
in response to determining that the current position information is unavailable, acquiring a preset road network map of an area where the followed object is located in the at least two moving pictures, wherein the road network map is used for representing a passable road in the area;
determining whether the at least two moving pictures are available based on the number of the at least two moving pictures and/or the position of the followed object indicated by the at least two moving pictures;
and in response to determining that the at least two moving pictures are available, predicting a moving direction of the followed object as a first moving direction by using a Kalman filtering algorithm based on the at least two moving pictures and the road network map, and re-following the followed object according to the first moving direction.
6. The method of claim 5, wherein the determining whether the at least two moving pictures are available based on the number of the at least two moving pictures and/or the location of the followed object indicated by the at least two moving pictures comprises:
determining whether the number of the at least two moving pictures is greater than a preset number threshold;
determining that the at least two moving pictures are available in response to determining that the number of the at least two moving pictures is greater than a preset number threshold.
7. The method of claim 5, wherein the determining whether the at least two moving pictures are available based on the number of the at least two moving pictures and/or the location of the followed object indicated by the at least two moving pictures comprises:
determining, for two continuously shot moving pictures among the at least two moving pictures, a distance between positions of the followed object respectively indicated by the two continuously shot moving pictures as a second distance;
determining whether a second distance greater than a preset second distance threshold exists in the determined at least one second distance;
determining that the at least two moving pictures are available in response to determining that there is no second distance of the at least one second distance that is greater than a preset second distance threshold.
8. The method of one of claims 5-7, wherein the sensor information further includes relative direction information of the followed object with respect to the following robot; and
after the determining whether the at least two moving pictures are available based on the number of the at least two moving pictures and/or the position of the followed object indicated by the at least two moving pictures, the method further comprises:
and in response to determining that the at least two moving pictures are not available, predicting a moving direction of the followed object as a second moving direction based on the relative direction information and the road network map, and re-following the followed object according to the second moving direction.
9. A following apparatus for a following robot, comprising:
a first acquisition unit configured to acquire feature point information of a followed object, and acquire at least one following image captured during a target period in a process of following the followed object, wherein the target period is a period of time formed from a preset time before a current time to the current time;
a determination unit configured to determine whether the followed object is lost based on the feature point information and the at least one following image;
a second acquisition unit configured to acquire positioning information for positioning the followed object in response to determining that the followed object is lost;
a following unit configured to determine movement information of the followed object based on the positioning information, and to re-follow the followed object using the movement information.
10. The apparatus of claim 9, wherein the positioning information comprises sensor information, the sensor information including current position information of the followed object and at least one of: a reporting frequency at which the position of the followed object is reported in the target period, and at least two pieces of position information continuously reported for the followed object; and
the following unit is further configured to determine movement information of the followed object based on the positioning information, and to re-follow the followed object using the movement information, as follows:
determining whether the current position information is available based on the reporting frequency and/or the at least two pieces of position information;
in response to determining that the current position information is available, generating a following path for following the followed object based on the current position indicated by the current position information of the followed object and the current position of the following robot, and re-following the followed object according to the following path.
11. The apparatus of claim 10, wherein the following unit is further configured to determine whether the current position information is available based on the reporting frequency and/or the at least two pieces of position information as follows:
determining whether a difference between the reporting frequency and a preset reporting frequency is smaller than a preset reporting frequency difference threshold; and
determining that the current position information is available in response to determining that the difference is smaller than the preset reporting frequency difference threshold.
12. The apparatus of claim 10, wherein the following unit is further configured to determine whether the current position information is available based on the reporting frequency and/or the at least two pieces of position information as follows:
determining, for two continuously reported pieces of position information among the at least two pieces of position information, a distance between positions respectively indicated by the two pieces of position information as a first distance;
determining whether a first distance greater than a preset first distance threshold exists in the determined at least one first distance; and
determining that the current position information is available in response to determining that no first distance greater than the preset first distance threshold exists in the at least one first distance.
13. The apparatus according to claim 10, wherein the positioning information further includes at least two moving pictures of the followed object captured continuously within a preset time period before the followed object is lost; and
the follower unit is further configured to:
in response to determining that the current position information is unavailable, acquiring a preset road network map of an area where the followed object is located in the at least two moving pictures, wherein the road network map is used for representing a passable road in the area;
determining whether the at least two moving pictures are available based on the number of the at least two moving pictures and/or the position of the followed object indicated by the at least two moving pictures;
and in response to determining that the at least two moving pictures are available, predicting a moving direction of the followed object as a first moving direction by using a Kalman filtering algorithm based on the at least two moving pictures and the road network map, and re-following the followed object according to the first moving direction.
14. The apparatus according to claim 13, wherein the following unit is further configured to determine whether the at least two moving pictures are available based on the number of the at least two moving pictures and/or the position of the followed object indicated by the at least two moving pictures as follows:
determining whether the number of the at least two moving pictures is greater than a preset number threshold;
determining that the at least two moving pictures are available in response to determining that the number of the at least two moving pictures is greater than a preset number threshold.
15. The apparatus according to claim 13, wherein the following unit is further configured to determine whether the at least two moving pictures are available based on the number of the at least two moving pictures and/or the position of the followed object indicated by the at least two moving pictures as follows:
determining, for two continuously shot moving pictures among the at least two moving pictures, a distance between positions of the followed object respectively indicated by the two continuously shot moving pictures as a second distance;
determining whether a second distance greater than a preset second distance threshold exists in the determined at least one second distance;
determining that the at least two moving pictures are available in response to determining that there is no second distance of the at least one second distance that is greater than a preset second distance threshold.
16. The apparatus of one of claims 13-15, wherein the sensor information further comprises relative direction information of the followed object with respect to the following robot; and
the follower unit is further configured to:
and in response to determining that the at least two moving pictures are not available, predicting a moving direction of the followed object as a second moving direction based on the relative direction information and the road network map, and re-following the followed object according to the second moving direction.
17. A following robot, comprising:
a controller comprising one or more processors;
a camera;
a mobile device;
a storage device having one or more programs stored thereon, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-8.
18. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-8.
CN201811512154.3A 2018-12-11 2018-12-11 Following method and device for following robot Active CN111381587B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811512154.3A CN111381587B (en) 2018-12-11 2018-12-11 Following method and device for following robot

Publications (2)

Publication Number Publication Date
CN111381587A true CN111381587A (en) 2020-07-07
CN111381587B CN111381587B (en) 2023-11-03

Family

ID=71216219

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811512154.3A Active CN111381587B (en) 2018-12-11 2018-12-11 Following method and device for following robot

Country Status (1)

Country Link
CN (1) CN111381587B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130342652A1 (en) * 2012-06-22 2013-12-26 Microsoft Corporation Tracking and following people with a mobile robotic device
CN105353395A (en) * 2015-09-24 2016-02-24 广州视源电子科技股份有限公司 Method and device of regulation of report frequency of position information
CN107042829A (en) * 2016-02-05 2017-08-15 上海汽车集团股份有限公司 Fleet follows monitoring method, apparatus and system
CN107073711A (en) * 2015-09-08 2017-08-18 深圳市赛亿科技开发有限公司 A kind of robot follower method
CN107608345A (en) * 2017-08-26 2018-01-19 深圳力子机器人有限公司 A kind of robot and its follower method and system
CN108549088A (en) * 2018-04-27 2018-09-18 科沃斯商用机器人有限公司 Localization method, equipment, system based on robot and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIU Yuhang et al.: "Implementation of human-computer interaction based on real-time gesture recognition and tracking", Science Technology and Engineering *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113959432A (en) * 2021-10-20 2022-01-21 上海擎朗智能科技有限公司 Method and device for determining following path of mobile equipment and storage medium
CN113959432B (en) * 2021-10-20 2024-05-17 上海擎朗智能科技有限公司 Method, device and storage medium for determining following path of mobile equipment

Also Published As

Publication number Publication date
CN111381587B (en) 2023-11-03

Similar Documents

Publication Publication Date Title
US10748061B2 (en) Simultaneous localization and mapping with reinforcement learning
US9846043B2 (en) Map creation apparatus, map creation method, and computer-readable recording medium
EP2858008B1 (en) Target detecting method and system
CN107123142B (en) Pose estimation method and device
EP3605390A1 (en) Information processing method, information processing apparatus, and program
US10950125B2 (en) Calibration for wireless localization and detection of vulnerable road users
US20140348380A1 (en) Method and appratus for tracking objects
US20170039727A1 (en) Methods and Systems for Detecting Moving Objects in a Sequence of Image Frames Produced by Sensors with Inconsistent Gain, Offset, and Dead Pixels
CN110717918B (en) Pedestrian detection method and device
CN103679742B (en) Method for tracing object and device
JP6185968B2 (en) Information processing system, portable terminal, server device, information processing method, and program
EP2594899A2 (en) Using structured light to update inertial navigation systems
CN109655786B (en) Mobile ad hoc network cooperation relative positioning method and device
Cai et al. Robust hybrid approach of vision-based tracking and radio-based identification and localization for 3D tracking of multiple construction workers
EP3690849A1 (en) Method and device for detecting emergency vehicles in real time and planning driving routes to cope with situations to be expected to be occurred by the emergency vehicles
CN111353453B (en) Obstacle detection method and device for vehicle
Papaioannou et al. Tracking people in highly dynamic industrial environments
KR101030317B1 (en) Apparatus for tracking obstacle using stereo vision and method thereof
CN113910224A (en) Robot following method and device and electronic equipment
AU2016202042A1 (en) Backtracking indoor trajectories using mobile sensors
WO2017150162A1 (en) Position estimating device, position estimating method and program
CN111381587B (en) Following method and device for following robot
CN111445499B (en) Method and device for identifying target information
CN110781730B (en) Intelligent driving sensing method and sensing device
CN111340880B (en) Method and apparatus for generating predictive model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210302

Address after: 101, 1st floor, building 2, yard 20, Suzhou street, Haidian District, Beijing 100080

Applicant after: Beijing Jingbangda Trading Co.,Ltd.

Address before: 100086 8th Floor, 76 Zhichun Road, Haidian District, Beijing

Applicant before: BEIJING JINGDONG SHANGKE INFORMATION TECHNOLOGY Co.,Ltd.

Applicant before: BEIJING JINGDONG CENTURY TRADING Co.,Ltd.

Effective date of registration: 20210302

Address after: Room a1905, 19 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant after: Beijing Jingdong Qianshi Technology Co.,Ltd.

Address before: 101, 1st floor, building 2, yard 20, Suzhou street, Haidian District, Beijing 100080

Applicant before: Beijing Jingbangda Trading Co.,Ltd.

GR01 Patent grant