CN114659450A - Robot following method, device, robot and storage medium - Google Patents

Robot following method, device, robot and storage medium

Info

Publication number
CN114659450A
Authority
CN
China
Prior art keywords
pixel point
point set
robot
pixel
depth information
Prior art date
Legal status
Granted
Application number
CN202210306848.1A
Other languages
Chinese (zh)
Other versions
CN114659450B (en)
Inventor
Liu Feifei (刘非非)
Current Assignee
Beijing Xiaomi Robot Technology Co ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd
Priority to CN202210306848.1A
Publication of CN114659450A
Application granted
Publication of CN114659450B
Status: Active

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/02 Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
    • G01B11/026 Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness by measuring distance between sensor and object
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01D MEASURING NOT SPECIALLY ADAPTED FOR A SPECIFIC VARIABLE; ARRANGEMENTS FOR MEASURING TWO OR MORE VARIABLES NOT COVERED IN A SINGLE OTHER SUBCLASS; TARIFF METERING APPARATUS; MEASURING OR TESTING NOT OTHERWISE PROVIDED FOR
    • G01D21/00 Measuring or testing not otherwise provided for
    • G01D21/02 Measuring two or more variables by means not covered by a single other subclass
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/337 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Manipulator (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure relates to a robot following method, apparatus, robot, and storage medium, the method comprising: acquiring a depth image of a following object of the robot, and determining depth information of pixel points in the depth image; determining target pixel points on the depth image according to the depth information, wherein the target pixel points comprise at least one pixel point set, the number of pixel points in each pixel point set meets a preset number condition, and the pixel points within a pixel point set have the same depth information; determining the distance between the robot and the following object according to the depth information and the number of the target pixel points; and following the following object based on the distance. The method can calculate the distance with guaranteed accuracy even when the robot's computing power is limited, thereby ensuring that the robot follows stably.

Description

Robot following method, device, robot and storage medium
Technical Field
The present disclosure relates to the field of robot technologies, and in particular, to a robot following method and apparatus, a robot, and a storage medium.
Background
Robotics is currently a very active and promising field. As more and more robots enter people's lives, interactions between robots and other objects become more and more frequent, and the distance between a robot and its interactive object is a critical piece of information during such interaction. Usually, the robot can perform normal interaction only if it accurately knows the distance to the interactive object.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a robot following method, apparatus, robot, and storage medium.
According to a first aspect of embodiments of the present disclosure, there is provided a robot following method including:
acquiring a depth image of a following object of the robot, and determining depth information of pixel points in the depth image;
determining target pixel points on the depth image according to the depth information, wherein the target pixel points comprise at least one pixel point set, the number of the pixel points in the pixel point set meets a preset number condition, and the depth information of the pixel points in the pixel point set is the same;
determining the distance between the robot and the following object according to the depth information of the target pixel points and the number of the target pixel points;
following the following object based on the distance.
Optionally, the determining a target pixel point on the depth image according to the depth information includes:
dividing the pixel points in the depth image into at least one pixel point set according to the depth information, wherein different pixel point sets correspond to different depth information;
determining a pixel point set with the number of pixels larger than a first number threshold in the at least one pixel point set as a target pixel point set, wherein the first number threshold is positively correlated with the total number of pixels in the depth image;
and determining the pixel points in the target pixel point set as the target pixel points.
Optionally, the determining a target pixel point on the depth image according to the depth information includes:
dividing the pixel points in the depth image into at least one pixel point set according to the depth information, wherein different pixel point sets correspond to different depth information;
determining a pixel point set of which the number of the pixel points in the at least one pixel point set is greater than a first number threshold value as a first pixel point set;
if the number of the first pixel point sets is multiple, merging the multiple first pixel point sets according to the depth information corresponding to each first pixel point set in the multiple first pixel point sets to obtain a second pixel point set;
if the number of the pixels in the second pixel set is larger than a second number threshold, determining the pixel set included in the second pixel set as the target pixel set, wherein the first number threshold is positively correlated with the total number of the pixels in the depth image, and the second number threshold is larger than the first number threshold;
and determining the pixel points in the target pixel point set as the target pixel points.
Optionally, the merging, according to depth information corresponding to each first pixel point set in the plurality of first pixel point sets, the plurality of first pixel point sets to obtain a second pixel point set includes:
and merging the first pixel point sets with the depth information difference not exceeding a preset difference value in the first pixel point sets according to the depth information corresponding to each first pixel point set in the first pixel point sets to obtain a second pixel point set.
Optionally, the acquiring a depth image of a following object of the robot includes:
acquiring, by an image acquisition device of the robot, an image including the following object;
identifying the following object in the image, and intercepting a region image corresponding to the following object from the image as an initial image;
and carrying out depth alignment processing on the initial image and the RGB image to obtain a depth image of the following object.
Optionally, before the determining, according to the depth information, a target pixel point on the depth image, the method further includes:
and filtering pixel points with the depth information of 0 in the depth image.
Optionally, the determining a distance between the robot and the following object according to the depth information of the target pixel points and the number of the target pixel points includes:
determining, for each pixel point set in the at least one pixel point set included in the target pixel points, the product of the number of pixel points in the pixel point set and the depth information corresponding to the pixel point set, to obtain the product corresponding to each pixel point set;
accumulating the product corresponding to each pixel point set to obtain a first value;
accumulating the number of the pixels of each pixel set to obtain a second value;
determining a quotient of the first value and the second value, and determining a distance between the robot and the following object based on the quotient of the first value and the second value.
Optionally, said following the following object based on the distance comprises:
adjusting a following speed of the robot based on the distance, wherein the following speed is positively correlated with the distance.
Optionally, said following the following object based on the distance comprises:
acquiring a plurality of feasible paths between the robot and the following object;
and determining a target feasible path from the feasible paths according to the distance, and following the following object according to the target feasible path.
Optionally, the determining a target feasible path from the multiple feasible paths according to the distance includes:
and if the distance is greater than a distance threshold value, determining the feasible path with the shortest path in the feasible paths as the target feasible path.
According to a second aspect of embodiments of the present disclosure, there is provided a robot following device, the device comprising:
the depth information determining module is configured to acquire a depth image of a following object of the robot and determine depth information of pixel points in the depth image;
a target pixel point determining module configured to determine a target pixel point on the depth image according to the depth information, where the target pixel point includes at least one pixel point set, the number of pixel points in the pixel point set satisfies a preset number condition, and the depth information of the pixel points in the pixel point set is the same;
a distance determination module configured to determine a distance between the robot and the following object according to the depth information of the target pixel points and the number of the target pixel points;
a following module configured to follow the following object based on the distance.
According to a third aspect of embodiments of the present disclosure, there is provided a robot comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquiring a depth image of a following object of the robot, and determining depth information of pixel points in the depth image;
determining target pixel points on the depth image according to the depth information, wherein the target pixel points comprise at least one pixel point set, the number of the pixel points in the pixel point set meets a preset number condition, and the depth information of the pixel points in the pixel point set is the same;
determining the distance between the robot and the following object according to the depth information of the target pixel points and the number of the target pixel points;
following the following object based on the distance.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the robot following method provided by the first aspect of the present disclosure.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects: the method comprises the steps of obtaining a depth image of a following object of the robot, and determining depth information of pixel points in the depth image; determining target pixel points on the depth image according to the depth information, wherein the target pixel points comprise at least one pixel point set, the number of the pixel points in the pixel point set meets a preset number condition, and the depth information of the pixel points in the pixel point set is the same; then, determining the distance between the robot and the following object according to the depth information of the target pixel points and the number of the target pixel points; finally, the following object is followed based on the distance. Therefore, the robot can screen effective target pixel points from the depth image of the following object, and the distance is calculated according to the effective target pixel points, so that all the pixel points in the depth image are prevented from being processed, the occupancy rate of computing resources of the robot is reduced, and the robot tracking system has the advantages of high processing speed, good numerical stability, high precision and the like, and further can ensure that the robot can stably follow.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow chart illustrating a robot following method according to an exemplary embodiment.
Fig. 2 is a flowchart illustrating a robot following method according to another exemplary embodiment.
Fig. 3 is an initial depth map containing a following object according to the embodiment shown in fig. 2.
Fig. 4 is a histogram for a plurality of pixel point sets according to the embodiment of fig. 2.
Fig. 5 is another histogram for a plurality of pixel point sets according to the embodiment of fig. 2.
FIG. 6 is a block diagram illustrating a robotic follower device, according to an exemplary embodiment.
FIG. 7 is a block diagram of a robot shown in accordance with an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the disclosure, as detailed in the appended claims.
With the rapid development of the robot technology, the function of the robot in various fields is more and more important. The existing robot can simulate human behaviors to a certain extent and can replace people to do partial work.
In practical applications, a robot completes work tasks by interacting with interactive objects, and the distance between the robot and an interactive object is a universal, strong requirement in the robotics field: only with this distance information can the robot reasonably avoid obstacles while walking or grasp objects with its mechanical arm. Because robots differ in the sensors they are equipped with and in their computing power, the methods for calculating the distance between the robot and the interactive object also differ greatly from robot to robot. The interactive objects of the robot include, but are not limited to: users, other mechanical devices, and the like.
In the related art, during interaction (such as following), the robot must simultaneously control its own motion and collect and process various kinds of information, so its computing resources are often limited. In this situation, the robot cannot devote a large amount of computing resources to calculating the distance to the interactive object, so the accuracy of the calculated distance cannot be guaranteed; during following, this leads to unstable following by the robot.
In view of the above problems, the present embodiment provides a robot following method, an apparatus, a robot, and a storage medium, which can calculate the distance with guaranteed accuracy even when the robot's computing power is limited, thereby ensuring the stability of robot following.
Fig. 1 is a flow chart illustrating a robot following method according to an exemplary embodiment, as shown in fig. 1, including the steps of:
in step S11, a depth image of the following object of the robot is acquired, and depth information of a pixel point in the depth image is determined.
For example, the execution subject of the robot following method of the present embodiment may be a movable robot of different application fields, including but not limited to: pet robots, floor sweeping robots, transfer robots, and the like.
In some embodiments, the robot may be configured with a depth camera, and in a scene where the robot follows, the robot may acquire a depth image of a following object in real time through the depth camera, and then scan the depth image to obtain depth information of each pixel point in the depth image.
It is to be understood that the depth information may be a single depth value or a range of depth values. The depth value may be regarded as the distance between the actual position corresponding to the pixel point and the depth camera; the larger the depth value, the larger the distance.
In step S12, according to the depth information, a target pixel point on the depth image is determined, where the target pixel point includes at least one pixel point set, the number of pixel points in the pixel point set satisfies a preset number condition, and the depth information of the pixel points in the pixel point set is the same.
In some embodiments, the robot may divide all the pixel points on the depth image into different pixel point sets according to the depth information of each pixel point, that is, count and sort all the pixel points into a plurality of pixel point sets by their depth information. For example, the pixel points with depth values in the range a1 to a2 may be divided into pixel point set a, and the pixel points with depth values in the range a2 to a3 into pixel point set b, where a1 < a2 < a3. As another example, the robot may divide the pixel points with depth value equal to a1 into pixel point set a, those with depth value equal to a2 into pixel point set b, and those with depth value equal to a3 into pixel point set c, where a1, a2, and a3 are different from each other. Therefore, among the divided pixel point sets, the pixel points within the same set have the same depth information.
It is understood that depth information in this embodiment may be a single depth value or a depth value range, which is not limited herein.
Then, the robot can detect whether the number of the pixels in each pixel set meets a preset number condition.
As a mode, if the robot detects that the number of the pixels in the pixel set is greater than or equal to the preset number, it may be determined that the pixel set satisfies the preset number condition, otherwise, it is determined that the pixel set does not satisfy the preset number condition. The preset number may be determined according to the number of all pixel points in the depth image, for example, the preset number may be 1/2, 2/3 and the like of the number of all pixel points in the depth image.
It can be understood that when the robot detects that the number of pixel points in a pixel point set meets the preset number condition, this indicates that the depth information corresponding to the pixel point set is supported by enough pixel points: the depth information is effective depth information, and the pixel points in the set are effective pixel points.
After a pixel point set is determined to meet the preset number condition, the pixel point set can be determined as a target pixel point set, and the pixel points in the target pixel point set are taken as target pixel points. Illustratively, if among a first, a second, and a third pixel point set only the first and the third satisfy the preset number condition, the pixel points in the first and third pixel point sets may be determined as target pixel points.
Optionally, if two target pixel point sets with very similar depth information exist among the plurality of target pixel point sets, the two can be merged into one target pixel point set. For example, if the difference between the depth information of target pixel point set A and that of target pixel point set B is smaller than a preset value, sets A and B may be merged into one target pixel point set, which further reduces the resource occupancy.
In step S13, the distance between the robot and the following object is determined according to the depth information of the target pixel points and the number of the target pixel points.
In some embodiments, the depth information of each target pixel point in all the target pixel points may be accumulated to obtain an accumulated value, the accumulated value is divided by the total number of the target pixel points to obtain an average depth value of the target pixel points, and the average depth value is determined as the distance between the robot and the following object.
In step S14, the following object is followed based on the distance.
In some embodiments, the robot may adjust its speed according to the distance such that the distance between the robot and the following object is always within a specified distance range.
Therefore, in this embodiment, the depth image of the following object of the robot is obtained and the depth information of the pixel points in the depth image is determined; target pixel points on the depth image are determined according to the depth information, wherein the target pixel points comprise at least one pixel point set, the number of pixel points in each pixel point set meets a preset number condition, and the pixel points within a set have the same depth information; then the distance between the robot and the following object is determined according to the depth information and the number of the target pixel points; and finally the following object is followed based on the distance. In this way, the robot can screen out effective target pixel points from the depth image at the granularity of pixel point sets, which speeds up the screening, and calculates the distance only from those effective target pixel points. This avoids processing all the pixel points in the depth image, reduces the occupancy of the robot's computing resources, and offers high processing speed, good numerical stability, and high precision, thereby ensuring that the robot can follow stably.
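The flow of steps S11 to S14 can be made concrete with a short sketch. The Python code below is only an illustrative reading of the method, not the claimed implementation: the bin width used to quantize depths into pixel point sets, the fractional count threshold, and the toy input are all assumptions.

```python
import numpy as np

def estimate_follow_distance(depth_image, bin_width=0.1, min_fraction=0.5):
    """Estimate the robot-to-object distance (meters) from a depth image.

    Pixel points are grouped into sets by quantized depth value; only sets
    whose pixel count passes the preset number condition contribute.
    """
    depth = depth_image[depth_image > 0]              # drop invalid (depth 0) pixels
    labels = np.round(depth / bin_width) * bin_width  # quantize depth -> set label
    values, counts = np.unique(labels, return_counts=True)

    keep = counts > min_fraction * depth.size         # preset number condition
    values, counts = values[keep], counts[keep]
    if values.size == 0:
        return None                                   # no effective pixel point set

    # weighted average of set depths, weighted by set pixel counts
    return float(np.sum(values * counts) / np.sum(counts))

# toy depth map: only the set around 0.3 m passes the count threshold
print(estimate_follow_distance(np.array([[0.31, 0.30], [0.29, 2.50]])))  # ~0.3
```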
Fig. 2 is a flowchart illustrating a robot following method according to another exemplary embodiment, as shown in fig. 2, including the steps of:
in step S21, an image including the following object is captured by the image capturing device of the robot.
In some embodiments, the image acquisition device may be a depth camera (e.g., a binocular camera), a color camera, or the like. The depth camera is used for collecting depth images, and the color camera is used for collecting RGB images. Through the image acquisition device, the robot can simultaneously acquire an initial depth image and an RGB image that include the object to be followed.
In step S22, a following object in the image is identified, and a region image corresponding to the following object is cut out from the image as an initial image.
In some embodiments, the robot may identify a following object in the initial depth image, and intercept an area image corresponding to the following object in the initial depth image as the initial image. For example, as shown in fig. 3, after the robot recognizes the following object in the initial depth image, a rectangular frame with an appropriate size may be used to frame the following object in the initial depth image, and then the image in the rectangular frame may be cut out as the initial image. Optionally, the robot may further determine an edge of the following object in the initial depth image after identifying the following object in the initial depth image, and then cut an image in the edge as the initial image.
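As a sketch of this cropping step (the frame shape and the bounding box below are hypothetical stand-ins for a real depth frame and a real detector output):

```python
import numpy as np

depth_frame = np.zeros((480, 640), dtype=np.float32)  # stand-in initial depth image
x, y, w, h = 220, 60, 180, 360   # hypothetical box framing the followed object

# the region inside the detection box is cut out as the initial image,
# discarding background pixels unrelated to the followed object
initial_image = depth_frame[y:y + h, x:x + w]
print(initial_image.shape)  # (360, 180)
```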
In step S23, the depth alignment process is performed on the initial image and the RGB image, resulting in a depth image of the follow-up object.
In some embodiments, the robot may perform a depth alignment process on the initial image and the RGB image captured by the color camera to obtain a depth image of the following object. Wherein the initial image and the RGB image correspond to each other in time.
In other embodiments, the robot may perform the depth alignment processing on the initial depth image and the RGB image, and then intercept the area image corresponding to the following object from the image obtained by the depth alignment processing as the depth image of the following object.
In step S24, depth information of a pixel point in the depth image is determined.
The detailed implementation of step S24 can refer to step S11, and therefore is not described herein.
In some embodiments, after determining depth information of a pixel point in the depth image, the method may further include:
and filtering pixel points with the depth information of 0 in the depth image.
For example, pixel points with a depth value of 0 may be deleted from the depth image. Considering that a pixel point with depth information of 0 is usually caused by abnormal data acquisition, filtering the pixel points with depth information of 0 out of the depth image reduces abnormal pixel points, thereby improving the accuracy of the distance calculation.
In step S25, according to the depth information, a target pixel point on the depth image is determined, where the target pixel point includes at least one pixel point set, the number of pixel points in the pixel point set satisfies a preset number condition, and the depth information of the pixel points in the pixel point set is the same.
In some embodiments, a specific implementation of step S25 may include:
in step S251a, according to the depth information, the pixels in the depth image are determined as at least one pixel set, where different pixel sets correspond to different depth information.
Exemplarily, as shown in fig. 4, for example, a histogram of a plurality of pixel point sets may be established, where an abscissa of the histogram is a depth value of the pixel point set, and an ordinate is a number of pixel points of the pixel point set. As can be seen from fig. 4, the depth value of the pixel point set 1 is 0.3 m (i.e., the pixel points whose depth values are equal to or approximately equal to 0.3 constitute the pixel point set 1), and the number of the pixel points is 2500; the depth value of the pixel point set 2 is 0.5 m, and the number of the pixel points is 400; the depth value of the pixel point set 3 is 0.7 m, and the number of the pixel points is 2000; the depth value of the pixel point set 4 is 0.9 m, and the number of the pixel points is 1600.
It can be understood that in this embodiment, when the depth value corresponding to a pixel point set is 0.3, this may mean that the depth values of the pixel points in the set are exactly 0.3, or approximately 0.3: depth values within an error range (e.g., 0.01) of 0.3 may also be treated as 0.3. For example, if the depth values of the pixel points in a set fall in the interval 0.29 to 0.31, the depth value of that set may be determined as 0.3.
In step S252a, a pixel point set in which the number of pixel points in at least one pixel point set is greater than a first number threshold is determined as a target pixel point set, where the first number threshold is positively correlated with the total number of pixel points in the depth image.
Following the above example, if the first number threshold is 500, pixel point set 1, pixel point set 3, and pixel point set 4 may be determined as target pixel point sets, while pixel point set 2 (with only 400 pixel points) is excluded. The first number threshold is positively correlated with the total number of pixel points in the depth image: the larger the total number of pixel points in the depth image, the larger the first number threshold. The total number of pixel points in the depth image may be the total number of effective pixel points remaining after the pixel points with depth information of 0 have been filtered out.
In step S253a, it is determined that the pixels in the target pixel set are target pixels.
Following the above example, the pixel points in pixel point set 1, pixel point set 3, and pixel point set 4 may be determined as target pixel points.
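Using the numbers from fig. 4, this first variant reduces to a simple count filter. The sketch below assumes the pixel point sets have already been built as (depth value, pixel count) pairs:

```python
import numpy as np

# depth values (m) and pixel counts of pixel point sets 1-4 from fig. 4
set_depths = np.array([0.3, 0.5, 0.7, 0.9])
set_counts = np.array([2500, 400, 2000, 1600])

first_threshold = 500            # example first number threshold from the text
is_target = set_counts > first_threshold

# sets 1, 3 and 4 survive; set 2 (400 pixel points) is filtered out
print(set_depths[is_target])     # [0.3 0.7 0.9]
```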
In other embodiments, the specific implementation of step S25 may include:
in step S251b, according to the depth information, the pixel points in the depth image are determined as at least one pixel point set, where different pixel point sets correspond to different depth information.
In step S252b, a pixel point set, in which the number of pixel points in at least one pixel point set is greater than the first number threshold, is determined as a first pixel point set.
For example, as shown in fig. 5, if the first number threshold is 500, the pixel point set 1, the pixel point set 3, the pixel point set 4, and the pixel point set 7 in fig. 5 may be determined as the first pixel point set. The pixel point set 2, the pixel point set 5, and the pixel point set 6 in fig. 5 are filtered out.
In step S253b, if the number of the first pixel point sets is multiple, the multiple first pixel point sets are merged according to the depth information corresponding to each of the multiple first pixel point sets, so as to obtain a second pixel point set.
In some embodiments, specific embodiments of step S253b may include:
and merging the first pixel point sets with the depth information difference not exceeding a preset difference value in the plurality of first pixel point sets according to the depth information corresponding to each first pixel point set in the plurality of first pixel point sets to obtain a second pixel point set.
Continuing with the above example, as shown in fig. 5, among the first pixel point sets, the depth difference between pixel point set 1 and pixel point set 3 is 0.4 m, that between pixel point set 3 and pixel point set 4 is 0.2 m, and that between pixel point set 4 and pixel point set 7 is 0.6 m.
In some examples, the robot may sort the plurality of first pixel point sets by depth information, from small to large or from large to small, and then merge them in that order according to the preset difference value. For example, referring again to fig. 5, the retained first pixel point sets (pixel point set 1, pixel point set 3, pixel point set 4, and pixel point set 7) may be arranged along the depth axis in increasing order of depth value, and the merging then starts from pixel point set 1 and proceeds according to the preset difference value.
In step S254b, if the number of pixels in the second pixel set is greater than the second number threshold, the pixel set included in the second pixel set is determined as the target pixel set, where the first number threshold is positively correlated with the total number of pixels in the depth image, and the second number threshold is greater than the first number threshold.
Following the above example, a specific implementation of steps S253b to S254b may include: first judging, according to the preset difference value, whether pixel point set 1 and pixel point set 3 can be merged; if they can be merged (i.e., the depth difference between them is smaller than the preset difference value), merging them and then judging whether pixel point set 4 can also be merged in, and so on, until no subsequent pixel point set can be merged.
Continuing with the example of fig. 5, if the preset difference value is 0.5, pixel point set 1, pixel point set 3, and pixel point set 4 can be merged; since the depth difference between pixel point set 4 and pixel point set 7 is 0.6, which is greater than 0.5, it can be determined that no further pixel point set can be merged after pixel point set 4. Therefore, pixel point sets 1, 3, and 4 can be merged to obtain the second pixel point set. It is then judged whether the second pixel point set meets the number condition (that is, whether its number of pixel points is greater than the second number threshold); if so, pixel point sets 1, 3, and 4 in the second pixel point set can be determined as target pixel point sets.
In another example, still taking fig. 5, if the preset difference value is 0.2, then since the depth difference between pixel point set 1 and pixel point set 3 is 0.4, the two cannot be merged. In this case it may be judged whether pixel point set 1 alone meets the number condition; if so, pixel point set 1 may be directly determined as the second pixel point set and, accordingly, as a target pixel point set.
If pixel point set 1 does not meet the number condition, it may be judged whether the subsequent pixel point sets 3 and 4 can be merged. Since the depth difference between pixel point set 3 and pixel point set 4 is 0.2, which does not exceed the preset difference value, they can be merged; and since the depth difference between pixel point set 4 and pixel point set 7 is 0.6, which exceeds the preset difference value, no further pixel point set can be merged. Therefore, pixel point sets 3 and 4 can be merged to obtain the second pixel point set, and if the second pixel point set meets the number condition, pixel point sets 3 and 4 can be determined as target pixel point sets.
Following the above example, if pixel point set 1 and pixel point set 3 cannot be merged and the second number threshold is 2000, then pixel point set 1 (with 2500 pixel points) may be determined as a target pixel point set.
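The merging of steps S253b and S254b can be sketched as a greedy pass over the depth-sorted sets. The depth values and pixel counts below are assumptions chosen to be consistent with the gaps quoted for fig. 5 (0.4 m, 0.2 m, and 0.6 m), and only the first merge run is shown:

```python
import numpy as np

def merge_first_sets(depths, counts, preset_diff):
    """Merge depth-adjacent first pixel point sets, starting from the smallest
    depth, while the gap between neighbouring sets does not exceed preset_diff.
    Returns (member depths, total pixel count) of the merged run."""
    members, total = [float(depths[0])], int(counts[0])
    for d, c in zip(depths[1:], counts[1:]):
        if d - members[-1] > preset_diff:   # gap too large: stop merging
            break
        members.append(float(d))
        total += int(c)
    return members, total

depths = np.array([0.3, 0.7, 0.9, 1.5])      # sets 1, 3, 4, 7 (assumed depths)
counts = np.array([2500, 2000, 1600, 800])   # assumed pixel counts

members, total = merge_first_sets(depths, counts, preset_diff=0.5)
second_threshold = 2000
if total > second_threshold:                 # number condition on the merged set
    print("target pixel point sets:", members)   # sets 1, 3 and 4
```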
In this embodiment, the total number of pixel points in the depth image may be the total number of pixel points remaining after the pixel point sets whose number of pixel points does not exceed the first number threshold have been filtered out of the depth image.
In some embodiments, after a first target pixel point set has been determined from the first pixel point sets, whether to continue merging the remaining first pixel point sets may be decided according to the second number threshold. For example, if the second number threshold is greater than or equal to half of the total number of pixel points in the depth image, then once one target pixel point set has been determined, the remaining first pixel point sets cannot meet the second number threshold no matter how they are merged, so no further merging is needed.
In step S255b, it is determined that the pixels in the target pixel set are target pixels.
Following the above example, the pixel points in pixel point set 1 may be determined as the target pixel points.
In step S26, the distance between the robot and the following object is determined according to the depth information of the target pixel points and the number of the target pixel points.
In some embodiments, specific embodiments of step S26 may include:
in step S261, for each pixel point set in at least one pixel point set included in the target pixel point, a product of the number of pixel points in the pixel point set and depth information corresponding to the pixel point set is determined, and a product corresponding to each pixel point set is obtained.
Illustratively, suppose the target pixel points include pixel point set a and pixel point set b, where the depth information of pixel point set a is d_a and its number of pixel points is N_a, and the depth information of pixel point set b is d_b and its number of pixel points is N_b. It can then be calculated that the product corresponding to pixel point set a is d_a·N_a and the product corresponding to pixel point set b is d_b·N_b.
In step S262, the product corresponding to each pixel point set is accumulated to obtain a first value.
Following the above example, the first value is: d_a·N_a + d_b·N_b.
In step S263, the number of pixels in each pixel set is accumulated to obtain a second value.
Following the above example, the second value is: N_a + N_b.
In step S264, a quotient of the first value and the second value is determined, and based on the quotient of the first value and the second value, a distance between the robot and the following object is determined.
Following the above example, the distance between the robot and the following object can be found as:
d = (d_a·N_a + d_b·N_b) / (N_a + N_b)
where d_a and d_b may each be a single value or a single range, which is not limited herein.
In this embodiment, for each pixel point set in the at least one pixel point set included in the target pixel points, the product of the number of pixel points in the set and the depth information corresponding to the set is determined, giving the product corresponding to each set; the products corresponding to the sets are accumulated to obtain a first value; the numbers of pixel points of the sets are accumulated to obtain a second value; and the quotient of the first value and the second value is determined, the distance between the robot and the following object being determined based on that quotient. The distance is thus calculated as a weighted average, and because the depth information of different pixel point sets is taken into account, the distance between the robot and the following object can be calculated more comprehensively and accurately.
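As a worked numeric instance of steps S261 to S264 (the depth values and pixel counts are borrowed from the sets of fig. 4 purely for illustration):

```python
d_a, N_a = 0.3, 2500   # depth information and pixel count of pixel point set a
d_b, N_b = 0.7, 2000   # depth information and pixel count of pixel point set b

first_value = d_a * N_a + d_b * N_b     # accumulated products (S261, S262)
second_value = N_a + N_b                # accumulated pixel counts (S263)
distance = first_value / second_value   # weighted-average distance (S264)
print(round(distance, 3))               # 0.478 (meters)
```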
In step S27, the following object is followed based on the distance.
In some embodiments, specific embodiments of step S27 may include:
adjusting a following speed of the robot based on the distance, wherein the following speed is positively correlated with the distance.
Illustratively, suppose the target distance between the robot and the following object is L. If the detected actual distance L1 between the robot and the following object is less than L, the following speed of the robot can be reduced; if L1 is greater than L, the following speed can be increased, so that the robot always keeps a proper following distance from the following object.
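A minimal sketch of such speed adjustment follows; the proportional gain and the speed limit are assumptions, since the text only requires the following speed to be positively correlated with the distance:

```python
def adjust_following_speed(speed, actual_distance, target_distance,
                           gain=0.5, max_speed=1.2):
    """Speed up when the actual distance L1 exceeds the target distance L,
    slow down when the robot gets too close (an assumed proportional rule)."""
    error = actual_distance - target_distance          # L1 - L
    return min(max_speed, max(0.0, speed + gain * error))

# the robot is 0.3 m closer than the 1.5 m target, so it slows from 0.8 m/s
print(adjust_following_speed(0.8, actual_distance=1.2, target_distance=1.5))  # ~0.65
```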
In other embodiments, the specific implementation of step S27 may include:
in step S271, a plurality of feasible paths between the robot and the following object are acquired.
For example, the robot may scan an indoor map of a building in advance, and then determine a plurality of feasible paths in the indoor map according to its own position and the position of a following object, where the position of the following object may be acquired by other devices (such as a camera, a human body sensor, etc.) in the room and sent to the robot.
In step S272, a target feasible path is determined from the plurality of feasible paths according to the distance, and the following object is followed in accordance with the target feasible path.
By way of example, a specific implementation of step S272 may include: if the distance is greater than the distance threshold, determining the feasible path with the shortest path among the plurality of feasible paths as the target feasible path, so that the robot can quickly restore a proper following distance to the following object.
Alternatively, a specific implementation of step S272 may include: if the distance is smaller than the distance threshold, determining the feasible path with the longest path among the plurality of feasible paths as the target feasible path, so that the robot keeps a proper following distance from the following object.
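Both rules can be sketched together as follows; the path names and lengths are made up for illustration:

```python
def choose_feasible_path(paths, distance, distance_threshold):
    """paths: list of (name, length) tuples for feasible robot-to-object routes.
    Far from the object -> shortest path, to catch up quickly;
    otherwise -> longest path, to avoid closing in further."""
    if distance > distance_threshold:
        return min(paths, key=lambda p: p[1])
    return max(paths, key=lambda p: p[1])

feasible = [("corridor", 4.2), ("around the table", 6.8), ("direct", 3.1)]
# the robot is far behind (2.5 m > 1.0 m threshold), so the shortest path wins
print(choose_feasible_path(feasible, distance=2.5, distance_threshold=1.0))  # ('direct', 3.1)
```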
Optionally, after obtaining the distance between the robot and the following object, the robot may remind the user of the current distance in a voice broadcast manner when the distance is greater than a distance threshold. Optionally, the robot may also alert the user of the current distance every specified length of time.
As can be seen, in this embodiment, an image including a following object is acquired by an image acquisition device of a robot, the following object in the image is identified, a region image corresponding to the following object is cut out from the image as an initial image, and depth alignment processing is performed on the initial image and the RGB image, so as to obtain a depth image of the following object. Therefore, the contents irrelevant to the following object in the depth image can be effectively filtered, and the distance between the robot and the following object can be accurately obtained conveniently and subsequently according to the depth image of the following object.
FIG. 6 is a block diagram illustrating a robotic follower device, according to an exemplary embodiment. Referring to fig. 6, the apparatus 300 includes: a depth information determining module 310, a target pixel point determining module 320, a distance determining module 330, and a following module 340, wherein:
the depth information determining module 310 is configured to acquire a depth image of a following object of the robot and determine depth information of a pixel point in the depth image.
The target pixel point determining module 320 is configured to determine a target pixel point on the depth image according to the depth information, where the target pixel point includes at least one pixel point set, the number of pixel points in the pixel point set satisfies a preset number condition, and the depth information of the pixel points in the pixel point set is the same.
A distance determining module 330 configured to determine a distance between the robot and the following object according to the depth information of the target pixel points and the number of the target pixel points.
A following module 340 configured to follow the following object based on the distance.
In some embodiments, the target pixel point determining module 320 includes:
And the set division submodule is configured to divide the pixel points in the depth image into at least one pixel point set according to the depth information, wherein different pixel point sets correspond to different depth information.
The first target set determining submodule is configured to determine a pixel point set, in which the number of pixels in at least one pixel point set is greater than a first number threshold, as a target pixel point set, wherein the first number threshold is positively correlated with the total number of pixels in the depth image.
And the first target pixel point determining submodule is configured to determine pixel points in the target pixel point set as target pixel points.
In some embodiments, the target pixel point determining module 320 includes:
And the set division submodule is configured to divide the pixel points in the depth image into at least one pixel point set according to the depth information, wherein different pixel point sets correspond to different depth information.
And the first pixel point set determining submodule is configured to determine a pixel point set, of which the number of pixel points in the at least one pixel point set is greater than a first number threshold, as the first pixel point set.
And the merging submodule is configured to merge the plurality of first pixel point sets according to the depth information corresponding to each first pixel point set in the plurality of first pixel point sets when the number of the first pixel point sets is multiple, so as to obtain a second pixel point set.
And the second target set determining submodule is configured to determine the pixel point set included in the second pixel point set as the target pixel point set under the condition that the number of the pixel points in the second pixel point set is greater than a second number threshold, wherein the first number threshold is positively correlated with the total number of the pixel points in the depth image, and the second number threshold is greater than the first number threshold.
And the second target pixel point determining submodule is configured to determine pixel points in the target pixel point set as target pixel points.
In some embodiments, the merge sub-module is specifically configured to: and merging the first pixel point sets with the depth information difference not exceeding a preset difference value in the plurality of first pixel point sets according to the depth information corresponding to each first pixel point set in the plurality of first pixel point sets to obtain a second pixel point set.
In some implementations, the depth information determination module 310 includes:
an image acquisition sub-module configured to acquire an image including the following object by the image acquisition apparatus 300 of the robot.
And the intercepting submodule is configured to identify a following object in the image and intercept a region image corresponding to the following object from the image as an initial image.
And the depth image acquisition sub-module is configured to perform depth alignment processing on the initial image and the RGB image to obtain a depth image of the following object.
In some embodiments, the apparatus 300 further comprises:
and the filtering module is configured to filter pixel points with the depth information of 0 in the depth image.
In some embodiments, the distance determination module 330 is specifically configured to: for each pixel point set in the at least one pixel point set included in the target pixel points, determine the product of the number of pixel points in the pixel point set and the depth information corresponding to the pixel point set, to obtain the product corresponding to each pixel point set; accumulate the products corresponding to the pixel point sets to obtain a first value; accumulate the numbers of pixel points of the pixel point sets to obtain a second value; and determine the quotient of the first value and the second value, and determine, based on that quotient, the distance between the robot and the following object.
In some embodiments, the following module 340 is specifically configured to: adjusting a following speed of the robot based on the distance, wherein the following speed is positively correlated with the distance.
In some embodiments, the following module 340 is specifically configured to: acquiring a plurality of feasible paths between the robot and the following object; and determining a target feasible path from the feasible paths according to the distance, and following the following object according to the target feasible path.
In some embodiments, the following module 340 is further specifically configured to: and if the distance is greater than the distance threshold, determining the feasible path with the shortest path in the feasible paths as the target feasible path.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
The present disclosure also provides a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the robot following method provided by the present disclosure.
Fig. 7 is a block diagram illustrating a robot 800 for a robot following method according to an exemplary embodiment. For example, the robot 800 may be a sweeping or mopping robot, a transfer robot, a pet robot, or the like.
Referring to fig. 7, a robot 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls the overall operation of the robot 800, such as operations associated with display, data communication, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the robot 800. Examples of such data include instructions for any application or method operating on the robot 800, user information, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, and the like.
Power components 806 provide power to the various components of robot 800. Power components 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for robot 800.
The multimedia component 808 includes a screen that provides an output interface between the robot 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive an external audio signal when the robot 800 is in an operational mode, such as a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the robot 800, collecting various ambient information, and the like. For example, the sensor assembly 814 may detect an open/closed status of the robot 800, the relative positioning of components, such as a display and keypad of the robot 800, the sensor assembly 814 may also detect a change in position of the robot 800 or a component of the robot 800, the presence or absence of user contact with the robot 800, the orientation or acceleration/deceleration of the robot 800, and a change in temperature of the robot 800. Sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the robot 800 and other devices. The robot 800 may access a wireless network based on a communication standard, such as WiFi, 4G or 5G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the robot 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, such as the memory 804 including instructions executable by the processor 820 of the robot 800 to perform the above-described method. For example, the non-transitory computer-readable storage medium may be a ROM, a Random Access Memory (RAM), an optical data storage device, and the like.
In another exemplary embodiment, a computer program product is also provided, which comprises a computer program executable by a programmable apparatus, the computer program having code portions for performing the robot following method described above when executed by the programmable apparatus.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (13)

1. A robot following method, characterized by comprising:
acquiring a depth image of a following object of the robot, and determining depth information of pixel points in the depth image;
determining target pixel points on the depth image according to the depth information, wherein the target pixel points comprise at least one pixel point set, the number of the pixel points in the pixel point set meets a preset number condition, and the depth information of the pixel points in the pixel point set is the same;
determining the distance between the robot and the following object according to the depth information of the target pixel points and the number of the target pixel points;
following the following object based on the distance.
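Purely as an illustration, and not as part of the claim language, the four steps of claim 1 can be condensed into a single Python sketch. Everything below is an assumption: the depth image arrives as a NumPy array in the sensor's own distance units, pixel points with identical depth values form one pixel point set, and the preset number condition is a simple minimum count. The dependent claims refine each step, and the sketches that follow them reuse the same depth-value-to-count representation.

    import numpy as np

    def estimate_follow_distance(depth: np.ndarray, min_count: int) -> float:
        """Condensed sketch of claim 1: group pixels by identical depth
        value, keep the sets that satisfy the count condition, and derive
        a single distance from their depths and sizes."""
        values, counts = np.unique(depth[depth > 0], return_counts=True)
        keep = counts >= min_count  # the preset number condition
        if not keep.any():
            raise ValueError("no pixel point set satisfies the number condition")
        # count-weighted mean depth over the retained pixel point sets
        return float(np.average(values[keep], weights=counts[keep]))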
2. The method of claim 1, wherein determining target pixel points on the depth image according to the depth information comprises:
grouping pixel points in the depth image into at least one pixel point set according to the depth information, wherein different pixel point sets correspond to different depth information;
determining a pixel point set with the number of pixels larger than a first number threshold in the at least one pixel point set as a target pixel point set, wherein the first number threshold is positively correlated with the total number of pixels in the depth image;
and determining the pixel points in the target pixel point set as the target pixel points.
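A minimal sketch of claim 2, assuming the first number threshold is a fixed fraction of the total pixel count; the claim only requires that the threshold grow with the total number of pixels, so the 1% ratio is invented for illustration:

    import numpy as np

    def select_target_sets(depth: np.ndarray, ratio: float = 0.01) -> dict:
        """Map each depth value to the size of its pixel point set,
        keeping only the sets larger than the first number threshold."""
        threshold = ratio * depth.size  # positively correlated with total pixels
        values, counts = np.unique(depth, return_counts=True)
        return {int(v): int(c) for v, c in zip(values, counts) if c > threshold}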
3. The method of claim 1, wherein determining target pixel points on the depth image according to the depth information comprises:
grouping pixel points in the depth image into at least one pixel point set according to the depth information, wherein different pixel point sets correspond to different depth information;
determining, as a first pixel point set, any pixel point set in the at least one pixel point set whose number of pixel points is greater than a first number threshold;
if there are a plurality of first pixel point sets, merging the plurality of first pixel point sets according to the depth information corresponding to each of the first pixel point sets to obtain a second pixel point set;
if the number of pixels in the second pixel point set is greater than a second number threshold, determining the pixel point set included in the second pixel point set as the target pixel point set, wherein the first number threshold is positively correlated with the total number of pixels in the depth image, and the second number threshold is greater than the first number threshold;
and determining the pixel points in the target pixel point set as the target pixel points.
4. The method according to claim 3, wherein the merging the plurality of first pixel point sets according to the depth information corresponding to each of the plurality of first pixel point sets to obtain a second pixel point set comprises:
and merging, according to the depth information corresponding to each first pixel point set in the plurality of first pixel point sets, those first pixel point sets whose depth information differs by no more than a preset difference value, to obtain the second pixel point set.
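One plausible reading of the merging step in claims 3 and 4, sketched in Python: the first pixel point sets are sorted by depth value and chained into clusters whenever adjacent values differ by no more than the preset difference, and a merged cluster survives only if it clears the second (larger) threshold. The dict-of-counts input matches the claim-2 sketch above; the chaining rule is an assumption, not fixed by the claims.

    def merge_close_sets(first_sets: dict, max_gap: int, second_threshold: int) -> list:
        """first_sets maps depth value -> pixel count; returns the merged
        (second) pixel point sets that pass the second number threshold."""
        clusters, current = [], []
        for value in sorted(first_sets):
            if current and value - current[-1] > max_gap:
                clusters.append(current)  # gap exceeds the preset difference: close cluster
                current = []
            current.append(value)
        if current:
            clusters.append(current)
        return [
            {v: first_sets[v] for v in cluster}
            for cluster in clusters
            if sum(first_sets[v] for v in cluster) > second_threshold
        ]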
5. The method of claim 1, wherein said obtaining a depth image of a following object of the robot comprises:
acquiring, by an image acquisition device of the robot, an image including the following object;
identifying the following object in the image, and cropping a region image corresponding to the following object from the image as an initial image;
and performing depth alignment processing on the initial image and the RGB image to obtain the depth image of the following object.
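A sketch of the cropping step of claim 5, assuming an upstream detector has already produced a bounding box for the following object and that the depth frame has been aligned pixel-for-pixel with the RGB image; the alignment itself depends on the camera's intrinsics and is not shown:

    import numpy as np

    def crop_following_object(aligned_depth: np.ndarray, box: tuple) -> np.ndarray:
        """Cut the region image of the following object out of the
        RGB-aligned depth frame; box = (x, y, w, h) from a detector."""
        x, y, w, h = box
        return aligned_depth[y:y + h, x:x + w]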
6. The method of claim 1, further comprising, before determining target pixel points on the depth image according to the depth information:
filtering out pixel points whose depth information is 0 from the depth image.
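A depth value of 0 typically marks pixels the sensor could not measure; filtering them first keeps a spurious zero-depth set from dominating the grouping. A one-line sketch, again assuming a NumPy depth array:

    import numpy as np

    def filter_zero_depth(depth: np.ndarray) -> np.ndarray:
        return depth[depth > 0]  # keep only the measured pixel points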
7. The method of claim 1, wherein determining the distance between the robot and the following object according to the depth information of the target pixel points and the number of the target pixel points comprises:
for each pixel point set in the at least one pixel point set included in the target pixel points, determining the product of the number of pixel points in the pixel point set and the depth information corresponding to the pixel point set, to obtain the product corresponding to each pixel point set;
summing the products corresponding to the pixel point sets to obtain a first value;
summing the numbers of pixel points of the pixel point sets to obtain a second value;
determining a quotient of the first value and the second value, and determining a distance between the robot and the following object based on the quotient of the first value and the second value.
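Claim 7 is a count-weighted average: the first value is the sum of count times depth over the target pixel point sets, the second value is the sum of the counts, and the distance follows from their quotient. A direct transcription, reusing the depth-value-to-count dict of the earlier sketches:

    def weighted_distance(target_sets: dict) -> float:
        """target_sets maps depth value -> number of pixel points."""
        first = sum(count * value for value, count in target_sets.items())
        second = sum(target_sets.values())
        return first / second  # the quotient gives the distance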
8. The method of any one of claims 1-7, wherein said following the following object based on the distance comprises:
adjusting a following speed of the robot based on the distance, wherein the following speed is positively correlated with the distance.
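Claim 8 fixes only the sign of the relationship: the farther the target, the faster the robot. A linear law with clamping is one way to realize it; the gain and the speed limits below are invented constants, not taken from the disclosure:

    def follow_speed(distance: float, gain: float = 0.5,
                     v_min: float = 0.0, v_max: float = 1.5) -> float:
        """Following speed in m/s, positively correlated with distance."""
        return max(v_min, min(v_max, gain * distance))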
9. The method of any one of claims 1-7, wherein said following the following object based on the distance comprises:
acquiring a plurality of feasible paths between the robot and the following object;
and determining a target feasible path from the feasible paths according to the distance, and following the following object according to the target feasible path.
10. The method of claim 9, wherein determining a target feasible path from the plurality of feasible paths according to the distance comprises:
and if the distance is greater than a distance threshold value, determining the feasible path with the shortest path in the feasible paths as the target feasible path.
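A sketch of claim 10 under two assumptions: a feasible path is a list of (x, y) waypoints, and path length is the sum of its segment lengths. The behaviour for distances at or below the threshold is left open by the claim, so the sketch simply keeps the first candidate:

    import math

    def pick_path(paths: list, distance: float, distance_threshold: float) -> list:
        """Each path is a list of (x, y) waypoints."""
        def length(path):
            return sum(math.dist(a, b) for a, b in zip(path, path[1:]))
        if distance > distance_threshold:
            return min(paths, key=length)  # far target: shortest feasible path wins
        return paths[0]  # near target: the policy is not fixed by the claim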
11. A robot following device, comprising:
the depth information determining module is configured to acquire a depth image of a following object of the robot and determine depth information of pixel points in the depth image;
a target pixel point determining module configured to determine a target pixel point on the depth image according to the depth information, where the target pixel point includes at least one pixel point set, the number of pixel points in the pixel point set satisfies a preset number condition, and the depth information of the pixel points in the pixel point set is the same;
a distance determination module configured to determine a distance between the robot and the following object according to the depth information of the target pixel points and the number of the target pixel points;
a following module configured to follow the following object based on the distance.
12. A robot, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquiring a depth image of a following object of the robot, and determining depth information of pixel points in the depth image;
determining target pixel points on the depth image according to the depth information, wherein the target pixel points comprise at least one pixel point set, the number of the pixel points in the pixel point set meets a preset number condition, and the depth information of the pixel points in the pixel point set is the same;
determining the distance between the robot and the following object according to the depth information of the target pixel points and the number of the target pixel points;
following the following object based on the distance.
13. A computer-readable storage medium, on which computer program instructions are stored, which program instructions, when executed by a processor, carry out the steps of the method according to any one of claims 1 to 10.
CN202210306848.1A 2022-03-25 2022-03-25 Robot following method, device, robot and storage medium Active CN114659450B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210306848.1A CN114659450B (en) 2022-03-25 2022-03-25 Robot following method, device, robot and storage medium

Publications (2)

Publication Number Publication Date
CN114659450A (en) 2022-06-24
CN114659450B CN114659450B (en) 2023-11-14

Family

ID=82033470

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210306848.1A Active CN114659450B (en) 2022-03-25 2022-03-25 Robot following method, device, robot and storage medium

Country Status (1)

Country Link
CN (1) CN114659450B (en)

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013091369A1 (en) * 2011-12-22 2013-06-27 中国科学院自动化研究所 Multi-target segmentation and tracking method based on depth image
US20150206004A1 (en) * 2014-01-20 2015-07-23 Ricoh Company, Ltd. Object tracking method and device
CN104899590A (en) * 2015-05-21 2015-09-09 深圳大学 Visual target tracking method and system for unmanned aerial vehicle
CN108885778A (en) * 2016-04-06 2018-11-23 索尼公司 Image processing equipment and image processing method
CN106023219A (en) * 2016-05-26 2016-10-12 无锡天脉聚源传媒科技有限公司 Method and device for determining targets in image
WO2018133641A1 (en) * 2017-01-19 2018-07-26 Zhejiang Dahua Technology Co., Ltd. A locating method and system
CN107608392A (en) * 2017-09-19 2018-01-19 浙江大华技术股份有限公司 The method and apparatus that a kind of target follows
CN108537843A (en) * 2018-03-12 2018-09-14 北京华凯汇信息科技有限公司 The method and device of depth of field distance is obtained according to depth image
CN108527366A (en) * 2018-03-22 2018-09-14 北京理工华汇智能科技有限公司 Robot follower method and device based on depth of field distance
CN109087347A (en) * 2018-08-15 2018-12-25 杭州光珀智能科技有限公司 A kind of image processing method and device
CN112102386A (en) * 2019-01-22 2020-12-18 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN110135382A (en) * 2019-05-22 2019-08-16 北京华捷艾米科技有限公司 A kind of human body detecting method and device
CN110515384A (en) * 2019-09-09 2019-11-29 深圳市三宝创新智能有限公司 A kind of the human body follower method and robot of view-based access control model mark
CN112223278A (en) * 2020-09-09 2021-01-15 山东省科学院自动化研究所 Detection robot following method and system based on depth visual information

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115920420A (en) * 2023-02-20 2023-04-07 自贡创赢智能科技有限公司 Electronic dinosaur of trailing type

Also Published As

Publication number Publication date
CN114659450B (en) 2023-11-14

Similar Documents

Publication Publication Date Title
CN106651955B (en) Method and device for positioning target object in picture
RU2596580C2 (en) Method and device for image segmentation
CN106331504B (en) Shooting method and device
US20210374447A1 (en) Method and device for processing image, electronic equipment, and storage medium
CN105512685B (en) Object identification method and device
CN108010060B (en) Target detection method and device
CN106778773B (en) Method and device for positioning target object in picture
RU2612892C2 (en) Method and device of auto focus
JP6800628B2 (en) Tracking device, tracking method, and program
CN107193653B (en) Bandwidth resource allocation method, device and storage medium
EP3185209A1 (en) Depth maps generated from a single sensor
CN107563994B (en) Image significance detection method and device
US20170118298A1 (en) Method, device, and computer-readable medium for pushing information
CN106713734B (en) Automatic focusing method and device
CN104182127A (en) Icon movement method and device
CN111461182B (en) Image processing method, image processing apparatus, and storage medium
CN105631803A (en) Method and device for filter processing
CN114187498A (en) Occlusion detection method and device, electronic equipment and storage medium
CN114659450B (en) Robot following method, device, robot and storage medium
CN112202962A (en) Screen brightness adjusting method and device and storage medium
CN108154090B (en) Face recognition method and device
CN114549578A (en) Target tracking method, device and storage medium
EP4366289A1 (en) Photographing method and related apparatus
CN111832338A (en) Object detection method and device, electronic equipment and storage medium
CN112153291B (en) Photographing method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20231008

Address after: Room 602, 6th Floor, Building 5, Building 15, Kechuang 10th Street, Beijing Economic and Technological Development Zone, Daxing District, Beijing, 100176

Applicant after: Beijing Xiaomi Robot Technology Co.,Ltd.

Address before: No.018, 8th floor, building 6, No.33 yard, middle Xierqi Road, Haidian District, Beijing 100085

Applicant before: BEIJING XIAOMI MOBILE SOFTWARE Co.,Ltd.

GR01 Patent grant