CN111813120A - Method and device for identifying moving target of robot and electronic equipment

Method and device for identifying moving target of robot and electronic equipment

Info

Publication number
CN111813120A
Authority
CN
China
Prior art keywords
point cloud
cloud data
target
original point
data sets
Prior art date
Legal status
Pending
Application number
CN202010666568.2A
Other languages
Chinese (zh)
Inventor
吴健 (Wu Jian)
刘晋浩 (Liu Jinhao)
王东炜 (Wang Dongwei)
马宇政 (Ma Yuzheng)
赵刚 (Zhao Gang)
王辉 (Wang Hui)
Current Assignee
Beijing Forestry University
Original Assignee
Beijing Forestry University
Priority date
Filing date
Publication date
Application filed by Beijing Forestry University
Priority to CN202010666568.2A
Publication of CN111813120A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02: Control of position or course in two dimensions
    • G05D1/021: Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0257: Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/66: Tracking systems using electromagnetic waves other than radio waves
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02: Control of position or course in two dimensions
    • G05D1/021: Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212: Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0223: Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02: Control of position or course in two dimensions
    • G05D1/021: Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0276: Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The invention provides a method and a device for identifying a moving target of a robot, and an electronic device, relating to the technical field of automatic control. The method comprises the following steps: first, the environment where the robot is located is collected by a laser radar to obtain multiple sets of original point cloud data sets; then, according to the collection order of the multiple sets, the point cloud data in any two adjacent original point cloud data sets are registered to obtain the two adjacent sets after registration; finally, the average distance between any two adjacent registered sets is calculated, and whether the environment where the robot is located contains a moving target is determined according to the average distance, the moving target being tracked if it is present. The invention solves the technical problem of tracking failure in traditional lidar-based tracking methods, caused by pedestrians moving beyond the scanning range of the laser radar or targets being lost due to external environment interference.

Description

Method and device for identifying moving target of robot and electronic equipment
Technical Field
The present invention relates to the field of automatic control technologies, and in particular, to a method and an apparatus for identifying a moving target of a robot, and an electronic device.
Background
Since the beginning of the 21st century, as human society has gradually entered the intelligent era, intelligent robots have developed rapidly. As an important branch of intelligent mobile robots, indoor mobile robots, such as floor-sweeping robots and shopping-mall guide robots, are attracting more and more researchers. Pedestrian tracking is an important function of indoor mobile robots: it gives the robot a certain recognition capability, and many application scenarios require it. Research on pedestrian tracking for indoor mobile robots therefore has great theoretical significance and practical application value.
Currently, in the field of pedestrian tracking, the main methods, classified by sensor type, are tracking based on multi-sensor fusion, tracking based on vision, and tracking based on laser radar. However, tracking based on multi-sensor fusion has few mature technical schemes for pedestrian tracking and is difficult to implement, and vision-based tracking is susceptible to factors such as lighting, appearance, and background. Tracking based on laser radar can directly measure the relative position between the robot and the person, and the laser radar is not easily affected by external illumination; the present application therefore adopts a laser radar scheme.
However, when a pedestrian walks beyond the scanning range of the laser radar, or moves too far between two scans, existing lidar-based tracking methods suffer from tracking failure.
Disclosure of Invention
In view of the above, the present invention provides a method and an apparatus for identifying a moving target of a robot, and an electronic device, which solve the technical problem of tracking failure in conventional lidar-based tracking methods caused by a pedestrian moving too far between scans or walking beyond the scanning range of the laser radar.
In a first aspect, an embodiment of the present invention provides a method for identifying a moving target of a robot, where the method is applied to a robot in which a laser radar is disposed, and the method includes: collecting the environment where the robot is located through the laser radar to obtain multiple sets of original point cloud data sets, where an original point cloud data set represents coordinate information of a target object in the environment in a target coordinate system, the target coordinate system being a coordinate system established based on the center of the laser radar; registering, with the invariants in the original point cloud data sets as the registration condition, the point cloud data in any two adjacent sets of the multiple sets according to their collection order, to obtain the two adjacent sets after registration; calculating the average distance between any two adjacent registered sets; and determining, according to the average distance, whether the environment where the robot is located contains a moving target, where if a moving target is determined to be contained, the point cloud distance and angle information of the moving target are extracted and the moving target is tracked.
Further, calculating the average distance between the two sets of original point cloud data sets after registration comprises: calculating the distance between the matched point clouds in any two adjacent sets of original point cloud data sets after registration to obtain a plurality of distances; and calculating the average value among the plurality of distances to obtain the average distance.
Further, determining whether the environment in which the robot is located includes a moving target according to the average distance includes: determining a target threshold value according to the average distance; comparing each distance in the plurality of distances with the target threshold value to obtain a comparison result; and if the target distance which is greater than or equal to the target threshold value is determined to be contained in the plurality of distances according to the comparison result, determining that the moving target is contained, wherein the point cloud corresponding to the target distance in the original point cloud data set is the point cloud to which the moving target belongs.
Further, registering the point cloud data in any two adjacent sets of original point cloud data sets according to the collection order, with the invariants in the original point cloud data sets as the registration condition, comprises: registering the point cloud data in the two adjacent sets to obtain a transformation formula, expressed as Q = RP + T, where Q and P are the two adjacent original point cloud data sets, the original point cloud data set Q is acquired after the original point cloud data set P, R is a rotation parameter, and T is a translation parameter; and transforming the original point cloud data set P according to the transformation formula to obtain a transformed original point cloud data set P', and determining the transformed original point cloud data set P' and the original point cloud data set Q as the two adjacent original point cloud data sets after registration.
Further, acquiring the environment where the robot is located through the laser radar to obtain a plurality of groups of original point cloud data sets comprises: collecting the environment where the robot is located through the laser radar to obtain a plurality of groups of original information data sets, wherein the original information data sets comprise: a distance of a target object within the environment from the lidar center, an angle of the target object within the environment in the target coordinate system; and converting the multiple groups of original information data sets into the multiple groups of original point cloud data sets according to the conversion relation between the polar coordinate system and the Cartesian coordinate system.
Further, extracting the point cloud distance and angle information of the moving target, and tracking the moving target comprises: determining the point cloud of the moving target in the original point cloud data set Q as a target point cloud; determining the rotation angle of the robot according to the angle of the center of the target point cloud in the target coordinate system; calculating the target average distance between the target point cloud and the center of the laser radar, and determining the moving speed of the robot according to the target average distance; and tracking the moving target according to the rotation angle and the moving speed.
Further, calculating the target average distance between the target point cloud and the center of the laser radar comprises: calculating the distance between each point cloud in the target point cloud and the center of the laser radar to obtain a plurality of target distances; and calculating the average value of the plurality of target distances to obtain the target average distance.
Further, the method further comprises: if the mobile target is determined not to be contained, an emergency instruction is sent to an upper computer; and receiving a holding instruction sent by the upper computer according to the emergency instruction, wherein the holding instruction is used for indicating the robot to execute the motion state of the robot at the previous moment, and keeping the laser radar in a collection state so as to enable the laser radar to continue to collect.
In a second aspect, an embodiment of the present invention further provides an apparatus for identifying a moving target of a robot, including: the acquisition unit is used for acquiring the environment where the robot is located through the laser radar to obtain a plurality of groups of original point cloud data sets; the original point cloud data set is used for representing coordinate information of a target object in the environment in a target coordinate system, and the target coordinate system is a coordinate system established based on the center of the laser radar; the registration unit is used for registering the point cloud data in any two adjacent original point cloud data sets in the multiple groups of original point cloud data sets according to the acquisition sequence of the multiple groups of original point cloud data sets by taking the invariants in the original point cloud data sets as registration conditions to obtain the two adjacent original point cloud data sets after registration; and the identification unit is used for calculating the average distance between any two adjacent original point cloud data sets after registration, and determining whether the environment where the robot is located contains a moving target according to the average distance, wherein if the environment contains the moving target, the point cloud distance and the angle information of the moving target are extracted, and the moving target is tracked.
In a third aspect, an embodiment of the present invention further provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor is equipped with a ROS platform and executes the computer program to implement the steps of the method of any one of the implementations of the first aspect.
In the embodiment of the invention, firstly, the environment of the robot is collected through a laser radar to obtain a plurality of groups of original point cloud data sets; then, registering the point cloud data in any two adjacent original point cloud data sets in the multiple groups of original point cloud data sets according to the collection sequence of the multiple groups of original point cloud data sets to obtain any two adjacent original point cloud data sets after registration; finally, calculating the average distance between any two adjacent groups of original point cloud data sets after registration; and determining whether the environment of the robot contains a moving target or not according to the average distance, wherein if the environment of the robot contains the moving target, tracking the moving target. In this embodiment, a mode of determining whether the environment of the robot contains a moving target or not according to the average distance between any two adjacent sets of original point cloud data sets after registration is adopted, so that the requirement on an upper computer is reduced, the scanning capability of the laser radar is applied to the greatest extent, and the technical problem of tracking failure caused by the fact that the moving distance of pedestrians and the fact that the pedestrians walk beyond the scanning range of the laser radar in the existing tracking method based on the laser radar are relieved.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a flowchart of a method for identifying a moving target of a robot according to an embodiment of the present invention;
FIG. 2 is an overall outline display diagram of a robot according to an embodiment of the present invention;
fig. 3 is a schematic connection diagram of an internal component structure of the robot according to the embodiment of the present invention;
fig. 4 is a schematic diagram of a device for identifying a moving object of a robot according to an embodiment of the present invention;
fig. 5 is a schematic view of a storage box of an identification device for a moving object of a robot according to an embodiment of the present invention;
fig. 6 is a schematic diagram of an electronic device for identifying a moving object of a robot according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The first embodiment is as follows:
in accordance with an embodiment of the present invention, there is provided an embodiment of a method for identifying a moving object of a robot, it is noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system such as a set of computer executable instructions, and that while a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different than here.
Fig. 1 is a flowchart of a method for identifying a moving object of a robot according to an embodiment of the present invention; as shown in fig. 1, the method includes the following steps:
step S102, collecting the environment of the robot through the laser radar to obtain a plurality of groups of original point cloud data sets; the original point cloud data set is used for representing coordinate information of a target object in the environment in a target coordinate system, and the target coordinate system is a coordinate system established based on the center of the laser radar;
step S104, registering the point cloud data in any two adjacent original point cloud data sets in the multiple groups of original point cloud data sets according to the collection sequence of the multiple groups of original point cloud data sets by taking the invariants in the original point cloud data sets as registration conditions to obtain the two adjacent original point cloud data sets after registration;
step S106, calculating the average distance between any two adjacent groups of original point cloud data sets after registration; and determining whether the environment of the robot contains a moving target or not according to the average distance, wherein if the environment of the robot contains the moving target, extracting the point cloud distance and angle information of the moving target, and tracking the moving target.
In the embodiment of the invention, firstly, the environment of the robot is collected through the laser radar to obtain a plurality of groups of original point cloud data sets; then, registering the point cloud data in any two adjacent original point cloud data sets in the multiple groups of original point cloud data sets according to the acquisition sequence of the multiple groups of original point cloud data sets by taking the invariants in the original point cloud data sets as registration conditions to obtain the two adjacent original point cloud data sets after registration; finally, calculating the average distance between any two adjacent groups of original point cloud data sets after registration; and determining whether the environment of the robot contains a moving target or not according to the average distance, wherein if the environment of the robot contains the moving target, extracting the point cloud distance and angle information of the moving target, and tracking the moving target. In this embodiment, a mode of determining whether the environment of the robot contains a moving target or not according to the average distance between any two adjacent sets of original point cloud data sets after registration is adopted, so that the requirement on an upper computer can be reduced, the scanning capability of a laser radar is applied to the maximum extent, and the technical problem of tracking failure caused by the fact that the moving distance of pedestrians and the fact that the pedestrians walk beyond the scanning range of the laser radar in the conventional tracking method based on the laser radar are solved.
It should be noted that steps S102 to S106 may be executed by a controller in the robot. Steps S102 to S106 are described in detail below.
As can be seen from the above description, in this embodiment, first, the environment where the robot is located is collected by the laser radar, so as to obtain a plurality of sets of original point cloud data sets, specifically, the step S102 includes the following steps:
step S1021, collecting the environment where the robot is located through the laser radar to obtain a plurality of groups of original information data sets, wherein the original information data sets comprise: a distance of a target object within the environment from the lidar center, an angle of a target object within the environment in the target coordinate system.
In the application, the original information data set acquired by the laser radar consists of the distance ρ between a target object in the environment where the robot is located and the center of the laser radar, and the angle θ of that target object in the target coordinate system. The laser radar may be configured to collect data at different times, so that multiple sets of raw information data sets are obtained, one set per collection. Suppose the original information data set acquired by the laser radar at the i-th time is H_i, where H_i includes the distance ρ_i between the target object and the lidar center and the angle θ_i of the target object in the target coordinate system. The multiple sets of original information data sets collected by the laser radar are then H = (H_1, ..., H_k)^T, k sets in total, where k is a positive integer greater than 0.
Step S1022, according to the conversion relationship between the polar coordinate system and the cartesian coordinate system, the multiple sets of original information data sets are converted into the multiple sets of original point cloud data sets.
Specifically, the information data set H_i acquired by the laser radar at the i-th time is converted into the corresponding original point cloud data set M_i according to the conversion relation between the polar coordinate system and the Cartesian coordinate system:

x_i = ρ_i · cos θ_i,   y_i = ρ_i · sin θ_i

where the original point cloud data set M_i is the coordinate information, in the target coordinate system, of the target object collected by the laser radar at the i-th time, and the target coordinate system is a coordinate system established based on the center of the laser radar.
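As an illustration of step S1022, the polar-to-Cartesian conversion can be written in a few lines; the following is a minimal sketch (function and variable names are illustrative, not from the patent):

```python
import numpy as np

def polar_to_cartesian(rho, theta):
    """Convert one lidar collection H_i = (rho_i, theta_i) into the
    original point cloud data set M_i of (x_i, y_i) coordinates in the
    lidar-centered target coordinate system."""
    rho = np.asarray(rho, dtype=float)
    theta = np.asarray(theta, dtype=float)
    x = rho * np.cos(theta)   # x_i = rho_i * cos(theta_i)
    y = rho * np.sin(theta)   # y_i = rho_i * sin(theta_i)
    return np.column_stack((x, y))  # shape (n, 2): one row per point
```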
In the application, after a plurality of groups of original point cloud data sets are obtained by collecting the environment where the robot is located through the laser radar, point cloud data in any two adjacent groups of original point cloud data sets in the plurality of groups of original point cloud data sets can be registered according to the collection sequence of the plurality of groups of original point cloud data sets by using invariant in the original point cloud data sets as a registration condition, so that any two adjacent groups of original point cloud data sets after registration are obtained.
In an optional embodiment, the step S104 of registering, with an invariant in the original point cloud data sets as a registration condition, point cloud data in two sets of original point cloud data sets that are arbitrarily adjacent to each other in the multiple sets of original point cloud data sets according to the collection order of the multiple sets of original point cloud data sets includes the following steps:
step S1041, registering the point cloud data in any two adjacent groups of original point cloud data sets to obtain a transformation formula; the transformation formula is expressed as: and Q is RP + T, wherein Q, P is the two sets of arbitrary adjacent original point cloud data sets, and the original point cloud data set Q is acquired after the original point cloud data set P, R is a rotation parameter, and T is a translation parameter.
In this embodiment, the original point cloud data set Q is a data set collected after the original point cloud data set P, and if the original point cloud data set P is the original point cloud data set M collected by the laser radar at the ith timeiThen the original point cloud data set Q is the original point cloud data acquired by the laser radar at the (i + 1) th timeCollection Mi+1
It should be noted that, since the robot is moving along with the target object (e.g., a person) at every moment, there exist a rotation angle and displacements ΔX and ΔY along the X and Y axis directions between the two consecutive sets of raw point cloud data Q and P. Therefore, in the present application, the purpose of registering the point cloud data in any two adjacent sets of original point cloud data sets according to the collection order, with the invariants in the original point cloud data sets as the registration condition, is to eliminate this rotation angle and the displacements ΔX and ΔY.
Specifically, in the present application, with the invariants in the original point cloud data sets as the registration condition, the goal of registering the point cloud data in any two adjacent sets according to the collection order is to obtain the rotation parameter R and the translation parameter T.

The rotation parameter R and the translation parameter T take the standard form of a two-dimensional rigid transformation: R is the 2x2 rotation matrix [[cos φ, -sin φ], [sin φ, cos φ]] determined by the rotation angle φ between the two scans, and T is the translation vector (ΔX, ΔY)^T.
in this embodiment, the rotation parameter R and the translation parameter T may be obtained in the following manner, and the specific steps include the following:
(1) carrying out regularization processing on any two adjacent groups of original point cloud data sets;
(2) extracting central point clouds of any two adjacent original point cloud data sets, and registering the central point clouds;
(3) obtaining a target evaluation function;
(4) and (4) deriving corresponding quantities in the target evaluation function to minimize the target evaluation function value so as to obtain a rotation parameter R and a translation parameter T.
After the rotation parameter R and the translation parameter T are obtained, the transformation formula can be determined from them as Q = RP + T.
Step S1042, transforming the original point cloud data set P according to the transformation formula to obtain a transformed original point cloud data set P', and determining the transformed original point cloud data set P' and the original point cloud data set Q as the two adjacent original point cloud data sets after registration.

In this application, the original point cloud data set P may be substituted into the transformation formula Q = RP + T, yielding the transformed original point cloud data set P', whose expression is P' = RP + T.
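As a sketch of steps (1) to (4) above, the rotation parameter R and the translation parameter T can be estimated with the closed-form centroid-and-SVD solution commonly used for rigid point set registration; the patent only specifies minimizing a target evaluation function, so the concrete estimator below is an assumption:

```python
import numpy as np

def estimate_rigid_transform(P, Q):
    """Estimate R (2x2 rotation) and T (2-vector) such that Q is approximately R P + T.
    P, Q: (n, 2) arrays of matched points from two consecutive scans.
    Uses the centroid + SVD solution of the least-squares objective
    sum_i ||R p_i + T - q_i||^2 (an assumed concrete form of the
    patent's target evaluation function)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)   # central point clouds
    H = (P - cP).T @ (Q - cQ)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = cQ - R @ cP
    return R, T

def apply_transform(P, R, T):
    """Transform P into the registered set P' = R P + T."""
    return P @ R.T + T
```

With R and T in hand, apply_transform produces the registered set P' = RP + T used in the following steps.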
It should be noted that any two adjacent sets of original point cloud data sets are not completely consistent even after registration; that is, the transformed original point cloud data set P' and the original point cloud data set Q do not completely coincide. The transformed original point cloud data set P' and the original point cloud data set Q are determined as the two adjacent point cloud data sets after registration.
In the present application, after determining the two sets of original point cloud data sets after registration, an average distance between the two sets of original point cloud data sets after registration can be calculated. The specific description is as follows:
1) Calculating the distance between the matched point clouds in any two adjacent sets of original point cloud data sets after registration to obtain a plurality of distances.

Specifically, in the present embodiment, let p'_i (i = 1, ..., n) be the points in the transformed original point cloud data set P' and q_i (i = 1, ..., n) the points in the original point cloud data set Q. The distances d_i (i = 1, ..., n) between the matched points p'_i and q_i in the two registered sets are d_i = ||p'_i - q_i||_2.
2) Calculating the average value of the plurality of distances to obtain the average distance.

In the present embodiment, the average distance d_av is the arithmetic mean d_av = (1/n) · Σ_{i=1}^{n} d_i.
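The per-point distances d_i and the average distance d_av translate directly into code; a minimal sketch, assuming the matched points p'_i and q_i share indices after registration:

```python
import numpy as np

def registration_distances(P_prime, Q):
    """Compute d_i = ||p'_i - q_i||_2 for the matched points and the
    average distance d_av over all n matches."""
    d = np.linalg.norm(P_prime - Q, axis=1)  # d_1, ..., d_n
    return d, d.mean()                       # (d_i array, d_av)
```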
in the application, after the average distance between any two adjacent original point cloud data sets after registration is obtained, whether the environment where the robot is located includes a moving target or not can be determined according to the average distance, wherein if the moving target is determined to be included, the point cloud distance and the angle information of the moving target are extracted, and the moving target is tracked. The specific description is as follows:
1) Determining a target threshold according to the average distance; specifically, in the present embodiment, the target threshold d_cr is d_cr = 4 · d_av.
2) Comparing each distance among the plurality of distances with the target threshold to obtain a comparison result.

In this embodiment, each distance d_i among the plurality of distances is compared with the target threshold d_cr: if a distance d_i is greater than or equal to the target threshold d_cr, its comparison result is 1; if d_i is less than d_cr, its comparison result is 0.
In this embodiment, when the robot tracks a pedestrian, common parts such as the external environment (walls, tables, etc.), together with the legs of the pedestrian, appear in both of two adjacently collected original point cloud data sets. The distances between matched point clouds corresponding to the common parts in the two registered sets fall below the target threshold, while the distances corresponding to the pedestrian's legs exceed it. Thus, the portions above the target threshold are captured, thereby identifying from the raw data set the point set corresponding to the legs of the pedestrian.
3) If the target distance which is larger than or equal to the target threshold value is determined to be included in the plurality of distances according to the comparison result, the moving target is determined to be included, and the point cloud corresponding to the target distance in the original point cloud data set is the point cloud to which the moving target belongs.
In this embodiment, the point cloud corresponding to the original point cloud data set with the comparison result of 1 is determined as the point cloud to which the moving target belongs.
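Putting the threshold rule together, the detection step might look as follows; this is a hedged sketch, with function and parameter names chosen for illustration:

```python
import numpy as np

def detect_moving_target(P_prime, Q, factor=4.0):
    """Flag the points of Q whose registration residual reaches the
    target threshold d_cr = 4 * d_av; those points belong to the
    moving target (comparison result 1), the rest do not (result 0)."""
    d = np.linalg.norm(P_prime - Q, axis=1)
    d_cr = factor * d.mean()        # target threshold d_cr = 4 d_av
    mask = d >= d_cr                # per-distance comparison result
    return bool(mask.any()), Q[mask]
```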
In the application, if it is determined from the comparison result that no moving target is contained, an emergency instruction is sent to the main control board, and a holding instruction sent by the main control board according to the emergency instruction is received; the holding instruction instructs the robot to maintain its motion state of the previous moment and to keep the laser radar in the collection state so that it continues to collect.

In this embodiment, it should be noted that if no moving target is determined for several consecutive comparisons (three or more times), an adjustment instruction is issued to the main control board, and a pause instruction sent by the main control board according to the adjustment instruction is received; the pause instruction instructs the robot to stop moving, after which the position of the holder where the laser radar is located is adjusted and the laser radar is reset so that it collects again.
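The hold/pause fallback amounts to a small miss counter; the sketch below uses hypothetical instruction names, since the patent does not specify how the instructions are encoded:

```python
MAX_MISSES = 3  # "three or more consecutive times" per the description above

class TrackingSupervisor:
    """Decide which instruction to issue when detection succeeds or fails."""

    def __init__(self):
        self.misses = 0

    def on_detection(self, has_target):
        if has_target:
            self.misses = 0
            return "TRACK"    # normal tracking continues
        self.misses += 1
        if self.misses >= MAX_MISSES:
            return "ADJUST"   # stop, re-aim the holder, reset the lidar
        return "HOLD"         # keep the previous motion state, keep scanning
```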
In the application, after the moving target is determined to be included, the point cloud distance and angle information of the moving target can be extracted, and the moving target is tracked. The method comprises the following specific steps:
step S1061, determining the point cloud of the moving target in the original point cloud data set Q as a target point cloud; in this embodiment, the point cloud belonging to the original point cloud data set Q with the comparison result of 1 may be determined as the target point cloud.
Step S1062, determining the rotation angle of the robot according to the angle of the center of the target point cloud in the target coordinate system.
In this embodiment, first, a center of a target point cloud is selected, and then an angle of the center of the target point cloud in a target coordinate system is calculated, where the angle of the center of the target point cloud in the target coordinate system is a rotation angle of the robot.
And step S1063, calculating the target average distance between the target point cloud and the center of the laser radar, and determining the moving speed of the robot according to the target average distance.
In the method, firstly, the distance between each point cloud in target point clouds and the center of a laser radar is calculated to obtain a plurality of target distances; then, an average value of the plurality of target distances is calculated to obtain the target average distance.
In this embodiment, a target average distance threshold may be preset, and if the target average distance is smaller than the target average distance threshold, the moving speed of the robot is a constant speed set manually; if the target average distance is greater than or equal to the target average distance threshold, the moving speed of the robot is multiple times (for example, 1.5 times) of the artificially set constant speed.
Step S1064, tracking the moving target according to the rotation angle and the moving speed.
In this embodiment, after obtaining the rotation angle and the moving speed, the robot can track the target object according to the moving speed and the rotation angle.
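Steps S1061 to S1064 reduce to a centroid angle and a mean range; a sketch under assumed parameter values (the distance threshold and base speed are illustrative, not taken from the patent):

```python
import numpy as np

def tracking_command(target_points, dist_threshold=1.0, base_speed=0.3):
    """Rotation angle (rad) and moving speed (m/s) toward the target point
    cloud, expressed in the lidar-centered target coordinate system."""
    center = target_points.mean(axis=0)                  # center of the target point cloud
    angle = np.arctan2(center[1], center[0])             # rotation angle of the robot
    d_target = np.linalg.norm(target_points, axis=1).mean()  # target average distance
    speed = base_speed if d_target < dist_threshold else 1.5 * base_speed
    return angle, speed
```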
It should be noted that, because noise exists both in the tracking of the moving target and in the motion process of the robot, Kalman filtering may be used to filter the robot's motion in order to ensure the real-time performance and stability of tracking. The Kalman filtering comprises the following specific steps:
(1) predicting the motion state of the robot at the next moment according to a system state transition equation and the motion state of the robot at the previous moment;
(2) calculating the covariance of the predicted state;
(3) updating the current state according to the measurement result, namely updating the prediction state in (2) by using the detection result of the tracking target;
(4) and (4) calculating the covariance corresponding to the state after the updating in the step (3).
In this embodiment, the Kalman-filtered moving speed and rotation angle of the robot are more accurate, fluctuation is effectively suppressed, and, because the erroneous components caused by noise are filtered out, the motion of the robot is more stable.
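A minimal Kalman filter over the (rotation angle, moving speed) pair, following steps (1) to (4) above; the noise matrices and the identity transition model are assumed illustrative choices, not values given in the patent:

```python
import numpy as np

class SimpleKalman:
    """Predict-update loop for smoothing the robot's motion commands."""

    def __init__(self, dim=2, q=1e-3, r=1e-2):
        self.x = np.zeros(dim)    # state, e.g. [rotation angle, moving speed]
        self.P = np.eye(dim)      # state covariance
        self.F = np.eye(dim)      # state-transition matrix (assumed identity)
        self.Q = q * np.eye(dim)  # process-noise covariance
        self.R = r * np.eye(dim)  # measurement-noise covariance

    def step(self, z):
        x_pred = self.F @ self.x                        # (1) predict next state
        P_pred = self.F @ self.P @ self.F.T + self.Q    # (2) predicted covariance
        K = P_pred @ np.linalg.inv(P_pred + self.R)     # Kalman gain
        self.x = x_pred + K @ (np.asarray(z) - x_pred)  # (3) update with measurement
        self.P = (np.eye(len(self.x)) - K) @ P_pred     # (4) updated covariance
        return self.x
```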
As described above, the present application achieves the following technical effects:
(1) The present application uses a two-dimensional laser radar to extract information about the target object in the environment, which avoids the problem that existing vision-sensor-based tracking methods are easily affected by factors such as external illumination, and greatly saves cost compared with a three-dimensional laser radar;
(2) The present application directly reflects the relative position relationship between the robot and the person and identifies the moving target by registration, solving the technical problem of tracking failure in conventional lidar-based tracking methods caused by the pedestrian moving too far between scans or walking beyond the scanning range of the laser radar;
(3) The present application only needs to extract the target point cloud satisfying the target threshold and process it to obtain the rotation angle and moving speed of the robot. Compared with SVM-training-based target identification, no training is needed, the amount of computation is reduced, the requirement on the upper computer is lowered, and the method can be applied directly in each environment.
Example two: the embodiment of the present invention further provides a device for identifying a moving target of a robot, where the device for identifying a moving target of a robot is mainly used for executing the method for identifying a moving target of a robot provided in the foregoing content of the embodiment of the present invention, and the following describes the device for identifying a moving target of a robot provided in the embodiment of the present invention in detail.
In this embodiment, fig. 2 is an overall appearance display diagram of the robot, which includes a laser radar, a PC, and a mobile robot platform. Fig. 3 is a schematic diagram showing the connection of the internal components of the robot. The upper computer of the robot is a PC equipped with a ROS platform; the ROS platform adopts a distributed architecture in which different parts of the robot work as Nodes, each Node operating independently while exchanging data, and the nodes communicate with each other through topics, services, and the like. The laser radar is one of the working nodes of the ROS and is used to collect tracking target information in real time and feed it back to the upper computer. The main control board is an Arduino-type main control board whose communication with the ROS has been experimentally verified, and Arduino can become a standard working node of the ROS by calling the rosserial function package. The main control board directly controls a lower computer containing the motor drive and the motors, realizing the motion of the robot. The data processing node in the upper computer processes the data using the above method for identifying a moving target of a robot to obtain the target point cloud, sends the information of the obtained target point cloud to the main control board, which sends corresponding motion instructions to the lower computer (motors), so that the robot can track the moving target.
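As one hedged illustration of how the lidar node might hand scans to the data processing node, here is a minimal rospy subscriber; the /scan topic name and LaserScan message are the common ROS defaults, assumed rather than specified by the patent:

```python
#!/usr/bin/env python
import numpy as np
import rospy
from sensor_msgs.msg import LaserScan

def scan_callback(scan):
    # A LaserScan carries ranges rho_i over a uniform angle grid theta_i.
    theta = scan.angle_min + np.arange(len(scan.ranges)) * scan.angle_increment
    rho = np.asarray(scan.ranges)
    points = np.column_stack((rho * np.cos(theta), rho * np.sin(theta)))
    rospy.loginfo("received %d points", len(points))
    # ...hand `points` to the registration / detection pipeline...

if __name__ == "__main__":
    rospy.init_node("moving_target_listener")  # one ROS working node
    rospy.Subscriber("/scan", LaserScan, scan_callback)
    rospy.spin()
```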
Fig. 4 is a schematic diagram of a device for identifying a moving object of a robot according to an embodiment of the present invention. As shown in fig. 4, the device mainly includes an acquisition unit 10, a registration unit 20, and an identification unit 30, wherein:
the acquisition unit 10 is used for acquiring the environment where the robot is located through the laser radar to obtain a plurality of groups of original point cloud data sets; the original point cloud data set is used for representing coordinate information of a target object in the environment in a target coordinate system, and the target coordinate system is a coordinate system established based on the center of the laser radar;
the registration unit 20 is configured to register point cloud data in two sets of original point cloud data sets that are arbitrarily adjacent in the multiple sets of original point cloud data sets according to the collection order of the multiple sets of original point cloud data sets, with an invariant in the original point cloud data sets as a registration condition, so as to obtain two sets of original point cloud data sets that are arbitrarily adjacent after registration;
and the identification unit 30 is configured to calculate an average distance between any two adjacent sets of original point cloud data sets after registration, and determine whether the environment where the robot is located includes a moving target according to the average distance, wherein if it is determined that the robot includes the moving target, the point cloud distance and angle information of the moving target are extracted, and the moving target is tracked.
In the embodiment of the invention, firstly, the environment of the robot is collected through the laser radar to obtain a plurality of groups of original point cloud data sets; then, registering the point cloud data in any two adjacent original point cloud data sets in the multiple groups of original point cloud data sets according to the acquisition sequence of the multiple groups of original point cloud data sets by taking the invariants in the original point cloud data sets as registration conditions to obtain the two adjacent original point cloud data sets after registration; finally, calculating the average distance between any two adjacent groups of original point cloud data sets after registration; and determining whether the environment of the robot contains a moving target or not according to the average distance, wherein if the environment of the robot contains the moving target, extracting the point cloud distance and angle information of the moving target, and tracking the moving target. In this embodiment, whether the environment where the robot is located includes a moving target can be determined according to the average distance between any two adjacent sets of original point cloud data sets after registration, the requirement on an upper computer is reduced, the scanning capability of the laser radar is applied to the greatest extent, and the technical problem of tracking failure caused by the fact that the moving distance of pedestrians and the fact that the pedestrians walk beyond the scanning range of the laser radar in the existing tracking method based on the laser radar are solved.
Optionally, the acquisition unit 10 is configured to: collecting the environment where the robot is located through the laser radar to obtain a plurality of groups of original information data sets, wherein the original information data sets comprise: a distance of a target object within the environment from the lidar center, an angle of the target object within the environment in the target coordinate system; and converting the multiple groups of original information data sets into the multiple groups of original point cloud data sets according to the conversion relation between the polar coordinate system and the Cartesian coordinate system.
Optionally, the registration unit 20 is configured to: register the point cloud data in any two adjacent sets of original point cloud data sets to obtain a transformation formula, expressed as Q = RP + T, where Q and P are the two adjacent original point cloud data sets, the original point cloud data set Q is acquired after the original point cloud data set P, R is a rotation parameter, and T is a translation parameter; and transform the original point cloud data set P according to the transformation formula to obtain a transformed original point cloud data set P', determining the transformed original point cloud data set P' and the original point cloud data set Q as the two adjacent original point cloud data sets after registration.
Optionally, the identification unit 30 is configured to: calculating the distance between the matched point clouds in any two adjacent sets of original point cloud data sets after registration to obtain a plurality of distances; and calculating the average value among the plurality of distances to obtain the average distance.
Optionally, the identification unit 30 is further configured to: determining a target threshold value according to the average distance; comparing each distance in the plurality of distances with the target threshold value to obtain a comparison result; and if the target distance which is greater than or equal to the target threshold value is determined to be contained in the plurality of distances according to the comparison result, determining that the moving target is contained, wherein the point cloud corresponding to the target distance in the original point cloud data set is the point cloud to which the moving target belongs.
Optionally, the identification unit 30 is further configured to: determining the point cloud of the moving target in the original point cloud data set Q as a target point cloud; determining the rotation angle of the robot according to the angle of the center of the target point cloud in the target coordinate system; calculating the target average distance between the target point cloud and the center of the laser radar, and determining the moving speed of the robot according to the target average distance; and tracking the moving target according to the rotation angle and the moving speed.
Optionally, the identification unit 30 is further configured to: calculating the distance between each point cloud in the target point cloud and the center of the laser radar to obtain a plurality of target distances; and calculating the average value of the plurality of target distances to obtain the target average distance.
Optionally, the identification unit 30 is further configured to: if the mobile target is determined not to be contained, an emergency instruction is sent to the main control board; and receiving a holding instruction sent by the main control board according to the emergency instruction, wherein the holding instruction is used for indicating the robot to execute the motion state of the robot at the previous moment, and keeping the laser radar in a collection state so as to enable the laser radar to continue to collect.
Optionally, the apparatus is further configured to provide intelligent warehousing space for the moving target. In this embodiment, a storage box is placed on top of the robot; a schematic view of the storage box is shown in fig. 5. The storage box has a mechanical structure such that, when the robot arrives at a destination, it can automatically pop out the corresponding compartment according to the worker's requirements, providing the parts (screws and nuts), tools (wrenches and pliers), and the like that the worker needs. The device provided by the embodiment of the present invention has the same implementation principle as the method embodiment; for brevity, reference may be made to the corresponding content in the method embodiment for anything not mentioned in the device embodiment.
The embodiment of the invention not only produces the technical effects of the method embodiment, but also produces the following technical effects:
The device has an intelligent warehousing function: the robot can follow workers and convey the parts, tools, and the like they require to the destination, greatly reducing their burden.
Example three:
referring to fig. 6, an embodiment of the present invention further provides an apparatus 100 for identifying a moving target of a robot, including: a processor 40, a memory 41, a bus 42 and a communication interface 43, wherein the processor 40, the communication interface 43 and the memory 41 are connected through the bus 42; the processor 40 is arranged to execute executable modules, such as computer programs, stored in the memory 41.
The memory 41 may include a high-speed Random Access Memory (RAM) and may also include a non-volatile memory, such as at least one disk memory. The communication connection between the network elements of the system and at least one other network element is realized through at least one communication interface 43 (wired or wireless), using the Internet, a wide area network, a local area network, a metropolitan area network, etc.
The bus 42 may be an ISA bus, PCI bus, EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in FIG. 6, but that does not indicate only one bus or one type of bus.
The memory 41 is used for storing a program, the processor 40 executes the program after receiving an execution instruction, and the method executed by the apparatus defined by the flow process disclosed in any of the foregoing embodiments of the present invention may be applied to the processor 40, or implemented by the processor 40.
The processor 40 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or by instructions in the form of software in the processor 40. The processor 40 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The various methods, steps, and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, and so on. The steps of the method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in RAM, flash memory, ROM, PROM or EPROM, registers, or other storage media well known in the art. The storage medium is located in the memory 41, and the processor 40 reads the information in the memory 41 and completes the steps of the method in combination with its hardware.
In addition, in the description of the embodiments of the present invention, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on this understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or some of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above embodiments are merely specific embodiments of the present invention, used to illustrate rather than limit its technical solutions, and the protection scope of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that anyone familiar with the art may still modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions of some technical features within the technical scope of the present disclosure; such modifications, changes, or substitutions do not cause the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present invention and shall all be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A method for identifying a moving target of a robot, the method being applied to a robot in which a laser radar is provided, the method comprising:
collecting data on the environment in which the robot is located through the laser radar to obtain a plurality of groups of original point cloud data sets, wherein the original point cloud data sets are used for representing coordinate information of target objects in the environment in a target coordinate system, and the target coordinate system is a coordinate system established based on the center of the laser radar;
registering, according to the acquisition order of the plurality of groups of original point cloud data sets and with the invariants in the original point cloud data sets as the registration conditions, the point cloud data in any two adjacent groups of original point cloud data sets among the plurality of groups, to obtain the two adjacent groups of original point cloud data sets after registration;
calculating an average distance between any two adjacent groups of original point cloud data sets after registration; and determining, according to the average distance, whether the environment in which the robot is located contains a moving target, wherein if a moving target is contained, point cloud distance and angle information of the moving target is extracted and the moving target is tracked.
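To make the claimed steps concrete for the reader, the following is a minimal Python sketch of the claim-1 pipeline; it is an illustration only, not the patented implementation. The helper functions estimate_rigid_transform and moving_target_mask are hypothetical names, sketched under claims 2 to 4 below, and both assume the two scans are already matched point-for-point.

import numpy as np

def detect_moving_target(P, Q):
    # P is the earlier scan, Q the later one; both are (N, 2) arrays of
    # matched points in the lidar-centred target coordinate system.
    R, T = estimate_rigid_transform(P, Q)        # registration step (claim 4)
    P_aligned = (R @ P.T).T + T                  # apply Q = RP + T to P
    mask, _, _ = moving_target_mask(P_aligned, Q)  # claims 2 and 3
    return Q[mask] if mask.any() else None       # moving-target point cloud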
2. The method of claim 1, wherein calculating the average distance between any two adjacent groups of original point cloud data sets after registration comprises:
calculating the distances between the matched point clouds in any two adjacent groups of original point cloud data sets after registration to obtain a plurality of distances;
and calculating the average value of the plurality of distances to obtain the average distance.
3. The method of claim 2, wherein determining, according to the average distance, whether the environment in which the robot is located contains a moving target comprises:
determining a target threshold value according to the average distance;
comparing each distance in the plurality of distances with the target threshold value to obtain a comparison result;
and if it is determined, according to the comparison result, that the plurality of distances contain a target distance greater than or equal to the target threshold value, determining that a moving target is contained, wherein the point cloud corresponding to the target distance in the original point cloud data set is the point cloud to which the moving target belongs.
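As a sketch of how claims 2 and 3 might be realised, the snippet below computes one distance per matched point pair, averages them, and flags points at or above a threshold derived from the average. The scale factor is an assumption: the claims say only that the target threshold is determined from the average distance, not how.

import numpy as np

def moving_target_mask(P_aligned, Q, scale=2.0):
    # One Euclidean distance per matched point pair (claim 2).
    dists = np.linalg.norm(P_aligned - Q, axis=1)
    average_distance = dists.mean()
    # Target threshold derived from the average distance (claim 3);
    # the factor `scale` is a placeholder, not specified in the patent.
    threshold = scale * average_distance
    return dists >= threshold, dists, threshold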
4. The method of claim 1, wherein registering, according to the acquisition order of the plurality of groups of original point cloud data sets and with the invariants in the original point cloud data sets as the registration conditions, the point cloud data in any two adjacent groups of original point cloud data sets comprises:
registering the point cloud data in any two adjacent groups of original point cloud data sets to obtain a transformation formula, expressed as Q = RP + T, wherein Q and P are the any two adjacent groups of original point cloud data sets, the original point cloud data set Q is acquired after the original point cloud data set P, R is a rotation parameter, and T is a translation parameter;
and transforming the original point cloud data set P according to the transformation formula to obtain a transformed original point cloud data set P', and determining the transformed original point cloud data set P' and the original point cloud data set Q as the any two adjacent groups of original point cloud data sets after registration.
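Claim 4 does not fix a registration algorithm; a common closed-form estimator for R and T from matched point pairs is the Kabsch/SVD method, sketched below purely as a stand-in. The patent registers on the invariants (static structure) of the scans, so this sketch assumes the point pairs passed in belong to those invariants.

import numpy as np

def estimate_rigid_transform(P, Q):
    # Least-squares estimate of R and T in Q = RP + T from matched 2-D
    # points, via the Kabsch/SVD method.
    mu_p, mu_q = P.mean(axis=0), Q.mean(axis=0)
    H = (P - mu_p).T @ (Q - mu_q)      # cross-covariance of the centred sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # repair an accidental reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = mu_q - R @ mu_p
    return R, T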
5. The method of claim 1, wherein collecting data on the environment in which the robot is located through the laser radar to obtain the plurality of groups of original point cloud data sets comprises:
collecting data on the environment in which the robot is located through the laser radar to obtain a plurality of groups of original information data sets, wherein the original information data sets comprise: the distance of a target object in the environment from the center of the laser radar, and the angle of the target object in the environment in the target coordinate system;
and converting the plurality of groups of original information data sets into the plurality of groups of original point cloud data sets according to the conversion relationship between a polar coordinate system and a Cartesian coordinate system.
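The conversion recited in claim 5 is the standard polar-to-Cartesian change of coordinates about the laser radar center; a minimal sketch, with angles assumed to be in radians:

import numpy as np

def scan_to_point_cloud(distances, angles):
    # Each lidar return is (distance, angle) in polar coordinates about
    # the laser radar center; convert to x/y in the target frame.
    d = np.asarray(distances, dtype=float)
    a = np.asarray(angles, dtype=float)
    return np.column_stack((d * np.cos(a), d * np.sin(a)))  # (N, 2) cloud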
6. The method of claim 3, wherein extracting the point cloud distance and angle information of the moving target and tracking the moving target comprises:
determining the point cloud of the moving target in the original point cloud data set Q as a target point cloud;
determining the rotation angle of the robot according to the angle of the center of the target point cloud in the target coordinate system;
calculating the target average distance between the target point cloud and the center of the laser radar, and determining the moving speed of the robot according to the target average distance;
and tracking the moving target according to the rotation angle and the moving speed.
7. The method of claim 6, wherein calculating the target average distance between the target point cloud and the center of the laser radar comprises:
calculating the distance between each point in the target point cloud and the center of the laser radar to obtain a plurality of target distances;
and calculating the average value of the plurality of target distances to obtain the target average distance.
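A sketch of claims 6 and 7: the rotation angle follows from the angular position of the target point cloud's center, and the target average distance is the mean range of its points from the laser radar center (the origin of the target frame). The mapping from distance to moving speed is an assumed placeholder, since the claims leave it open.

import numpy as np

def tracking_command(target_points):
    # target_points: (N, 2) point cloud of the moving target in the target frame.
    centre = target_points.mean(axis=0)
    rotation_angle = np.arctan2(centre[1], centre[0])                   # claim 6
    target_avg_distance = np.linalg.norm(target_points, axis=1).mean()  # claim 7
    speed = 0.5 * target_avg_distance   # hypothetical proportional speed law
    return rotation_angle, speed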
8. The method of claim 1, further comprising:
if it is determined that no moving target is contained, sending an emergency instruction to a main control board;
and receiving a holding instruction sent by the main control board according to the emergency instruction, wherein the holding instruction is used for instructing the robot to maintain its motion state of the previous moment and to keep the laser radar in a collecting state, so that the laser radar continues collecting.
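Since claim 10 places the method on a ROS platform, the claim-8 exchange with the main control board can be pictured as two ROS topics; the node, topic names, and message contents below are invented for the sketch and carry no weight.

import rospy
from std_msgs.msg import String

def on_holding_instruction(msg):
    # Keep the previous motion state and leave the lidar collecting.
    rospy.loginfo("holding instruction from main control board: %s", msg.data)

rospy.init_node("moving_target_monitor")
emergency_pub = rospy.Publisher("/emergency", String, queue_size=1)
rospy.Subscriber("/holding_instruction", String, on_holding_instruction)

def report_no_moving_target():
    # Claim-8 step: no moving target found, so notify the main control
    # board and await its holding instruction on the other topic.
    emergency_pub.publish(String(data="no_moving_target"))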
9. An apparatus for identifying a moving target of a robot, comprising:
an acquisition unit, configured to collect data on the environment in which the robot is located through a laser radar to obtain a plurality of groups of original point cloud data sets, wherein the original point cloud data sets are used for representing coordinate information of target objects in the environment in a target coordinate system, and the target coordinate system is a coordinate system established based on the center of the laser radar;
a registration unit, configured to register, according to the acquisition order of the plurality of groups of original point cloud data sets and with the invariants in the original point cloud data sets as the registration conditions, the point cloud data in any two adjacent groups of original point cloud data sets, to obtain the two adjacent groups of original point cloud data sets after registration;
and an identification unit, configured to calculate an average distance between any two adjacent groups of original point cloud data sets after registration and to determine, according to the average distance, whether the environment in which the robot is located contains a moving target, wherein if a moving target is contained, point cloud distance and angle information of the moving target is extracted and the moving target is tracked.
10. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the processor hosts a ROS platform and, when executing the computer program, implements the steps of the method of any one of claims 1 to 7.
CN202010666568.2A 2020-07-10 2020-07-10 Method and device for identifying moving target of robot and electronic equipment Pending CN111813120A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010666568.2A CN111813120A (en) 2020-07-10 2020-07-10 Method and device for identifying moving target of robot and electronic equipment

Publications (1)

Publication Number Publication Date
CN111813120A 2020-10-23

Family

ID=72842790

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010666568.2A Pending CN111813120A (en) 2020-07-10 2020-07-10 Method and device for identifying moving target of robot and electronic equipment

Country Status (1)

Country Link
CN (1) CN111813120A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108664841A (en) * 2017-03-27 2018-10-16 郑州宇通客车股份有限公司 A kind of sound state object recognition methods and device based on laser point cloud
CN110533055A (en) * 2018-05-25 2019-12-03 北京京东尚科信息技术有限公司 A kind for the treatment of method and apparatus of point cloud data
CN110018489A (en) * 2019-04-25 2019-07-16 上海蔚来汽车有限公司 Target tracking method, device and controller and storage medium based on laser radar
CN110717918A (en) * 2019-10-11 2020-01-21 北京百度网讯科技有限公司 Pedestrian detection method and device
CN111239766A (en) * 2019-12-27 2020-06-05 北京航天控制仪器研究所 Water surface multi-target rapid identification and tracking method based on laser radar
CN111337941A (en) * 2020-03-18 2020-06-26 中国科学技术大学 Dynamic obstacle tracking method based on sparse laser radar data

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LIU Ying: "Research on Deformation Information Extraction and Reliability Evaluation Based on TLS", China Master's Theses Full-text Database, Basic Sciences, 15 December 2018 (2018-12-15) *
SUN Cheng: "Improvement of 3D Laser Point Cloud Feature Extraction and Registration Algorithms", China Master's Theses Full-text Database, Basic Sciences, 15 January 2020 (2020-01-15) *
LI Qiang: "Point Cloud Registration Algorithm Based on Multi-constraint Octree and Multiple Features", China Master's Theses Full-text Database, Information Science and Technology, 15 August 2019 (2019-08-15) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112465795A (en) * 2020-12-09 2021-03-09 广州科莱瑞迪医疗器材股份有限公司 Body surface tracking method and device
CN112546460A (en) * 2020-12-09 2021-03-26 广州科莱瑞迪医疗器材股份有限公司 Body surface tracking system
CN112950708A (en) * 2021-02-05 2021-06-11 深圳市优必选科技股份有限公司 Positioning method, positioning device and robot
CN112950708B (en) * 2021-02-05 2023-12-15 深圳市优必选科技股份有限公司 Positioning method, positioning device and robot
CN113268066A (en) * 2021-07-19 2021-08-17 福勤智能科技(昆山)有限公司 Method and device for detecting target object, computer equipment and storage medium
CN113268066B (en) * 2021-07-19 2021-11-12 福勤智能科技(昆山)有限公司 Method and device for detecting target object, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
CN111813120A (en) Method and device for identifying moving target of robot and electronic equipment
CN108253958B (en) Robot real-time positioning method in sparse environment
EP3686703B1 (en) Control method, apparatus and system for robot, and applicable robot
JP7167397B2 (en) Method and apparatus for processing point cloud data
US9083856B2 (en) Vehicle speed measurement method and system utilizing a single image capturing unit
WO2018028649A1 (en) Mobile device, positioning method therefor, and computer storage medium
EP3712853A1 (en) Positioning method and system, and suitable robot
US8755562B2 (en) Estimation apparatus, control method thereof, and program
WO2019129255A1 (en) Target tracking method and device
CN109325456B (en) Target identification method, target identification device, target identification equipment and storage medium
CN112179353B (en) Positioning method and device of self-moving robot, robot and readable storage medium
CN110073362A (en) System and method for lane markings detection
JP6681682B2 (en) Mobile object measuring system and mobile object measuring method
WO2021253245A1 (en) Method and device for identifying vehicle lane changing tendency
CN110110678B (en) Method and apparatus for determining road boundary, storage medium, and electronic apparatus
CN112180931A (en) Sweeping path planning method and device of sweeper and readable storage medium
CN107992044A (en) A kind of autonomous traveling control method of robot and robot system of independently advancing
CN111048208A (en) Indoor solitary old man walking health detection method based on laser radar
CN112684430A (en) Indoor old person walking health detection method and system, storage medium and terminal
Sheng et al. Mobile robot localization and map building based on laser ranging and PTAM
CN110636248A (en) Target tracking method and device
Batavia et al. Obstacle detection in smooth high curvature terrain
Carozza et al. Image-based localization for an indoor VR/AR construction training system
KR101594113B1 (en) Apparatus and Method for tracking image patch in consideration of scale
CN112683266A (en) Robot and navigation method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination