CN111612841A - Target positioning method and device, mobile robot and readable storage medium

Target positioning method and device, mobile robot and readable storage medium

Info

Publication number: CN111612841A (application CN202010574730.8A); granted publication: CN111612841B
Authority: CN (China)
Prior art keywords: point cloud, to be registered, template, point, target
Legal status: Granted; currently active
Original language: Chinese (zh)
Inventor: 文乃武
Original and current assignee: Shanghai Mumu Jucong Robot Technology Co., Ltd.
Priority and filing date: 2020-06-22
Publication of CN111612841A: 2020-09-01; publication of granted CN111612841B: 2023-07-14

Classifications

    • G06T 7/74: Image analysis; determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G06T 7/337: Image analysis; determination of transform parameters for the alignment of images (image registration) using feature-based methods involving reference images or patches
    • G06T 2207/10024: Image acquisition modality; color image
    • G06T 2207/10028: Image acquisition modality; range image, depth image, 3D point clouds

Abstract

The invention provides a target positioning method and device, a mobile robot, and a readable storage medium. The target positioning method comprises the following steps: acquiring a color image and a depth image containing a target to be positioned; identifying the position area of the target to be positioned in the color image; determining a point cloud to be registered in the depth image according to the position area of the target to be positioned in the color image; partitioning the template point cloud according to the point cloud to be registered to obtain a template point cloud subset; and registering the point cloud to be registered in the depth image with the template point cloud subset based on a normal-weighted iterative closest point algorithm to complete the positioning of the target. This solves the prior-art problem that suspended objects are difficult to detect, which causes mobile robots to collide with obstacles; it improves the accuracy of target positioning while also improving the precision and convergence speed of registration and saving computation time.

Description

Target positioning method and device, mobile robot and readable storage medium
Technical Field
The present invention relates to the field of obstacle positioning technologies, and in particular, to a target positioning method and apparatus, a mobile robot, and a readable storage medium.
Background
In recent years, with the rapid development of mobile robot technology, more and more mobile robots are playing important roles in many fields of society. With the arrival of the Artificial Intelligence (AI) era, robotics has become an important marker of high-tech development, with examples such as home service robots, shopping guide robots, floor-sweeping robots, and substation inspection robots. However, as the demands on robot intelligence grow, almost all mobile robots face the problem of how to perform accurate positioning and obstacle avoidance.
At present, mobile robots mainly rely on laser radar (lidar) to avoid obstacles while moving, and in the prior art the distance information of an obstacle must be acquired for positioning. However, a 2D lidar can only detect information in a single plane, and the lidar is usually mounted low on the mobile robot, so suspended objects are easily missed and the mobile robot may collide with such obstacles.
Disclosure of Invention
The invention aims to provide a target positioning method and device, a mobile robot, and a readable storage medium, which effectively solve the technical problem in the prior art that a mobile robot positions obstacles inaccurately.
The technical solution provided by the invention is as follows:
A target positioning method, comprising:
acquiring a color image and a depth image containing a target to be positioned;
identifying a position area of the target to be positioned in the color image;
determining a point cloud to be registered in the depth map according to the position area of the target to be positioned in the color map;
partitioning the template point cloud according to the point cloud to be registered to obtain a template point cloud subset;
and registering the point cloud to be registered in the depth map and the template point cloud subset based on a normal weighted iterative closest point algorithm to complete the positioning of the target.
Further preferably, registering the point cloud to be registered in the depth map with the template point cloud subset based on the normal-weighted iterative closest point algorithm comprises:
fixing a transformation matrix between the template point cloud subset and the point cloud to be registered, sequentially finding, in the template point cloud subset, the point closest to each point in the point cloud to be registered, and calculating the squared distance;
sorting the calculated squared distances and updating the overlap rate;
according to the updated overlap rate, pruning the points in the point cloud to be registered whose distance is larger than a preset distance threshold;
updating the transformation matrix according to the pruned points and calculating the mean square error based on a preset objective function;
and judging whether to stop the loop iteration according to the mean square error.
Further preferably, in updating the transformation matrix according to the pruned points and calculating the mean square error based on a preset objective function, the preset objective function f is:
f(R, t) = (1/m_r) Σ_{i=1}^{m_r} w_i ‖R·p_i + t − q_{c(i)}‖²
wherein the template point cloud Q = {q_1, q_2, ..., q_m}, m represents the number of points in the template point cloud, and the template point cloud subset Q_1 ⊆ Q; the point cloud to be registered P = {p_1, p_2, ..., p_n}, n represents the number of points in the point cloud to be registered; w_i represents the weight of the i-th point p_i in the pruned point cloud to be registered; v_i represents the included angle between the normal vector n_i of the i-th point p_i in the pruned point cloud to be registered and the normal vector n_{c(i)} of the corresponding point q_{c(i)} in the template point cloud subset, and w_i = cos v_i; m_r represents the number of points in the pruned point cloud to be registered; R denotes the rotation, and t denotes the translation.
Further preferably, in identifying the position area of the target to be positioned in the color image, a classifier created based on an SSD (Single Shot MultiBox Detector) object detection algorithm is used to identify the position area where the target to be positioned is located; and/or,
partitioning the template point cloud according to the point cloud to be registered to obtain a template point cloud subset, wherein the partitioning comprises the following steps:
selecting key points of point cloud to be registered and template point cloud;
respectively calculating PFH descriptors of key points in the point cloud to be registered and the template point cloud;
carrying out feature matching on the PFH descriptor of the point cloud to be registered and the PFH descriptor of the template point cloud;
selecting a preset number of matching point pairs with the highest matching degree;
and obtaining a template point cloud subset according to the selected matching points.
The present invention also provides a target positioning device, comprising:
the image acquisition module is used for acquiring a color image and a depth image containing a target to be positioned;
the target identification module is used for identifying a position area of the target to be positioned in the color image;
the point cloud to be registered determining module is used for determining a point cloud to be registered in the depth map according to the position area of the target to be positioned in the color map;
the template partitioning module is used for partitioning the template point cloud according to the point cloud to be registered to obtain a template point cloud subset;
and the point cloud registration module is used for registering the point cloud to be registered in the depth map and the template point cloud subset based on a normal weighted iterative closest point algorithm to complete the positioning of the target.
Further preferably, the point cloud registration module comprises:
the first calculation unit is used for fixing a transformation matrix between the template point cloud subset and the point cloud to be registered, sequentially finding, in the template point cloud subset, the point closest to each point in the point cloud to be registered, and calculating the squared distance; for sorting the calculated squared distances and updating the overlap rate; and for updating the transformation matrix according to the points pruned by the pruning unit and calculating the mean square error based on a preset objective function;
the pruning unit is used for pruning, according to the overlap rate updated by the first calculation unit, the points in the point cloud to be registered whose distance is larger than a preset distance threshold;
and the judging unit is used for judging whether to stop the loop iteration according to the mean square error calculated by the first calculation unit.
Further preferably, in the first calculation unit, the preset objective function f used for calculating the mean square error is:
f(R, t) = (1/m_r) Σ_{i=1}^{m_r} w_i ‖R·p_i + t − q_{c(i)}‖²
wherein the template point cloud Q = {q_1, q_2, ..., q_m}, m represents the number of points in the template point cloud, and the template point cloud subset Q_1 ⊆ Q; the point cloud to be registered P = {p_1, p_2, ..., p_n}, n represents the number of points in the point cloud to be registered; w_i represents the weight of the i-th point p_i in the pruned point cloud to be registered; v_i represents the included angle between the normal vector n_i of the i-th point p_i in the pruned point cloud to be registered and the normal vector n_{c(i)} of the corresponding point q_{c(i)} in the template point cloud subset, and w_i = cos v_i; m_r represents the number of points in the pruned point cloud to be registered; R denotes the rotation, and t denotes the translation.
Further preferably, the target identification module uses a classifier created based on an SSD object detection algorithm to identify the position area where the target to be positioned is located in the color image; and/or,
the template partitioning module comprises:
the key point selecting unit is used for selecting key points of the point cloud to be registered and the template point cloud;
the second calculation unit is used for respectively calculating PFH descriptors of key points in the point cloud to be registered and the template point cloud;
the feature matching unit is used for feature-matching the PFH descriptors of the point cloud to be registered against the PFH descriptors of the template point cloud;
and the matching point selection unit is used for selecting the preset number of matching point pairs with the highest matching degree from the feature matching unit and obtaining the template point cloud subset according to the selected matching points.
The invention also provides a mobile robot, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the above target positioning method when executing the computer program.
The invention also provides a computer-readable storage medium in which a computer program is stored, the computer program, when executed by a processor, implementing the steps of the above target positioning method.
In the target positioning method and device, the mobile robot, and the readable storage medium, after a color image and a depth image containing the target to be positioned are obtained from an RGBD camera, the position area of the target to be positioned is identified in the color image, and the point cloud to be registered is obtained from the depth image; then, based on an improved normal-weighted iterative closest point algorithm, the point cloud to be registered in the depth map is registered with the template point cloud subset to complete the positioning of the target. This solves the prior-art problem that suspended objects are difficult to detect, which causes the mobile robot to collide with obstacles; it improves the accuracy of target positioning while also improving the precision and convergence speed of registration and saving computation time. In addition, during registration, the overlap rate of the two point clouds is computed in each iteration, and the pruning operation improves the accuracy of solving the transformation matrix, further improving the accuracy of target positioning.
Drawings
The foregoing features, technical features, and advantages of the invention, and their implementation, are further described in the following detailed description of preferred embodiments in a clearly understandable manner with reference to the accompanying drawings.
FIG. 1 is a schematic flowchart of an embodiment of a target positioning method according to the present invention;
FIG. 2 is a schematic structural diagram of an embodiment of a target positioning device according to the present invention;
FIG. 3 is a schematic structural diagram of a mobile robot according to the present invention.
Description of reference numerals:
100: target positioning device; 110: image acquisition module; 120: target identification module; 130: point cloud to be registered determining module; 140: template partitioning module; 150: point cloud registration module.
Detailed Description
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, specific embodiments of the present invention are described below with reference to the accompanying drawings. It is apparent that the drawings described below are merely examples of the invention, and that those of ordinary skill in the art can derive other drawings and embodiments from them without inventive effort.
In a first embodiment of the present invention, as shown in FIG. 1, a target positioning method includes: S10, acquiring a color image and a depth image containing a target to be positioned; S20, identifying the position area of the target to be positioned in the color image; S30, determining a point cloud to be registered in the depth map according to the position area of the target to be positioned in the color image; S40, partitioning the template point cloud according to the point cloud to be registered to obtain a template point cloud subset; and S50, registering the point cloud to be registered in the depth map with the template point cloud subset based on a normal-weighted iterative closest point algorithm to complete the positioning of the target.
In this embodiment, when the target needs to be positioned, an RGBD camera captures images in the direction of the target, obtaining a color image frame and a depth image frame at the same time; then the position area of the target in the color image is identified. Because the color image and the depth image correspond pixel-to-pixel, once the position area of the target in the color image is obtained, the point cloud to be registered can be extracted from the corresponding region of the depth image for the subsequent point cloud registration, thereby positioning the target.
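To make the pixel-level correspondence concrete, the following is a minimal sketch (not part of the patent) of converting the depth pixels inside the detected region into the point cloud to be registered. It assumes a pinhole camera model with known intrinsics fx, fy, cx, cy and a sensor-specific depth scale; the function name and parameters are illustrative:

```python
import numpy as np

def depth_roi_to_point_cloud(depth, box, fx, fy, cx, cy, depth_scale=0.001):
    """Back-project the depth pixels inside a detection box into 3D points.

    depth      : HxW array of raw depth readings
    box        : (u_min, v_min, u_max, v_max) rectangle found in the color image
    fx, fy     : focal lengths in pixels; cx, cy: principal point
    depth_scale: factor converting raw depth units to meters (sensor dependent)
    """
    u_min, v_min, u_max, v_max = box
    v, u = np.mgrid[v_min:v_max, u_min:u_max]          # pixel grid inside the box
    z = depth[v_min:v_max, u_min:u_max] * depth_scale  # metric depth
    valid = z > 0                                      # drop missing depth readings
    x = (u[valid] - cx) * z[valid] / fx                # pinhole back-projection
    y = (v[valid] - cy) * z[valid] / fy
    return np.column_stack((x, y, z[valid]))           # (N, 3) point cloud
```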
Before registration, the area where the target to be positioned is located is modeled in three dimensions in advance, and models of the various types of objects in the corresponding area are reconstructed, so that in the subsequent steps the position information of the target can be determined from the position of the registered template point cloud in the three-dimensional model. The three-dimensional modeling method is not specifically limited; for example, in one example, the open-source software ElasticFusion is selected for three-dimensional modeling, performing dense three-dimensional mapping based on RGBD data to reconstruct the object models.
In this embodiment, step S50 of registering the point cloud to be registered in the depth map with the template point cloud subset based on the normal-weighted Iterative Closest Point (ICP) algorithm includes: S51, fixing the transformation matrix between the template point cloud subset and the point cloud to be registered, sequentially finding, in the template point cloud subset, the point closest to each point in the point cloud to be registered, and calculating the squared distance; sorting the calculated squared distances and updating the overlap rate; S52, according to the updated overlap rate, pruning the points in the point cloud to be registered whose distance is larger than a preset distance threshold; S53, updating the transformation matrix according to the pruned points and calculating the mean square error based on a preset objective function; S54, judging whether to stop the loop iteration according to the mean square error.
In the present embodiment, assume the template point cloud is Q = {q_1, q_2, ..., q_m}, where m represents the number of points in the template point cloud; the point cloud to be registered is P = {p_1, p_2, ..., p_n}, where n represents the number of points in the point cloud to be registered; the normal vectors of the template point cloud subset are N_q, the normal vectors of the point cloud to be registered are N_p, and the template point cloud subset Q_1 ⊆ Q. The registration process is then as follows:
The template point cloud subset Q_1 is obtained by partitioning according to the point cloud to be registered P, an initial transformation matrix T_1 is obtained, and loop iteration begins. During the k-th iteration, the transformation matrix T_k is fixed, and for each point p_i in the point cloud to be registered, the closest point in the template point cloud subset Q_1 is found and its squared distance d_i² = ‖T_k(p_i) − q_{c(i)}‖² is calculated. Thereafter, the squared distances d_i² are sorted in ascending order, the overlap rate r is updated, the first r corresponding point pairs are found, and the points whose point-pair distance is larger than the preset distance threshold are removed. The geometric transformation is then updated to obtain the transformation matrix T_{k+1}, and the mean square error ε_{k+1} is calculated; according to ε_{k+1}, it is judged whether the stopping condition is met. If so, iteration stops and the rotation R, the translation t, and the mean square error are output; otherwise the (k+1)-th iteration begins, looping until the stopping condition is met.
In particular, the initial transformation matrix T_1 may be initialized by Principal Component Analysis (PCA) or obtained in other ways, which is not limited here. In updating the overlap rate r, when the distance between a point pair is smaller than the preset distance threshold (which may be set according to the actual situation, e.g., 1 cm or 3 cm), the pair is judged to overlap; otherwise it is judged not to overlap. The pruning operation in each iteration is likewise performed according to the preset distance threshold: points whose point-pair distance is larger than the preset distance threshold are removed, which improves the accuracy of solving the transformation matrix.
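The PCA-based initialization mentioned above can be realized, for example, by aligning the centroids and principal axes of the two clouds; a minimal sketch under that assumption (the patent does not fix the exact procedure, and PCA axes carry a sign ambiguity, so this only provides a rough T_1 for the iteration to refine):

```python
import numpy as np

def pca_initial_transform(P, Q1):
    """Rough initial alignment: map the principal axes and centroid of P onto Q1.

    P, Q1: (N, 3) and (M, 3) point arrays. Returns a 4x4 homogeneous transform T1.
    """
    mu_p, mu_q = P.mean(axis=0), Q1.mean(axis=0)
    _, _, Vp = np.linalg.svd(P - mu_p)    # rows of Vp: principal axes of P
    _, _, Vq = np.linalg.svd(Q1 - mu_q)   # rows of Vq: principal axes of Q1
    R = Vq.T @ Vp                         # rotate P's axes onto Q1's axes
    if np.linalg.det(R) < 0:              # avoid a reflection
        Vp[-1] *= -1
        R = Vq.T @ Vp
    T1 = np.eye(4)
    T1[:3, :3] = R
    T1[:3, 3] = mu_q - R @ mu_p           # translation aligning the centroids
    return T1
```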
In updating the transformation matrix according to the pruned points and calculating the mean square error based on a preset objective function, the preset objective function f is given by formula (1):
f(R, t) = (1/m_r) Σ_{i=1}^{m_r} w_i ‖R·p_i + t − q_{c(i)}‖²   (1)
wherein w_i represents the weight of the i-th point p_i in the pruned point cloud to be registered; v_i represents the included angle between the normal vector n_i of the i-th point p_i in the pruned point cloud to be registered and the normal vector n_{c(i)} of the corresponding point q_{c(i)} in the template point cloud subset, and w_i = cos v_i; m_r represents the number of points in the pruned point cloud to be registered; R denotes the rotation, and t denotes the translation.
Thus, the mean square error is calculated according to formula (2):
ε_{k+1} = (1/m_r) Σ_{i=1}^{m_r} w_i ‖R_{k+1}·p_i + t_{k+1} − q_{c(i)}‖²   (2)
in the embodiment, a normal vector is introduced into a preset objective function f as a constraint condition, and in the operation process, a small weight is configured for a point with a large included angle in a self-adaptive manner, and a large weight is configured for a point with a small included angle, so that the registration precision is improved; at the same time, v is introducediThe convergence speed of the function is accelerated, and therefore the operation time is saved.
To judge whether iteration should stop, the calculated mean square error is compared with a preset error threshold: if the mean square error calculated in an iteration is smaller than the preset error threshold, the stopping condition is met; otherwise iteration continues. The preset error threshold can be set according to the actual situation, for example to 1 cm or 5 cm.
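Putting the loop together, the iteration described above can be sketched as follows. This is an illustrative reconstruction rather than the patent's implementation: the KD-tree nearest-neighbor search, the weighted SVD (Kabsch) transform update, and the weight w_i = cos v_i are assumptions consistent with the description:

```python
import numpy as np
from scipy.spatial import cKDTree

def normal_weighted_icp(P, Np, Q1, Nq, T1, overlap=0.8, d_max=0.03,
                        eps=0.01, max_iter=50):
    """Trimmed, normal-weighted ICP registering P (normals Np) to Q1 (normals Nq)."""
    T, mse = T1.copy(), np.inf
    tree = cKDTree(Q1)
    for _ in range(max_iter):
        Pt = P @ T[:3, :3].T + T[:3, 3]                     # apply current transform
        d, idx = tree.query(Pt)                             # closest template points
        keep = np.argsort(d ** 2)[: int(overlap * len(P))]  # best-overlapping pairs
        keep = keep[d[keep] <= d_max]                       # prune pairs beyond threshold
        src, dst = Pt[keep], Q1[idx[keep]]
        # Normal weights: small angle between normals -> weight near 1.
        cosv = np.sum((Np[keep] @ T[:3, :3].T) * Nq[idx[keep]], axis=1)
        w = np.clip(cosv, 0.0, 1.0)
        # Weighted Kabsch update of the incremental rigid transform.
        mu_s = (w[:, None] * src).sum(0) / w.sum()
        mu_d = (w[:, None] * dst).sum(0) / w.sum()
        H = (w[:, None] * (src - mu_s)).T @ (dst - mu_d)
        U, _, Vt = np.linalg.svd(H)
        S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ S @ U.T
        dT = np.eye(4)
        dT[:3, :3], dT[:3, 3] = R, mu_d - R @ mu_s
        T = dT @ T                                          # accumulate transform
        mse = np.sum(w * np.sum((src @ R.T + dT[:3, 3] - dst) ** 2, 1)) / w.sum()
        if mse < eps ** 2:                                  # stopping condition met
            break
    return T, mse
```

Here `overlap` plays the role of the updated overlap rate r, while `d_max` and `eps` correspond to the preset distance and error thresholds; all three would be tuned to the scene.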
This embodiment is obtained by modifying the above embodiment. In this embodiment, step S20 identifies the position area of the target to be positioned in the color image by using a classifier created based on an SSD object detection algorithm.
In this embodiment, before the target to be positioned is identified, a classifier is created based on the SSD object detection algorithm and trained on collected, annotated color images used as a training set. During prediction, the SSD object detection algorithm draws a rectangular box around each recognized object, generating a classification label and refined position coordinates for each default box, thereby determining the category of the target object and its extent in the color image. Because the color image and the depth image correspond pixel-to-pixel, once the coordinates of the target's rectangular box in the color image are obtained, the depth information at the same position in the depth image is converted into point cloud data to obtain the point cloud to be registered.
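As one concrete way to realize this detection step, the sketch below uses a pretrained SSD from torchvision in place of the patent's custom-trained classifier (which is trained on annotated color images of the target categories); the model choice and the score threshold are assumptions:

```python
import torch
from torchvision.models.detection import ssd300_vgg16, SSD300_VGG16_Weights

# Stand-in for the patent's classifier: a COCO-pretrained SSD300; in practice
# the network would be fine-tuned on the collected, annotated color images.
model = ssd300_vgg16(weights=SSD300_VGG16_Weights.DEFAULT).eval()

def detect_target(color_img, score_thresh=0.5):
    """Return (label, box) pairs for detections above the confidence threshold.

    color_img: float tensor of shape (3, H, W) with values in [0, 1].
    """
    with torch.no_grad():
        out = model([color_img])[0]                # one rectangular box per detection
    keep = out["scores"] > score_thresh
    return list(zip(out["labels"][keep].tolist(),  # classification labels
                    out["boxes"][keep].tolist()))  # (u_min, v_min, u_max, v_max)
```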
After the point cloud to be registered is obtained, the template point cloud is partitioned based on the point cloud to be registered to obtain the template point cloud subset. First, the point cloud to be registered is filtered (a band-pass filter, a statistical filter, or the like can be used) to remove noise points around the target object. Then, the normals and curvatures of the filtered point cloud to be registered are calculated, and key points of the point cloud to be registered are extracted according to the curvature values. Next, the PFH (Point Feature Histogram) descriptors of the key points are computed (implemented with a GPU-accelerated PFH feature model). The reconstructed three-dimensional model point cloud is processed in the same way to obtain the PFH descriptors of the key points in the template point cloud. The PFH descriptors are then matched, and according to the matching result, point cloud data similar to the point cloud to be registered is segmented out of the three-dimensional model. Key points are selected according to a preset curvature threshold: when the curvature at a point is larger than the curvature threshold, that point is selected as a key point.
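The curvature test just described can be approximated with the "surface variation" of each point's k-nearest-neighbor covariance matrix; a minimal sketch, with the neighborhood size and threshold as assumed parameters:

```python
import numpy as np
from scipy.spatial import cKDTree

def curvature_keypoints(P, k=20, curv_thresh=0.05):
    """Select key points whose local surface variation exceeds a threshold.

    Curvature at a point is approximated by lambda_0 / (lambda_0 + lambda_1 +
    lambda_2), the eigenvalue ratio of its k-nearest-neighbor covariance matrix.
    """
    _, nbrs = cKDTree(P).query(P, k=k)            # k nearest neighbors per point
    keypoints = []
    for i, idx in enumerate(nbrs):
        nb = P[idx] - P[idx].mean(axis=0)         # centered neighborhood
        lam = np.linalg.eigvalsh(nb.T @ nb / k)   # eigenvalues, ascending
        if lam[0] / max(lam.sum(), 1e-12) > curv_thresh:
            keypoints.append(i)                   # curvature above the threshold
    return np.asarray(keypoints)
```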
The specific algorithm is as follows. Assume the template point cloud is Q = {q_1, q_2, ..., q_m}, where m represents the number of points in the template point cloud; the point cloud to be registered is P = {p_1, p_2, ..., p_n}, where n represents the number of points in the point cloud to be registered; and the template point cloud subset Q_1 ⊆ Q. The template point cloud is then partitioned by the following process. First, the key points P_key of the point cloud to be registered P and Q_key of the template point cloud Q are selected. Then the PFH descriptors P_pfh and Q_pfh of the key points P_key and Q_key are calculated respectively. Next, the PFH descriptors P_pfh of the point cloud to be registered are feature-matched against the PFH descriptors Q_pfh of the template point cloud. Finally, the s groups of matching point pairs with the highest matching precision are selected; since PFH descriptors correspond one-to-one with the spatial coordinates of points, the corresponding spatial coordinate points on the template point cloud Q are selected, and a suitable k-neighborhood centered on the center of the selected points forms the template point cloud subset Q_1. When feature-matching P_pfh against Q_pfh, the Euclidean distances between descriptors are calculated and sorted from low to high (a smaller distance means higher matching precision). The number of matching point pairs can be chosen according to the actual situation, for example 10 or 15 pairs; the value of k is determined by the number of points in the point cloud to be registered, selecting the same number of points as the point cloud to be registered contains.
The present invention also provides a target positioning device 100, which, in one embodiment, as shown in FIG. 2, comprises: an image acquisition module 110, configured to acquire a color image and a depth image containing a target to be positioned; a target identification module 120, configured to identify the position area of the target to be positioned in the color image; a point cloud to be registered determining module 130, configured to determine a point cloud to be registered in the depth map according to the position area of the target to be positioned in the color image; a template partitioning module 140, configured to partition the template point cloud according to the point cloud to be registered to obtain a template point cloud subset; and a point cloud registration module 150, configured to register the point cloud to be registered in the depth map with the template point cloud subset based on a normal-weighted iterative closest point algorithm to complete the positioning of the target.
In this embodiment, when the target needs to be positioned, the image acquisition module 110 uses an RGBD camera to capture images in the direction of the target, obtaining a color image frame and a depth image frame at the same time; the target identification module 120 then identifies the position area of the target in the color image. Because the color image and the depth image correspond pixel-to-pixel, after the position area of the target in the color image is obtained, the point cloud to be registered determining module 130 can correspondingly obtain the point cloud to be registered in the depth map for the subsequent point cloud registration, thereby positioning the target.
Before registration, the area where the target to be positioned is located is modeled in three dimensions in advance, and models of the various types of objects in the corresponding area are reconstructed, so that in the subsequent steps the position information of the target can be determined from the position of the registered template point cloud in the three-dimensional model. The three-dimensional modeling method is not specifically limited; for example, in one example, the open-source software ElasticFusion is selected for three-dimensional modeling, performing dense three-dimensional mapping based on RGBD data to reconstruct the object models.
This embodiment is obtained by modifying the above embodiment. In this embodiment, the point cloud registration module 150 includes: a first calculation unit for fixing the transformation matrix between the template point cloud subset and the point cloud to be registered, sequentially finding, in the template point cloud subset, the point closest to each point in the point cloud to be registered, and calculating the squared distance, for sorting the calculated squared distances and updating the overlap rate, and for updating the transformation matrix according to the points pruned by the pruning unit and calculating the mean square error based on a preset objective function; a pruning unit for pruning, according to the overlap rate updated by the first calculation unit, the points in the point cloud to be registered whose distance is larger than the preset distance threshold; and a judging unit for judging whether to stop the loop iteration according to the mean square error calculated by the first calculation unit.
In the present embodiment, assume the template point cloud is Q = {q_1, q_2, ..., q_m}, where m represents the number of points in the template point cloud; the point cloud to be registered is P = {p_1, p_2, ..., p_n}, where n represents the number of points in the point cloud to be registered; the normal vectors of the template point cloud subset are N_q, the normal vectors of the point cloud to be registered are N_p, and the template point cloud subset Q_1 ⊆ Q. The registration process is then as follows:
The template point cloud subset Q_1 is obtained by partitioning according to the point cloud to be registered P, an initial transformation matrix T_1 is obtained, and loop iteration begins. During the k-th iteration, the transformation matrix T_k is fixed; the first calculation unit finds, in the template point cloud subset Q_1, the point closest to each point p_i in the point cloud to be registered and calculates its squared distance d_i² = ‖T_k(p_i) − q_{c(i)}‖²; the squared distances d_i² are then sorted in ascending order, the overlap rate r is updated, and the first r corresponding point pairs are found; the pruning unit removes the points whose point-pair distance is larger than the preset distance threshold; the first calculation unit then updates the geometric transformation to obtain the transformation matrix T_{k+1} and calculates the mean square error ε_{k+1}; and the judging unit judges, according to ε_{k+1}, whether the stopping condition is met. If so, iteration stops and the rotation R, the translation t, and the mean square error are output; otherwise the (k+1)-th iteration begins, looping until the stopping condition is met.
In particular, the initial transformation matrix T_1 may be initialized by Principal Component Analysis (PCA) or obtained in other ways, which is not limited here. In updating the overlap rate r, when the distance between a point pair is smaller than the preset distance threshold (which may be set according to the actual situation, e.g., 1 cm or 3 cm), the pair is judged to overlap; otherwise it is judged not to overlap. The pruning operation in each iteration is likewise performed according to the preset distance threshold: points whose point-pair distance is larger than the preset distance threshold are removed, which improves the accuracy of solving the transformation matrix.
In updating the transformation matrix according to the pruned points and calculating the mean square error based on the preset objective function, the preset objective function f is as shown in formula (1), and the mean square error is thus calculated according to formula (2).
In this embodiment, normal vectors are introduced into the preset objective function f as a constraint: during the computation, points with a large included angle are adaptively given a small weight and points with a small included angle a large weight, which improves the registration precision. At the same time, introducing v_i accelerates the convergence of the function, thereby saving computation time.
To judge whether iteration should stop, the calculated mean square error is compared with a preset error threshold: if the mean square error calculated in an iteration is smaller than the preset error threshold, the stopping condition is met; otherwise iteration continues. The preset error threshold can be set according to the actual situation, for example to 1 cm or 5 cm.
In this embodiment, the target identification module 120 uses a classifier created based on the SSD object detection algorithm to identify the position area where the target to be positioned is located in the color image. The template partitioning module 140 includes: a key point selection unit for selecting the key points of the point cloud to be registered and of the template point cloud; a second calculation unit for respectively calculating the PFH descriptors of the key points in the point cloud to be registered and in the template point cloud; a feature matching unit for feature-matching the PFH descriptors of the point cloud to be registered against the PFH descriptors of the template point cloud; and a matching point selection unit for selecting the preset number of matching point pairs with the highest matching degree from the feature matching unit and obtaining the template point cloud subset according to the selected matching points.
In this embodiment, before the target to be positioned is identified, a classifier is created based on the SSD object detection algorithm and trained on collected, annotated color images used as a training set. During prediction, the SSD object detection algorithm draws a rectangular box around each recognized object, generating a classification label and refined position coordinates for each default box, thereby determining the category of the target object and its extent in the color image. Because the color image and the depth image correspond pixel-to-pixel, once the coordinates of the target's rectangular box in the color image are obtained, the depth information at the same position in the depth image is converted into point cloud data to obtain the point cloud to be registered.
After the point cloud to be registered is obtained, the template point cloud is partitioned based on the point cloud to be registered to obtain the template point cloud subset. First, the point cloud to be registered is filtered (a band-pass filter, a statistical filter, or the like can be used) to remove noise points around the target object. Then, the normals and curvatures of the filtered point cloud to be registered are calculated, and the key point selection unit extracts the key points of the point cloud to be registered according to the curvature values. Thereafter, the second calculation unit computes the PFH descriptors of the key points (implemented with a GPU-accelerated PFH feature model). The reconstructed three-dimensional model point cloud is processed in the same way to obtain the PFH descriptors of the key points in the template point cloud. The feature matching unit then matches the PFH descriptors, and the matching point selection unit segments point cloud data similar to the point cloud to be registered out of the three-dimensional model according to the matching result. The specific algorithm is as follows:
Assume the template point cloud is Q = {q_1, q_2, ..., q_m}, where m represents the number of points in the template point cloud; the point cloud to be registered is P = {p_1, p_2, ..., p_n}, where n represents the number of points in the point cloud to be registered; and the template point cloud subset Q_1 ⊆ Q. The template point cloud is then partitioned by the following process. First, the key points P_key of the point cloud to be registered P and Q_key of the template point cloud Q are selected. Then the PFH descriptors P_pfh and Q_pfh of the key points P_key and Q_key are calculated respectively. Next, the PFH descriptors P_pfh of the point cloud to be registered are feature-matched against the PFH descriptors Q_pfh of the template point cloud. Finally, the s groups of matching point pairs with the highest matching precision are selected; since PFH descriptors correspond one-to-one with the spatial coordinates of points, the corresponding spatial coordinate points on the template point cloud Q are selected, and a suitable k-neighborhood centered on the center of the selected points forms the template point cloud subset Q_1. When feature-matching P_pfh against Q_pfh, the Euclidean distances between descriptors are calculated and sorted from low to high (a smaller distance means higher matching precision). The number of matching point pairs can be chosen according to the actual situation, for example 10 or 15 pairs; the value of k is determined by the number of points in the point cloud to be registered, selecting the same number of points as the point cloud to be registered contains.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of program modules is illustrated; in practical applications, the above functions may be assigned to different program modules as needed, that is, the internal structure of the apparatus may be divided into different program units or modules to perform all or part of the functions described above. The program modules in the embodiments may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one processing unit; the integrated unit may be implemented in the form of hardware or in the form of a software program unit. In addition, the specific names of the program modules are only for convenience of distinguishing them from one another and are not used to limit the protection scope of the present invention.
Fig. 3 is a schematic structural diagram of a mobile robot provided in an embodiment of the present invention. As shown, the mobile robot 200 includes: a processor 220, a memory 210, and a computer program 211 stored in the memory 210 and executable on the processor 220, such as a target positioning program. The processor 220 implements the steps in the above embodiments of the target positioning method when executing the computer program 211, or implements the functions of the modules in the above embodiments of the target positioning device when executing the computer program 211.
The mobile robot 200 may be a notebook computer, a palmtop computer, a tablet computer, a mobile phone, or the like. The mobile robot 200 may include, but is not limited to, a processor 220 and a memory 210. Those skilled in the art will appreciate that fig. 3 is merely an example of the mobile robot 200 and does not constitute a limitation of the mobile robot 200, which may include more or fewer components than shown, or combine certain components, or different components; for example, the mobile robot 200 may also include input/output devices, display devices, network access devices, buses, and the like.
The processor 220 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor or any conventional processor.
The memory 210 may be an internal storage unit of the mobile robot 200, such as a hard disk or internal memory of the mobile robot 200. The memory 210 may also be an external storage device of the mobile robot 200, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the mobile robot 200. Further, the memory 210 may include both an internal storage unit and an external storage device of the mobile robot 200. The memory 210 is used to store the computer program 211 as well as other programs and data required by the mobile robot 200, and may also be used to temporarily store data that has been output or is to be output.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or recited in detail in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided by the present invention, it should be understood that the disclosed mobile robot and method may be implemented in other ways. For example, the embodiments of the mobile robot described above are merely illustrative: the division into modules or units is only a division by logical function, and other divisions are possible in actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
If the integrated modules/units are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the methods of the above embodiments may also be completed by a computer program 211 instructing relevant hardware; the computer program 211 may be stored in a computer-readable storage medium, and when executed by the processor 220, the steps of the method embodiments may be implemented. The computer program 211 comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable storage medium may include: any entity or device capable of carrying the code of the computer program 211, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of the computer-readable storage medium may be increased or decreased as required by legislation and patent practice in a given jurisdiction; for example, in certain jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunications signals.
It should be noted that the above embodiments can be freely combined as needed. The foregoing is only a preferred embodiment of the present invention; it should be pointed out that, for those of ordinary skill in the art, various modifications and improvements can be made without departing from the principle of the present invention, and these modifications and improvements should also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A target positioning method, comprising:
acquiring a color image and a depth image containing a target to be positioned;
identifying a position area of the target to be positioned in the color image;
determining a point cloud to be registered in the depth map according to the position area of the target to be positioned in the color map;
partitioning the template point cloud according to the point cloud to be registered to obtain a template point cloud subset;
and registering the point cloud to be registered in the depth map and the template point cloud subset based on a normal weighted iterative closest point algorithm to complete the positioning of the target.
2. The target positioning method of claim 1, wherein registering the point cloud to be registered in the depth map with the template point cloud subset based on the normal-weighted iterative closest point algorithm comprises:
fixing a transformation matrix between the template point cloud subset and the point cloud to be registered, sequentially finding, in the template point cloud subset, the point closest to each point in the point cloud to be registered, and calculating the squared distance;
sorting the calculated squared distances and updating the overlap rate;
according to the updated overlap rate, pruning the points in the point cloud to be registered whose distance is larger than a preset distance threshold;
updating the transformation matrix according to the pruned points and calculating the mean square error based on a preset objective function;
and judging whether to stop the loop iteration according to the mean square error.
3. The method of claim 2, wherein, in updating the transformation matrix according to the pruned points and calculating the mean square error based on a preset objective function, the preset objective function f is:
f(R, t) = (1/m_r) Σ_{i=1}^{m_r} w_i ‖R·p_i + t − q_{c(i)}‖²
wherein the template point cloud Q = {q_1, q_2, ..., q_m}, m represents the number of points in the template point cloud, and the template point cloud subset Q_1 ⊆ Q; the point cloud to be registered P = {p_1, p_2, ..., p_n}, n represents the number of points in the point cloud to be registered; w_i represents the weight of the i-th point p_i in the pruned point cloud to be registered; v_i represents the included angle between the normal vector n_i of the i-th point p_i in the pruned point cloud to be registered and the normal vector n_{c(i)} of the corresponding point q_{c(i)} in the template point cloud subset, and w_i = cos v_i; m_r represents the number of points in the pruned point cloud to be registered; R denotes the rotation, and t denotes the translation.
4. The method as claimed in claim 1, 2 or 3, wherein, in identifying the position area of the target to be positioned in the color image, a classifier created based on an SSD object detection algorithm is used to identify the position area where the target to be positioned is located in the color image; and/or,
partitioning the template point cloud according to the point cloud to be registered to obtain a template point cloud subset, wherein the partitioning comprises the following steps:
selecting key points of point cloud to be registered and template point cloud;
respectively calculating PFH descriptors of key points in the point cloud to be registered and the template point cloud;
carrying out feature matching on the PFH descriptor of the point cloud to be registered and the PFH descriptor of the template point cloud;
selecting a preset number of matching point pairs with the highest matching degree;
and obtaining a template point cloud subset according to the selected matching points.
5. A target positioning device, comprising:
the image acquisition module is used for acquiring a color image and a depth image containing a target to be positioned;
the target identification module is used for identifying a position area of the target to be positioned in the color image;
the point cloud to be registered determining module is used for determining a point cloud to be registered in the depth map according to the position area of the target to be positioned in the color map;
the template partitioning module is used for partitioning the template point cloud according to the point cloud to be registered to obtain a template point cloud subset;
and the point cloud registration module is used for registering the point cloud to be registered in the depth map and the template point cloud subset based on a normal weighted iterative closest point algorithm to complete the positioning of the target.
6. The target positioning device according to claim 5, wherein the point cloud registration module comprises:
the first calculation unit is used for fixing a transformation matrix between the template point cloud subset and the point cloud to be registered, sequentially finding, in the template point cloud subset, the point closest to each point in the point cloud to be registered, and calculating the squared distance; for sorting the calculated squared distances and updating the overlap rate; and for updating the transformation matrix according to the points pruned by the pruning unit and calculating the mean square error based on a preset objective function;
the pruning unit is used for pruning, according to the overlap rate updated by the first calculation unit, the points in the point cloud to be registered whose distance is larger than a preset distance threshold;
and the judging unit is used for judging whether to stop the loop iteration according to the mean square error calculated by the first calculation unit.
7. The target positioning device according to claim 6, wherein, in the first calculation unit, the preset objective function f used for calculating the mean square error is:
f(R, t) = (1/m_r) Σ_{i=1}^{m_r} w_i ‖R·p_i + t − q_{c(i)}‖²
wherein the template point cloud Q = {q_1, q_2, ..., q_m}, m represents the number of points in the template point cloud, and the template point cloud subset Q_1 ⊆ Q; the point cloud to be registered P = {p_1, p_2, ..., p_n}, n represents the number of points in the point cloud to be registered; w_i represents the weight of the i-th point p_i in the pruned point cloud to be registered; v_i represents the included angle between the normal vector n_i of the i-th point p_i in the pruned point cloud to be registered and the normal vector n_{c(i)} of the corresponding point q_{c(i)} in the template point cloud subset, and w_i = cos v_i; m_r represents the number of points in the pruned point cloud to be registered; R denotes the rotation, and t denotes the translation.
8. The target positioning device according to claim 5, 6 or 7, wherein the target identification module uses a classifier created based on an SSD object detection algorithm to identify the position area where the target to be positioned in the color image is located; and/or,
the template partitioning module comprises:
the key point selecting unit is used for selecting key points of the point cloud to be registered and the template point cloud;
the second calculation unit is used for respectively calculating PFH descriptors of key points in the point cloud to be registered and the template point cloud;
the characteristic matching unit is used for carrying out characteristic matching on the PFH descriptor of the point cloud to be registered and the PFH descriptor of the template point cloud;
and the matching point selection unit is used for selecting a preset group of matching point pairs with the highest matching degree in the feature matching unit and obtaining the template point cloud subset according to the selected matching points.
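One plausible reading of the last two units, with the PFH descriptors assumed to be computed elsewhere (for instance by a library such as PCL): match descriptors by nearest neighbour in feature space, keep the strongest pairs, and carve the template subset out of the template points near the matched key points. The radius-based carving rule, like every name and parameter below, is an assumption; the claim does not fix how the subset is derived from the selected matches.

    import numpy as np
    from scipy.spatial import cKDTree

    def select_template_subset(desc_p, desc_q, template_cloud, kp_q_idx,
                               n_pairs=50, radius=0.05):
        """Sketch of the feature matching and matching point selection units.
        desc_p, desc_q : descriptor arrays for the key points of the cloud to
        be registered and of the template; kp_q_idx : indices of the template
        key points within template_cloud."""
        # Feature matching unit: nearest neighbour in descriptor space.
        dist, match = cKDTree(desc_q).query(desc_p)

        # Matching point selection unit: keep the n_pairs strongest matches.
        best = np.argsort(dist)[:n_pairs]
        matched_pts = template_cloud[kp_q_idx[match[best]]]

        # Template subset: all template points within `radius` of a matched
        # key point (an assumed rule; the claim leaves this step open).
        geo_tree = cKDTree(template_cloud)
        idx = sorted({j for pt in matched_pts
                      for j in geo_tree.query_ball_point(pt, radius)})
        return template_cloud[np.array(idx, dtype=int)]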
9. A mobile robot comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the target positioning method according to any one of claims 1 to 4 when executing the computer program.
10. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the target positioning method according to any one of claims 1 to 4.
CN202010574730.8A 2020-06-22 2020-06-22 Target positioning method and device, mobile robot and readable storage medium Active CN111612841B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010574730.8A CN111612841B (en) 2020-06-22 2020-06-22 Target positioning method and device, mobile robot and readable storage medium

Publications (2)

Publication Number Publication Date
CN111612841A true CN111612841A (en) 2020-09-01
CN111612841B CN111612841B (en) 2023-07-14

Family

ID=72202716

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010574730.8A Active CN111612841B (en) 2020-06-22 2020-06-22 Target positioning method and device, mobile robot and readable storage medium

Country Status (1)

Country Link
CN (1) CN111612841B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107886528A (en) * 2017-11-30 2018-04-06 南京理工大学 Distribution line working scene three-dimensional rebuilding method based on a cloud
CN108133458A (en) * 2018-01-17 2018-06-08 视缘(上海)智能科技有限公司 A kind of method for automatically split-jointing based on target object spatial point cloud feature
CN109308737A (en) * 2018-07-11 2019-02-05 重庆邮电大学 A kind of mobile robot V-SLAM method of three stage point cloud registration methods
CN109934855A (en) * 2018-12-28 2019-06-25 南京理工大学 A kind of livewire work scene power components three-dimensional rebuilding method based on cloud
CN110942476A (en) * 2019-10-17 2020-03-31 湖南大学 Improved three-dimensional point cloud registration method and system based on two-dimensional image guidance and readable storage medium
CN110749895A (en) * 2019-12-23 2020-02-04 广州赛特智能科技有限公司 Laser radar point cloud data-based positioning method

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114593681A (en) * 2020-12-07 2022-06-07 北京格灵深瞳信息技术有限公司 Thickness measuring method, thickness measuring apparatus, electronic device, and storage medium
CN112686934A (en) * 2020-12-29 2021-04-20 广州广电研究院有限公司 Point cloud data registration method, device, equipment and medium
CN112862882A (en) * 2021-01-28 2021-05-28 北京格灵深瞳信息技术股份有限公司 Target distance measuring method, device, electronic apparatus and storage medium
CN113096165A (en) * 2021-04-16 2021-07-09 无锡物联网创新中心有限公司 Target object positioning method and device
CN113096165B (en) * 2021-04-16 2022-02-18 无锡物联网创新中心有限公司 Target object positioning method and device
CN113297989A (en) * 2021-05-28 2021-08-24 深圳市优必选科技股份有限公司 Charging pile identification method and device, robot and computer readable storage medium
CN113297989B (en) * 2021-05-28 2024-04-16 优必康(青岛)科技有限公司 Charging pile identification method, device, robot and computer readable storage medium
WO2023040737A1 (en) * 2021-09-17 2023-03-23 中国第一汽车股份有限公司 Target location determining method and apparatus, electronic device, and storage medium
CN114387319A (en) * 2022-01-13 2022-04-22 北京百度网讯科技有限公司 Point cloud registration method, device, equipment and storage medium
CN114387319B (en) * 2022-01-13 2023-11-14 北京百度网讯科技有限公司 Point cloud registration method, device, equipment and storage medium
CN115082547A (en) * 2022-07-27 2022-09-20 深圳市华汉伟业科技有限公司 Profile measuring method based on point cloud data and storage medium
CN115082547B (en) * 2022-07-27 2022-11-15 深圳市华汉伟业科技有限公司 Profile measuring method based on point cloud data and storage medium
CN115496898A (en) * 2022-11-16 2022-12-20 山东科技大学 Mobile robot target positioning method and system
CN115496898B (en) * 2022-11-16 2023-02-17 山东科技大学 Mobile robot target positioning method and system

Also Published As

Publication number Publication date
CN111612841B (en) 2023-07-14

Similar Documents

Publication Publication Date Title
CN111612841A (en) Target positioning method and device, mobile robot and readable storage medium
CN110322500B (en) Optimization method and device for instant positioning and map construction, medium and electronic equipment
CN108898086B (en) Video image processing method and device, computer readable medium and electronic equipment
CN111665842B (en) Indoor SLAM mapping method and system based on semantic information fusion
US20210063577A1 (en) Robot relocalization method and apparatus and robot using the same
CN112528831B (en) Multi-target attitude estimation method, multi-target attitude estimation device and terminal equipment
CN109978925B (en) Robot pose recognition method and robot thereof
CN111931764B (en) Target detection method, target detection frame and related equipment
CN110349212B (en) Optimization method and device for instant positioning and map construction, medium and electronic equipment
CN112328715A (en) Visual positioning method, training method of related model, related device and equipment
CN115063454B (en) Multi-target tracking matching method, device, terminal and storage medium
CN110704652A (en) Vehicle image fine-grained retrieval method and device based on multiple attention mechanism
CN112036381B (en) Visual tracking method, video monitoring method and terminal equipment
CN112336342A (en) Hand key point detection method and device and terminal equipment
CN114612616A (en) Mapping method and device, electronic equipment and storage medium
CN115063447A (en) Target animal motion tracking method based on video sequence and related equipment
CN112198878B (en) Instant map construction method and device, robot and storage medium
CN110673607A (en) Feature point extraction method and device in dynamic scene and terminal equipment
CN112258647B (en) Map reconstruction method and device, computer readable medium and electronic equipment
CN114066999A (en) Target positioning system and method based on three-dimensional modeling
CN111353325A (en) Key point detection model training method and device
CN113673288B (en) Idle parking space detection method and device, computer equipment and storage medium
CN111104965A (en) Vehicle target identification method and device
CN116415652A (en) Data generation method and device, readable storage medium and terminal equipment
CN116343095A (en) Vehicle track extraction method based on video stitching and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant