CN117830143B - Method for accurately positioning robot tool based on computer vision - Google Patents

Method for accurately positioning robot tool based on computer vision Download PDF

Info

Publication number
CN117830143B
CN117830143B CN202410255141.1A CN202410255141A CN117830143B CN 117830143 B CN117830143 B CN 117830143B CN 202410255141 A CN202410255141 A CN 202410255141A CN 117830143 B CN117830143 B CN 117830143B
Authority
CN
China
Prior art keywords
data
point
moment
point cloud
data point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410255141.1A
Other languages
Chinese (zh)
Other versions
CN117830143A (en
Inventor
严鲜财
向宝明
刘鹏
蓝东沅
皮振军
费鹏
赵兴隆
房信
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Hangzhijia Information Technology Co ltd
Original Assignee
Jiangsu Hangzhijia Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Hangzhijia Information Technology Co ltd filed Critical Jiangsu Hangzhijia Information Technology Co ltd
Priority to CN202410255141.1A priority Critical patent/CN117830143B/en
Publication of CN117830143A publication Critical patent/CN117830143A/en
Application granted granted Critical
Publication of CN117830143B publication Critical patent/CN117830143B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Abstract

The invention relates to the technical field of image data noise reduction, in particular to a method for accurately positioning a robot tool based on computer vision. Firstly, acquiring a point cloud initial data set of a robot tool; and carrying out three-dimensional Gaussian filtering on the point cloud initial data at the moment to be processed by utilizing a three-dimensional Gaussian filter core according to the point cloud initial data at the moment to be processed and the point cloud initial data at each historical moment in the historical database, acquiring point cloud filtering data at the moment to be processed, and acquiring a positioning result of the robot tool at the moment to be processed according to the point cloud filtering data at the moment to be processed. According to the invention, by optimizing the weight of the reference data point, the unstable influence of noise is reduced, meanwhile, the data boundary structure characteristics are reserved, the data denoising result is improved, and finally, the robot tool is positioned more accurately.

Description

Method for accurately positioning robot tool based on computer vision
Technical Field
The invention relates to the technical field of image data noise reduction, in particular to a method for accurately positioning a robot tool based on computer vision.
Background
The remote control robot grabs goods through the tool, and can improve the efficiency of goods transportation. The robot needs to determine the tool position when grabbing goods, and then moves the tool to grab goods. The existing robot tool positioning is to collect point cloud data of the robot tool by utilizing laser radar equipment, so that accurate positioning of the tool is performed through the point cloud data.
Because laser radar equipment is in the point cloud data process of gathering robot instrument, the environment that the robot was used often is comparatively abominable, leads to there is a large amount of noise in the point cloud data of robot instrument, and the noise can lead to the robot instrument to fix a position inaccurately. Therefore, denoising processing is required to be performed on the point cloud data of the robot tool, the point cloud data is denoising processed by utilizing three-dimensional Gaussian filtering in the prior art, when the weight of the reference pixel point is determined in the prior art, the weight is determined only according to the distance between the reference pixel point and the pixel point to be denoised, the object structure information is further represented by neglecting the data point distance, excessive denoising loss of the object structure information can be caused, the denoising result is inaccurate, and accurate positioning of the robot tool cannot be realized.
Disclosure of Invention
In order to solve the technical problems that in the prior art, denoising is performed on point cloud data by utilizing three-dimensional Gaussian filtering, structural information of the point cloud data is difficult to realize denoising and meanwhile, a denoising result is inaccurate, and the positioning accuracy of a robot tool is low, the invention aims to provide a method for accurately positioning the robot tool based on computer vision, and the adopted technical scheme is as follows:
A method of accurately positioning a computer vision based robotic tool, the method comprising the steps of:
acquiring a point cloud initial data set of a robot tool; the point cloud initial data set comprises point cloud initial data of the moment to be processed and point cloud initial data of each historical moment in a historical database; each data point in the point cloud initial data set has a corresponding reflection intensity value;
According to point cloud initial data of a moment to be processed and point cloud initial data of each historical moment in a historical database, checking the point cloud initial data of the moment to be processed by utilizing three-dimensional Gaussian filtering to carry out three-dimensional Gaussian filtering, and obtaining point cloud filtering data of the moment to be processed;
The process for acquiring the point cloud filtering data at the moment to be processed comprises the following steps: taking each data point in the point cloud initial data at the moment to be processed as a data point to be denoised, and acquiring the initial weight of each reference data point in the process of determining the weight of each reference data point of each data point to be denoised; taking the time to be processed and each historical time as a target time, taking each data point in point cloud initial data of the target time as a target data point, taking each data point in a preset surrounding area of the target data point as each area point, and acquiring object characteristic values of the target data points in the preset surrounding area of the target data point according to the difference between the reflection intensity values of the target data point and the area points and the distribution of the area points; acquiring a stable value of a reference data point according to the object characteristic value of the reference data point and the object characteristic value relation of the data points in the point cloud initial data of all the historical moments; acquiring a data participation value of the reference data point according to the stable values of the reference data point and the regional point and the distance between the reference data point and the regional point; adjusting the initial weight according to the data participation value to obtain an adjusted weight of a reference data point;
And acquiring a positioning result of the robot tool at the moment to be processed according to the point cloud filtering data at the moment to be processed.
Further, the method for acquiring the object characteristic value comprises the following steps:
obtaining an object feature value according to an object feature value formula, wherein the object feature value formula comprises:
; wherein T (m) is the object feature value of the mth of the target data points; f m is the reflected intensity value of the mth said target data point; f z is the reflection intensity value of the z-th region point in the preset surrounding region of the mth target data point; s is the total number of all the area points in the preset surrounding area of the mth target data point; delta z is the variance of the distance between the z-th region point and all its surrounding points in the preset surrounding range of the z-th region point; and I is an absolute value symbol.
Further, the method for obtaining the stable value comprises the following steps:
Determining the corresponding best matching data point of the reference data point in the point cloud initial data of each historical moment according to the position relation between the point cloud initial data of the moment to be processed and the point cloud initial data of the historical moment;
obtaining the stable value according to a stable value formula, wherein the stable value formula comprises:
; wherein D i,k is the stable value of the reference data point i at the kth historical moment; t (i, O) is the object feature value of the reference data point i in the point cloud initial data O at the moment to be processed; i * is the best matching data point corresponding to the reference data point i in the point cloud initial data of the kth historical moment; t (i *, k) is the object feature value of the corresponding best matching data i * in the point cloud initial data of the reference data point i at the kth historical moment; sinc () is a sine function;
A maximum value is determined among the stable values at all historic times for each reference data point and is determined as the final stable value for each reference data point.
Further, the method for acquiring the best matching data point comprises the following steps:
Determining each data plane corresponding to the point cloud initial data at the target moment, wherein each data plane is perpendicular to a Z axis, and each data point in the point cloud initial data at the target moment is positioned on each data plane;
Performing straight line fitting on the walking track of the robot to obtain a walking fitting straight line; determining each dividing straight line corresponding to each data plane, wherein the extending direction of each dividing straight line is the same as the extending direction of the walking fitting straight line, and each data point in the point cloud initial data corresponding to each data plane is positioned on each dividing straight line;
Determining the starting point of each data plane, sequentially counting the distance between each data point and the starting point on each dividing straight line on each data plane, and performing curve fitting on all the data points as the characteristic distance of each data point to obtain fitting curves of all the data points on each dividing straight line; the abscissa of the fitted curve is the characteristic distance, and the ordinate of the fitted curve is the reflection intensity value;
Taking a fitting curve of the reference data points as a reference curve, and taking a data plane with the same Z-axis height as the data plane of the reference curve at the historical moment as a reference control plane of the historical moment;
Matching all the fitting curves in the reference control plane with the reference curves by using a DTW algorithm to obtain an optimal matching curve of the reference curves corresponding to the historical moment; and taking the matching point of the best matching curve corresponding to the reference data point at the historical moment as the best matching data point corresponding to the reference data point at the historical moment.
Further, the method for acquiring the data participation value comprises the following steps:
acquiring the data participation value according to a data participation value formula, wherein the data participation value formula comprises:
w i=norm[Di×(Dz×Lz)max ]; wherein w i is the data participation value of reference data point i; d i is the stable value of reference data point i; d z is the stable value of the z-th region point in the preset surrounding region of the reference data point i; l z is the Euclidean distance between the reference data point i and the z-th region point; (D z×Lz)max is the maximum value in the correspondence (D z×Lz) of all the region points in the preset surrounding region of the reference data point i; norm () is the normalization function.
Further, the method for acquiring the adjusted weight comprises the following steps:
acquiring the adjusted weight of the reference data point according to the data participation value and the initial weight of the reference data point; the initial weight and the adjusted weight are in positive correlation; the data participation value and the adjusted weight are positively correlated.
Further, the method for acquiring the positioning result of the robot tool at the moment to be processed comprises the following steps:
Extracting each descriptor according to point cloud filtering data at the moment to be processed, and matching the descriptor with the descriptor in the tool model library to obtain a matching result; and further, according to the matching result, acquiring the positioning result of the robot tool at the moment to be processed.
Further, the method for acquiring the initial weight comprises the following steps:
Based on a three-dimensional Gaussian filtering method, initial weights of the reference data points are obtained according to the distance between the reference data points and the data points to be denoised.
Further, the method for acquiring the preset surrounding area comprises the following steps:
The preset surrounding area is a rectangular window which is built by taking a target data point as the center of the rectangular window and taking a preset size as the side length of the rectangular window.
Further, the method for acquiring the point cloud initial data comprises the following steps:
and transmitting uniform points to the robot tool through the laser to sample, and acquiring point cloud initial data of the robot tool.
The invention has the following beneficial effects:
The method is mainly used for denoising the point cloud data by three-dimensional Gaussian filtering when the robot tool is accurately positioned, so that the structural information of the data points is kept while denoising is difficult to achieve, the denoising result is inaccurate, and the positioning accuracy of the robot tool is low.
The weight of the reference data point is adjusted by constructing a data participation value of the reference data point, the data participation value reflects the possibility that the reference data point is at the boundary and noise is unstable, so that the influence of noise on the weight of the reference data point is reduced, and the influence of boundary structure data point on the weight is kept; the adjusted weights enable the point cloud filtering data to preserve the influence of boundary structures and reduce noise influence.
In order to analyze the stability of the reference data point, firstly, the object characteristics of the data point are analyzed, as the reflection intensity of the data point can reflect the smooth condition of the object surface, the material quality of the object and the distance from the object to be detected, and the distance between the data point and the surrounding data point is relatively close to the distance from the object to be detected, the reflection intensity difference between the data point and the surrounding data point can often reflect the characteristic of the object to which the data point belongs; when the laser radar samples point cloud data, the laser radar samples by emitting uniform points, when the surface information of an object is flat, the distances of corresponding data points are uniform, when the surface of the object is uneven, the distances of the corresponding data points are not uniformly distributed, and the distribution of the data points in the area around the data points can reflect the special characteristics of the surface of the object to which the data points belong; further, object characteristic values of the target data points are obtained, and the data points on different objects often have different object characteristic values. Because the process of grabbing goods by the remote control robot through the tool is often repeatable, point cloud initial data at the historical moment and point cloud data at the moment to be processed are often repeatable, object characteristic values corresponding to normal data points are stable, however, noise is accidental, and the change of the object characteristic values of the data points is unstable due to the noise; further obtaining a stable value of the reference data point; the higher the stable value, the higher the likelihood that the reference data point is a normal data point and the lower the likelihood that it is a noisy data point. Taking the stability of the corresponding object characteristic values of the reference data points and the possibility of being in a boundary structure into consideration, acquiring the data participation values of the reference data points; the larger the data participation value, the more likely the reference data point is a non-noise data point and the more likely the boundary data point.
According to the invention, the initial weight is adjusted by constructing the data participation value of the data point, so that the unstable influence of noise can be reduced by adjusting the weight, meanwhile, the data boundary structure characteristics are reserved, the data denoising result is improved, and finally, the robot tool is positioned more accurately.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions and advantages of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method for precisely positioning a robotic tool based on computer vision according to one embodiment of the present invention;
Fig. 2 is a flowchart of a method for obtaining adjusted weights of reference data points according to an embodiment of the present invention.
Detailed Description
In order to further describe the technical means and effects adopted by the invention to achieve the preset aim, the following is a method for accurately positioning a robot tool based on computer vision according to the invention, which is provided by combining the accompanying drawings and the preferred embodiment, and the specific implementation, structure, characteristics and effects thereof are described in detail below. In the following description, different "one embodiment" or "another embodiment" means that the embodiments are not necessarily the same. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
An embodiment of a method for accurately positioning a robotic tool based on computer vision:
The following specifically describes a specific scheme of a robot tool accurate positioning method based on computer vision provided by the invention with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of a method for precisely positioning a robot tool based on computer vision according to an embodiment of the present invention is shown, the method includes the following steps:
step S1, acquiring a point cloud initial data set of a robot tool; the point cloud initial data set comprises point cloud initial data of the moment to be processed and point cloud initial data of each historical moment in the historical database; each data point in the point cloud initial data set has a corresponding reflected intensity value.
In order to position the robot tool, point cloud data of the robot tool are collected by utilizing laser radar equipment, so that accurate positioning of the tool is performed through the point cloud data. Because the point cloud data has noise, the robot application environment is often worse, so that a great amount of noise exists in the point cloud data of the robot tool, and the noise can cause inaccurate positioning of the robot tool. Therefore, the point cloud data of the robot tool needs to be subjected to denoising processing.
Preferably, in order to position a robot tool, in one embodiment of the present invention, a method for acquiring initial data of a point cloud includes:
In a remote control robot grabs goods through the tool, as the robot always needs to move when grabbing goods through the tool, the laser radar also needs to move to sample the robot tool, the laser radar transmits uniform points to the robot tool to sample according to preset sampling frequency, point cloud initial data of the robot tool are continuously collected, the current moment is taken as the moment to be processed, and the point cloud initial data of the moment to be processed are obtained for subsequent real-time positioning of the robot tool at the current moment. And counting the initial data of the point cloud at each historical moment before the moment to be processed into a historical database. The method comprises the steps of filtering point cloud initial data at the moment to be processed later, and taking the point cloud initial data at each historical moment in a historical database as a reference. In one embodiment of the present invention, the predetermined sampling frequency is 1 time/0.1 seconds.
It should be noted that, setting the point cloud initial data into a three-dimensional space coordinate system, wherein an X-Y plane of the three-dimensional space coordinate is parallel to the ground, a Z axis is perpendicular to the ground, and acquiring a point cloud initial data set of the robot tool by using the laser radar, wherein as goods are grabbed in the working process of the robot, the gesture and background information of the acquired tool in the corresponding acquired point cloud initial data are changed, and the point cloud initial data set not only comprises robot tool information, but also comprises scene information of the robot tool. Because of the smooth surface of the object, the material quality of the object and the distance from the object to be detected, the reflection intensity values of different sampling points are different, and the reflection intensity values of different data points in the initial data of the point cloud are also important information of the object to be analyzed.
It should be noted that, in order to facilitate the operation, all index data involved in the operation in the embodiment of the present invention is subjected to data preprocessing, so as to cancel the dimension effect. The specific means for removing the dimension influence is a technical means well known to those skilled in the art, and is not limited herein.
And S2, performing three-dimensional Gaussian filtering on the point cloud initial data at the moment to be processed by utilizing a three-dimensional Gaussian filtering core according to the point cloud initial data at the moment to be processed and the point cloud initial data at each historical moment in the historical database, and acquiring point cloud filtering data at the moment to be processed.
The method is mainly used for denoising the point cloud data by three-dimensional Gaussian filtering when the robot tool is accurately positioned, so that the structural information of the data points is kept while denoising is difficult to achieve, the denoising result is inaccurate, and the positioning accuracy of the robot tool is low. The weight of the reference data point is adjusted by constructing a data participation value of the reference data point, the data participation value reflects the possibility that the reference data point is at the boundary and noise is unstable, so that the influence of noise on the weight of the reference data point is reduced, and the influence of boundary structure data point on the weight is kept; the adjusted weights enable the point cloud filtering data to preserve the influence of boundary structures and reduce noise influence.
Referring to fig. 2, a flowchart of a method for obtaining adjusted weights of reference data points according to an embodiment of the invention is shown. It should be noted that, step S2 includes an adjusted weight acquiring process of the reference data points, fig. 2 shows a specific method for determining the adjusted weight of each reference data point of each data point to be noise reduced in step S2, and the method includes the following steps:
The process for acquiring the point cloud filtering data at the moment to be processed comprises the following steps: taking each data point in the point cloud initial data at the moment to be processed as a data point to be denoised, and acquiring the initial weight of each reference data point in the process of determining the weight of each reference data point of each data point to be denoised; taking the time to be processed and each historical time as a target time, taking each data point in point cloud initial data of the target time as a target data point, taking each data point in a preset surrounding area of the target data point as each area point, and acquiring object characteristic values of the target data points in the preset surrounding area of the target data point according to the difference between the reflection intensity values of the target data point and the area points and the distribution of the area points; acquiring a stable value of the reference data point according to the object characteristic value of the reference data point and the object characteristic value relation of the data points in the point cloud initial data of all the historical moments; acquiring a data participation value of the reference data point according to the stable values of the reference data point and the region point and the distance between the reference data point and the region point; and adjusting the initial weight according to the data participation value, and acquiring the adjusted weight of the reference data point.
In order to analyze the stability of the reference data point, firstly, the object characteristics of the data point are analyzed, as the reflection intensity of the data point can reflect the smooth condition of the object surface, the material quality of the object and the distance from the object to be detected, and the distance between the data point and the surrounding data point is relatively close to the distance from the object to be detected, the reflection intensity difference between the data point and the surrounding data point can often reflect the characteristic of the object to which the data point belongs; when the laser radar samples point cloud data, the laser radar samples by emitting uniform points, when the surface information of an object is flat, the distances of corresponding data points are uniform, when the surface of the object is uneven, the distances of the corresponding data points are not uniformly distributed, and the distribution of the data points in the area around the data points can reflect the special characteristics of the surface of the object to which the data points belong; further, object characteristic values of the target data points are obtained, and the data points on different objects often have different object characteristic values. Because the process of grabbing goods by the remote control robot through the tool is often repeatable, point cloud initial data at the historical moment and point cloud data at the moment to be processed are often repeatable, object characteristic values corresponding to normal data points are stable, however, noise is accidental, and the change of the object characteristic values of the data points is unstable due to the noise; further obtaining a stable value of the reference data point; the higher the stable value, the higher the likelihood that the reference data point is a normal data point and the lower the likelihood that it is a noisy data point. Taking the stability of the corresponding object characteristic values of the reference data points and the possibility of being in a boundary structure into consideration, acquiring the data participation values of the reference data points; the larger the data participation value, the more likely the reference data point is a non-noise data point and the more likely the boundary data point.
Preferably, in one embodiment of the present invention, the method for acquiring the initial weight includes:
it should be noted that, the three-dimensional gaussian filtering method is a technical means well known to those skilled in the art, and only the steps of acquiring the initial weights of the reference data points are briefly described:
In order to adjust the weight of a reference data point based on a traditional three-dimensional Gaussian filtering method, each data point in point cloud initial data at a moment to be processed is taken as a data point to be denoised, a three-dimensional Gaussian filtering core is built by taking the data point to be denoised as a center, each data point in the three-dimensional Gaussian filtering core is taken as the reference data point, and the initial weight of each reference data point is obtained according to the distance between the reference data point and the data point to be denoised by utilizing a Gaussian function.
Preferably, for the purpose of subsequent analysis of the stability of the reference data point, the object characteristic of the target data point needs to be analyzed, and in one embodiment of the present invention, the acquiring formula of the object characteristic value includes:
; wherein T (m) is the object feature value of the mth target data point; f m is the reflected intensity value of the mth target data point; f z is the reflected intensity value of the z-th region point in the preset surrounding region of the mth target data point; s is the total number of all area points in the preset surrounding area of the mth target data point; delta z is the variance of the distance between the z-th region point and all its surrounding points in the preset surrounding range of the z-th region point; and I is an absolute value symbol.
In order to analyze the characteristics of surrounding data points of a target data point and construct a preset surrounding area of the data point, in one embodiment of the present invention, a method for acquiring the preset surrounding area includes: the preset surrounding area is a window with the size of L multiplied by L and takes the target data point as the center and the preset size as the side length. Wherein, all data points in the preset surrounding area are taken as area points, L is the preset size, and the preset size is 5. The method for acquiring the preset surrounding range comprises the following steps: and taking the regional point as a center, setting the size as R multiplied by R, and constructing a preset surrounding range of the regional point. Wherein, all data points in the preset surrounding range are taken as surrounding points, and R is 11.
In the object characteristic value formula, since the optical radar samples the point cloud data by emitting uniform points, the data point distribution is uniform when the surface of the object is flat, the data point distribution is non-uniform when the surface of the object is non-flat,And reflecting the characteristic features of the object to which the target data point belongs by the uniform distribution of the data around the target data point and the number of the regional points. Because the reflection intensity of the data point can reflect the smooth condition of the surface of the object, the material quality of the object and the distance from the object to be detected, and the distance between the data point and the surrounding data point is relatively close to the distance from the object to be detected, the reflection intensity difference between the data point and the surrounding data point can often reflect the smooth condition of the surface of the object to which the data point belongs and the characteristic information of the material quality of the object. /(I)The characteristic features of the object to which the target data point belongs are reflected by the difference of the reflection intensity of the target data point and surrounding data points and the number of the regional points. The object characteristic values comprehensively consider the uniform distribution of data around the target data point, the number of area points and the difference of the reflection intensity of the data points and the surrounding data points, so that the data points on different objects often have different object characteristic values for the subsequent stability of analysis data points.
Preferably, in one embodiment of the present invention, since the process of grabbing the goods by the tool by the remote control robot tends to have repeatability, the point cloud initial data at the historical moment and the point cloud data at the to-be-processed moment tend to have repeatability, and the object feature values corresponding to the normal data points have stability, so that the near sampling positions of the similar acquisition scenes in the to-be-processed moment and the historical moment have similar object features without noise interference. In order to find a data point corresponding to a near sampling position of a similar acquisition scene of a reference data point in a historical moment, a best matching data point is acquired. The method for acquiring the best matching data points comprises the following steps:
In order to find out the data points corresponding to the near sampling positions of the similar acquisition scenes of the reference data points in the historical moment, firstly carrying out planar research on the point cloud initial data of the target moment, dividing the point cloud initial data of the target moment into a plurality of planes parallel to the ground, namely, planes perpendicular to the Z axis, namely, determining each data plane corresponding to the point cloud initial data of the target moment, wherein each data plane is perpendicular to the Z axis, and each data point in the point cloud initial data of the target moment is positioned on each data plane. Each data point in the initial data of the point cloud at the target moment is provided with a corresponding data plane, namely all data points on all data planes cover all data points in the initial data of the point cloud.
In order to conduct linear research on data in a data plane, conducting linear fitting on a robot walking track to obtain a walking fitting line; and sliding the walking fitting straight lines in parallel in the data planes, and determining each dividing straight line corresponding to each data plane, wherein the dividing straight lines are parallel to the walking fitting straight lines. That is, each dividing straight line corresponding to each data plane is determined, the extending direction of each dividing straight line is the same as the extending direction of the walking fitting straight line, and each data point in the point cloud initial data corresponding to each data plane is located on each dividing straight line; it is realized that each data point on the data plane has a corresponding dividing line, i.e. all data points on all dividing lines in the data plane cover all data points in the data plane. It should be noted that, since the process of grasping the goods by the tool by the remote control robot tends to have repeatability, the trajectory from the start point to the end point acquired by the robot sensor at any time is used as the robot travel trajectory.
The origin of the respective data plane, i.e. the point at the bottom left-most corner of the data plane, is taken as the start point of each data plane. In order to analyze the approach degree of the data points subsequently, sequentially counting the distance between each data point and the starting point on the dividing straight line, and performing curve fitting on all the data points as the characteristic distance of each data point to obtain fitting curves of all the data points on each dividing straight line; the abscissa of the fitted curve is the characteristic distance, and the ordinate of the fitted curve is the reflection intensity value; because different objects have different distributions of data points, the data points on different objects often have different characteristic distances, and the data points on different objects often have different object characteristic values and different characteristic distances, so that the more similar the fitted curves, the more likely the data belong to the same object and the scene is similar. It should be noted that, all the origin points of the data planes are used as the starting points, so that the reference positions of the starting points of the respective data planes are the same.
In order to analyze data points with different times corresponding to the same height, a fitting curve where the reference data points are located is used as a reference curve, a data plane with the history time corresponding to the same Z-axis height as the data plane where the reference curve is located is used as a reference control plane of the history time;
Matching all the fitting curves in the reference control plane with the reference curves by using a DTW algorithm to obtain similarity measurement values of the fitting curves and the reference curves, and taking the fitting curve with the highest similarity measurement value as the best matching curve of the reference curve corresponding to the historical moment; the best matching curve refers to a fitting curve with highest similarity with the reference curve in the historical moment, namely, the more likely that data points on the reference curve and the best matching curve belong to the same object and the corresponding scenes are similar.
And aligning data points of the reference curve and the best matching curve in the DTW algorithm, and taking the matching point of the best matching curve corresponding to the reference data point at the historical moment as the best matching data point corresponding to the reference data point at the historical moment. It should be noted that the DTW algorithm is a technical means well known to those skilled in the art, and will not be described herein.
In other embodiments of the present invention, since the process of grabbing goods by the tool by the remote control robot tends to have repeatability, point cloud initial data at a historical time and point cloud data at a time to be processed tend to have repeatability, the position of a reference data point in a coordinate system is first used as a reference characteristic position of the reference data point, and the position of a data point in each historical time and the closest data point of the reference characteristic position are used as the best matching data point corresponding to the reference data point at the historical time.
Preferably, in order to analyze the stability of the reference data points, in one embodiment of the present invention, the method for obtaining the stability value includes:
Based on the point cloud initial data which the reference data point belongs to and the point cloud initial data at the historical moment, the determined best matching data point of the reference data point in the point cloud initial data at the historical moment;
in one embodiment of the present invention, the stable value formula includes:
; wherein D i,k is the stable value of the reference data point i at the kth historical moment; t (i, O) is an object characteristic value of a reference data point i in point cloud initial data O at the moment to be processed; i * is the best matching data point corresponding to the reference data point i in the point cloud initial data of the kth historical moment; t (i *, k) is the object characteristic value of the corresponding best matching data i * in the point cloud initial data of the reference data point i at the kth historical moment; sinc () is a sine function;
A maximum value is determined among the stable values at all historic times for each reference data point and is determined as the final stable value for each reference data point.
In the formulation of the stable value(s),The closeness of the best matching data point corresponding to the point cloud initial data representing the kth historical moment and the reference data point is characterized, the bigger the closeness is, the more likely is 0,The greater the value of (2); because the process of grabbing goods by the remote control robot through the tool is often repeatable, the object characteristic value corresponding to the normal data point is stable, however, noise is accidental, and the change of the object characteristic value of the data point is unstable due to the noise; the point cloud initial data of the historical moment and the point cloud data of the moment to be processed are easy to have repeatability under the condition of no noise interference, the maximum value is determined in the stable values of all the historical moments of each reference data point, the maximum value is determined as the final stable value of each reference data point, and when the stable value is larger, the point cloud initial data of the historical moment and the point cloud data of the moment to be processed are proved to have repeatability, the noise interference degree is small, and the data stability is high. The higher the stable value, the higher the likelihood that the reference data point is a normal data point and the lower the likelihood that it is a noisy data point.
Preferably, the data participation value is constructed to adjust the weight, so that stability of the data point is not only analyzed in consideration of adjustment of the weight of the subsequent Gaussian convolution kernel, but also the spatial position distribution condition of the point cloud data where the data point is located is considered, so that the characteristics of the data point at the boundary are reserved, and the influence of the over-smoothing on the subsequent positioning operation is avoided. In one embodiment of the present invention, the data participation value acquisition formula includes:
w i=norm[Di×(Dz×Lz)max ]; wherein w i is the data participation value of the reference data point i; d i is the stable value of the reference data point i; d z is a stable value of the z-th region point in the preset surrounding region of the reference data point i; l z is the Euclidean distance between the reference data point i and the z-th region point; (D z×Lz)max is the maximum value in the correspondence (D z×Lz) of all the region points in the preset surrounding region of the reference data point i; norm () is the normalization function.
In the data participation value formula, D i reflects the stability of the corresponding object feature value of the reference data point, the higher the stability, the higher the likelihood that the reference data point is a non-noise data point. L z reflects the Euclidean distance between the data point and the region point, and the larger the distance is, the more likely the data point is at the boundary, the larger the distance between the data point and the region point is possibly caused to be larger due to noise, so the distance is weighted by the stable value of the region point, the probability that the data point is at the boundary is accurately reflected, (D z×Lz)max represents the maximum probability that the data point is at the boundary under the influence of noise reduction).
Preferably, the initial weights are adjusted by constructing data participation values for the data points, and the adjusted weights for the reference data points are obtained. In one embodiment of the present invention, the method for acquiring the adjusted weight includes:
acquiring an adjusted weight of the reference data point according to the data participation value and the initial weight of the reference data point; the initial weight and the adjusted weight are in positive correlation; the data participation value and the adjusted weight are positively correlated.
In one embodiment of the present invention, the adjusted weight acquisition formula includes:
; wherein Zar m is the adjusted weight of reference data point i; car m is the initial weight of the reference data point i; w i is the data participation value of the reference data point i.
In the adjusted weight formula, the reference data point is more likely to be a non-noise data point and the boundary data point is more likely to be a data participation value. The data participation value of the data point is built to adjust the initial weight so that adjusting the weight can reduce the unstable effect of noise while preserving the data boundary structure characteristics.
And further, according to the adjusted weight of each reference data point of each data point to be denoised, performing three-dimensional Gaussian filtering on each data point in the point cloud initial data at the moment to be processed by using a three-dimensional Gaussian filtering method, so as to obtain point cloud filtering data at the moment to be processed.
And step S3, acquiring a positioning result of the robot tool at the moment to be processed according to the point cloud filtering data at the moment to be processed.
Through the steps, the data noise reduction effect is better, more accurate point cloud filtering data is obtained, the data noise reduction result is improved, and finally, more accurate positioning of the robot tool is realized.
Preferably, in one embodiment of the present invention, a method for acquiring a positioning result of a robot tool at a time to be processed includes:
Extracting each descriptor according to point cloud filtering data at the moment to be processed, and matching the descriptor with the descriptor in the tool model library to obtain a matching result; and further, according to the matching result, acquiring the positioning result of the robot tool at the moment to be processed.
Specifically, in point cloud filtering data at a time to be processed, a Scale-INVARIANT FEATURE TRANSFORM (Scale-invariant feature transform) algorithm is utilized to extract descriptors, wherein the descriptors are a method for extracting local features in an image or point cloud, and the method has Scale, rotation and illumination invariance. For each key point in the point cloud filtering data at the moment to be processed, a unique descriptor vector is generated for representing the geometric features around the unique descriptor vector. Matching the extracted descriptors with descriptors in a pre-stored tool model library by calculating similarity or distance between the descriptors, and obtaining a descriptor matching result. And according to the result of the descriptor matching, determining the corresponding relation between the key points on the robot tool and the key points in the tool model library, and further identifying the tool.
In one embodiment of the invention, according to the key point matching result, an ICP (ITERATIVE CLOSEST POINT ) algorithm can be used for registering the point cloud data of the robot tool with the clamp model. The ICP algorithm is used for gradually aligning the tool point cloud data with the clamp model through iterative optimization, and the optimal transformation matrix is calculated. This transformation matrix contains translation vectors and rotation matrices for representing the position and pose of the tool in the coordinate system. In other embodiments of the present invention, registration may also be performed by a RANSAC (Random Sample Consensus ) algorithm. The RANSAC algorithm can also obtain translation vectors and rotation matrices for representing the position and pose of the tool by randomly sampling a subset of the data and estimating the transformation matrix, and then selecting the transformation matrix that aligns the most data points as the best model.
By analyzing the registered transformation matrix, a translation vector and a rotation matrix of the clamp can be obtained. The information is used for determining the position and the gesture of the robot tool in a coordinate system, so that the tool is positioned, and the tool is moved to grasp goods. In a cargo grabbing scene of a remote control robot through a tool, continuously acquiring point cloud initial data at each moment, taking the current moment as a moment to be processed, acquiring point cloud filtering data at the current moment, and further positioning the real-time position of the robot tool, so that the tool is moved in real time to grab cargoes.
In summary, the embodiment of the invention provides a method for accurately positioning a robot tool based on computer vision, which comprises the steps of firstly obtaining a point cloud initial data set of the robot tool; and carrying out three-dimensional Gaussian filtering on the point cloud initial data at the moment to be processed by utilizing a three-dimensional Gaussian filter core according to the point cloud initial data at the moment to be processed and the point cloud initial data at each historical moment in the historical database, acquiring point cloud filtering data at the moment to be processed, and acquiring a positioning result of the robot tool at the moment to be processed according to the point cloud filtering data at the moment to be processed. According to the embodiment of the invention, the weight of the reference data point is optimized, so that the unstable influence of noise is reduced, meanwhile, the data boundary structure characteristics are reserved, the data denoising result is improved, and finally, the robot tool is positioned more accurately.
An embodiment of a data processing method for accurate positioning of a robotic tool:
Because laser radar equipment is in the point cloud data process of gathering robot instrument, the environment that the robot was used often is comparatively abominable, leads to there is a large amount of noise in the point cloud data of robot instrument, and the noise can lead to the robot instrument to fix a position inaccurately. Therefore, the point cloud data of the robot tool needs to be denoising processed, the point cloud data is denoising processed by utilizing three-dimensional Gaussian filtering in the prior art, when the weight of the reference pixel point is determined in the prior art, the weight is determined only according to the distance between the reference pixel point and the pixel point to be denoising, the object structure information is further represented by neglecting the data point distance, excessive denoising loss is caused to the object structure information, and the denoising result is inaccurate.
The invention aims to provide a data processing method for accurately positioning a robot tool, which comprises the following steps:
step S1, acquiring a point cloud initial data set of a robot tool; the point cloud initial data set comprises point cloud initial data of the moment to be processed and point cloud initial data of each historical moment in the historical database; each data point in the point cloud initial data set has a corresponding reflected intensity value.
Step S2, carrying out three-dimensional Gaussian filtering on the point cloud initial data at the moment to be processed by utilizing a three-dimensional Gaussian filtering core according to the point cloud initial data at the moment to be processed and the point cloud initial data at each historical moment in a historical database, and obtaining point cloud filtering data at the moment to be processed; the process for acquiring the point cloud filtering data at the moment to be processed comprises the following steps: taking each data point in the point cloud initial data at the moment to be processed as a data point to be denoised, and acquiring the initial weight of each reference data point in the process of determining the weight of each reference data point of each data point to be denoised; taking the time to be processed and each historical time as a target time, taking each data point in point cloud initial data of the target time as a target data point, taking each data point in a preset surrounding area of the target data point as each area point, and acquiring object characteristic values of the target data points in the preset surrounding area of the target data point according to the difference between the reflection intensity values of the target data point and the area points and the distribution of the area points; acquiring a stable value of the reference data point according to the object characteristic value of the reference data point and the object characteristic value relation of the data points in the point cloud initial data of all the historical moments; acquiring a data participation value of the reference data point according to the stable values of the reference data point and the region point and the distance between the reference data point and the region point; and adjusting the initial weight according to the data participation value, and acquiring the adjusted weight of the reference data point.
Since the specific implementation process of steps S1 to S2 is already described in detail in the above-mentioned method for precisely positioning a robot tool based on computer vision, no further description is given.
The beneficial effects of the embodiment of the invention include: the method comprises the steps of firstly obtaining a point cloud initial data set of a robot tool; and carrying out three-dimensional Gaussian filtering on the point cloud initial data at the moment to be processed by utilizing a three-dimensional Gaussian filtering core according to the point cloud initial data at the moment to be processed and the point cloud initial data at each historical moment in the historical database, and obtaining point cloud filtering data at the moment to be processed. According to the embodiment of the invention, the weight of the reference data point is optimized, so that the unstable influence of noise is reduced, meanwhile, the data boundary structure characteristics are reserved, and the data denoising result is improved.
It should be noted that: the sequence of the embodiments of the present invention is only for description, and does not represent the advantages and disadvantages of the embodiments. The processes depicted in the accompanying drawings do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments.

Claims (10)

1. A method for accurately positioning a robotic tool based on computer vision, the method comprising the steps of:
acquiring a point cloud initial data set of a robot tool; the point cloud initial data set comprises point cloud initial data of the moment to be processed and point cloud initial data of each historical moment in a historical database; each data point in the point cloud initial data set has a corresponding reflection intensity value;
According to point cloud initial data of a moment to be processed and point cloud initial data of each historical moment in a historical database, checking the point cloud initial data of the moment to be processed by utilizing three-dimensional Gaussian filtering to carry out three-dimensional Gaussian filtering, and obtaining point cloud filtering data of the moment to be processed;
The process for acquiring the point cloud filtering data at the moment to be processed comprises the following steps: taking each data point in the point cloud initial data at the moment to be processed as a data point to be denoised, and acquiring the initial weight of each reference data point in the process of determining the weight of each reference data point of each data point to be denoised; taking the time to be processed and each historical time as a target time, taking each data point in point cloud initial data of the target time as a target data point, taking each data point in a preset surrounding area of the target data point as each area point, and acquiring object characteristic values of the target data points in the preset surrounding area of the target data point according to the difference between the reflection intensity values of the target data point and the area points and the distribution of the area points; acquiring a stable value of a reference data point according to the object characteristic value of the reference data point and the object characteristic value relation of the data points in the point cloud initial data of all the historical moments; acquiring a data participation value of the reference data point according to the stable values of the reference data point and the regional point and the distance between the reference data point and the regional point; adjusting the initial weight according to the data participation value to obtain an adjusted weight of a reference data point;
And acquiring a positioning result of the robot tool at the moment to be processed according to the point cloud filtering data at the moment to be processed.
2. The method for precisely positioning a robot tool based on computer vision according to claim 1, wherein the method for acquiring the characteristic value of the object comprises the steps of:
obtaining an object feature value according to an object feature value formula, wherein the object feature value formula comprises:
; wherein T (m) is the object feature value of the mth of the target data points; f m is the reflected intensity value of the mth said target data point; f z is the reflection intensity value of the z-th region point in the preset surrounding region of the mth target data point; s is the total number of all the area points in the preset surrounding area of the mth target data point; delta z is the variance of the distance between the z-th region point and all its surrounding points in the preset surrounding range of the z-th region point; and I is an absolute value symbol.
3. The method for precisely positioning a robot tool based on computer vision according to claim 1, wherein the method for obtaining the stable value comprises:
Determining the corresponding best matching data point of the reference data point in the point cloud initial data of each historical moment according to the position relation between the point cloud initial data of the moment to be processed and the point cloud initial data of the historical moment;
obtaining the stable value according to a stable value formula, wherein the stable value formula comprises:
; wherein D i,k is the stable value of the reference data point i at the kth historical moment; t (i, O) is the object feature value of the reference data point i in the point cloud initial data O at the moment to be processed; i * is the best matching data point corresponding to the reference data point i in the point cloud initial data of the kth historical moment; t (i *, k) is the object feature value of the corresponding best matching data i * in the point cloud initial data of the reference data point i at the kth historical moment; sinc () is a sine function;
A maximum value is determined among the stable values at all historic times for each reference data point and is determined as the final stable value for each reference data point.
4. A method for precisely locating a robotic tool based on computer vision as defined in claim 3, wherein said method for obtaining said best matching data points comprises:
Determining each data plane corresponding to the point cloud initial data at the target moment, wherein each data plane is perpendicular to a Z axis, and each data point in the point cloud initial data at the target moment is positioned on each data plane;
Performing straight line fitting on the walking track of the robot to obtain a walking fitting straight line; determining each dividing straight line corresponding to each data plane, wherein the extending direction of each dividing straight line is the same as the extending direction of the walking fitting straight line, and each data point in the point cloud initial data corresponding to each data plane is positioned on each dividing straight line;
Determining the starting point of each data plane, sequentially counting the distance between each data point and the starting point on each dividing straight line on each data plane, and performing curve fitting on all the data points as the characteristic distance of each data point to obtain fitting curves of all the data points on each dividing straight line; the abscissa of the fitted curve is the characteristic distance, and the ordinate of the fitted curve is the reflection intensity value;
Taking a fitting curve of the reference data points as a reference curve, and taking a data plane with the same Z-axis height as the data plane of the reference curve at the historical moment as a reference control plane of the historical moment;
Matching all the fitting curves in the reference control plane with the reference curves by using a DTW algorithm to obtain an optimal matching curve of the reference curves corresponding to the historical moment; and taking the matching point of the best matching curve corresponding to the reference data point at the historical moment as the best matching data point corresponding to the reference data point at the historical moment.
5. The method for precisely positioning a robot tool based on computer vision according to claim 1, wherein the method for acquiring the data participation value comprises the steps of:
acquiring the data participation value according to a data participation value formula, wherein the data participation value formula comprises:
w i=norm[Di×(Dz×Lz)max ]; wherein w i is the data participation value of reference data point i; d i is the stable value of reference data point i; d z is the stable value of the z-th region point in the preset surrounding region of the reference data point i; l z is the Euclidean distance between the reference data point i and the z-th region point; (D z×Lz)max is the maximum value in the correspondence (D z×Lz) of all the region points in the preset surrounding region of the reference data point i; norm () is the normalization function.
6. The method for precisely positioning a robot tool based on computer vision according to claim 1, wherein the method for acquiring the adjusted weight comprises:
acquiring the adjusted weight of the reference data point according to the data participation value and the initial weight of the reference data point; the initial weight and the adjusted weight are in positive correlation; the data participation value and the adjusted weight are positively correlated.
7. The method for precisely positioning a robot tool based on computer vision according to claim 1, wherein the method for acquiring the positioning result of the robot tool at the time to be processed comprises the following steps:
Extracting each descriptor according to point cloud filtering data at the moment to be processed, and matching the descriptor with the descriptor in the tool model library to obtain a matching result; and further, according to the matching result, acquiring the positioning result of the robot tool at the moment to be processed.
8. The method for precisely positioning a robot tool based on computer vision according to claim 1, wherein the method for acquiring the initial weight comprises:
Based on a three-dimensional Gaussian filtering method, initial weights of the reference data points are obtained according to the distance between the reference data points and the data points to be denoised.
9. The method for precisely positioning a robot tool based on computer vision according to claim 1, wherein the method for acquiring the preset surrounding area comprises the steps of:
The preset surrounding area is a rectangular window which is built by taking a target data point as the center of the rectangular window and taking a preset size as the side length of the rectangular window.
10. The method for precisely positioning a robot tool based on computer vision according to claim 1, wherein the method for acquiring the initial data of the point cloud comprises the following steps:
and transmitting uniform points to the robot tool through the laser to sample, and acquiring point cloud initial data of the robot tool.
CN202410255141.1A 2024-03-06 2024-03-06 Method for accurately positioning robot tool based on computer vision Active CN117830143B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410255141.1A CN117830143B (en) 2024-03-06 2024-03-06 Method for accurately positioning robot tool based on computer vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410255141.1A CN117830143B (en) 2024-03-06 2024-03-06 Method for accurately positioning robot tool based on computer vision

Publications (2)

Publication Number Publication Date
CN117830143A CN117830143A (en) 2024-04-05
CN117830143B true CN117830143B (en) 2024-05-03

Family

ID=90513929

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410255141.1A Active CN117830143B (en) 2024-03-06 2024-03-06 Method for accurately positioning robot tool based on computer vision

Country Status (1)

Country Link
CN (1) CN117830143B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116088503A (en) * 2022-12-16 2023-05-09 深圳市普渡科技有限公司 Dynamic obstacle detection method and robot
WO2023093515A1 (en) * 2021-11-29 2023-06-01 珠海一微半导体股份有限公司 Positioning system and positioning method based on sector depth camera
CN117250647A (en) * 2022-06-09 2023-12-19 腾讯科技(深圳)有限公司 Positioning method, device, equipment and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023093515A1 (en) * 2021-11-29 2023-06-01 珠海一微半导体股份有限公司 Positioning system and positioning method based on sector depth camera
CN117250647A (en) * 2022-06-09 2023-12-19 腾讯科技(深圳)有限公司 Positioning method, device, equipment and storage medium
CN116088503A (en) * 2022-12-16 2023-05-09 深圳市普渡科技有限公司 Dynamic obstacle detection method and robot

Also Published As

Publication number Publication date
CN117830143A (en) 2024-04-05

Similar Documents

Publication Publication Date Title
CN108171748B (en) Visual identification and positioning method for intelligent robot grabbing application
EP1477934A2 (en) Image processing apparatus
CN113470090A (en) Multi-solid-state laser radar external reference calibration method based on SIFT-SHOT characteristics
CN107240130B (en) Remote sensing image registration method, device and system
CN113781561B (en) Target pose estimation method based on self-adaptive Gaussian weight quick point feature histogram
US20210358160A1 (en) Method and system for determining plant leaf surface roughness
CN111598172B (en) Dynamic target grabbing gesture rapid detection method based on heterogeneous depth network fusion
CN115272616A (en) Indoor scene three-dimensional reconstruction method, system, device and storage medium
CN108519075B (en) Space multi-target pose measurement method
CN115601407A (en) Infrared and visible light image registration method
CN113469195B (en) Target identification method based on self-adaptive color quick point feature histogram
CN117830143B (en) Method for accurately positioning robot tool based on computer vision
CN113021355B (en) Agricultural robot operation method for predicting sheltered crop picking point
Olson Adaptive-scale filtering and feature detection using range data
CN109635692B (en) Scene re-identification method based on ultrasonic sensor
CN111768423A (en) Automatic fiber angle measuring method based on image recognition
CN114863189B (en) Intelligent image identification method based on big data
Harati et al. A new approach to segmentation of 2D range scans into linear regions
CN114998624A (en) Image searching method and device
CN111626325B (en) Feature-based image matching method
CN113838051B (en) Robot closed-loop detection method based on three-dimensional point cloud
CN113316080A (en) Indoor positioning method based on Wi-Fi and image fusion fingerprint
CN117786439B (en) Visual intelligent navigation system of medical carrying robot
Jamzuri et al. Object Detection and Pose Estimation using Rotatable Object Detector DRBox-v2 for Bin-Picking Robot
CN112785711B (en) Insulator creepage distance detection method and detection system based on three-dimensional reconstruction

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant