CN109448057B - Safety positioning method based on three-dimensional point cloud - Google Patents
Safety positioning method based on three-dimensional point cloud
- Publication number
- CN109448057B (application CN201811298254.0A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods (G — Physics; G06 — Computing; G06T — Image data processing or generation, in general; G06T7/00 — Image analysis; G06T7/70 — Determining position or orientation of objects or cameras)
- G06T7/521 — Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light (G06T7/50 — Depth or shape recovery)
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Optics & Photonics (AREA)
- Length Measuring Devices By Optical Means (AREA)
Abstract
The invention discloses a safety positioning method based on a three-dimensional point cloud, comprising the following steps: S1, three-dimensional point cloud positioning: establish the real coordinate system in which the object to be detected is located and convert the three-dimensional point cloud into that coordinate system for representation; S2, limiting the three-dimensional point cloud: determine the position of the three-dimensional point cloud within a safety range according to the shape and coordinates of the object to be detected; S3, filtering the three-dimensional point cloud: further smooth the three-dimensional point cloud within the safety range to eliminate external light interference. In this positioning method, a safety range is set for the point cloud in real coordinates and filtering is applied, improving both the precision of the point cloud data and the safety of three-dimensional point cloud positioning.
Description
Technical Field
The invention relates to the technical field of machine vision positioning, in particular to a safety positioning method based on three-dimensional point cloud.
Background
In industrial automation, manipulators are widely used to move, pick, and place workpieces, and for manipulator control it is essential to position materials and judge whether material is present. In the prior art, one machine-vision positioning method attaches label information to the object to be grasped and identifies and positions the label through machine vision to drive the manipulator; this method requires manual participation and reduces working efficiency. Another method collects two-dimensional images of the object to be grasped and performs target identification and positioning by extracting and analyzing image features; its algorithm is complex and slow to run, it is sensitive to the surface shape of the object, and different algorithms must be run to extract features of differently shaped objects, seriously affecting production efficiency. In industrial production, the objects to be grasped are often randomly stacked workpieces with no predetermined grasping order; the prior-art visual positioning methods are complex and demanding to apply, safety limits are rarely mentioned in them, and collisions between the manipulator and its surroundings, which shorten the manipulator's service life, cannot be effectively avoided.
Disclosure of Invention
To overcome the problems of the prior-art visual positioning methods, namely the large amount of calculation, the demanding application requirements, and the absence of a safety limit, the invention provides a safety positioning method based on a three-dimensional point cloud that achieves safe positioning of randomly stacked objects to be detected and improves the grasping efficiency of the manipulator.
The invention provides a safety positioning method based on three-dimensional point cloud, which comprises the following steps:
s1, three-dimensional point cloud positioning: establishing a real coordinate system where an object to be detected is located, and converting the three-dimensional point cloud into the real coordinate system for representation;
s2, limiting three-dimensional point cloud: determining the position of the three-dimensional point cloud within a safety range according to the shape and the coordinates of the object to be detected;
s3, filtering the three-dimensional point cloud: and further smoothing the three-dimensional point cloud in the safety range to eliminate the interference of external light.
In a preferred mode of the safety positioning method provided by the invention, step S1 further comprises the following steps:
S11, three-dimensional point cloud acquisition: the three-dimensional point cloud is acquired using the laser triangulation principle, with a camera and a laser generator at fixed relative positions combined into a shooting system;
S12, establishing a real coordinate system: the center position of the grasped object, projected onto the plane in which the image acquisition unit is located, is taken as the origin of the real coordinate system; the direction from the origin toward the object to be detected is the Z direction; the direction in which the shooting system moves while shooting is the X direction; and the Y direction is determined by the right-hand rule. The plane of the shooting system is the plane of the highest point of the object to be detected;
S13, point cloud coordinate mapping: the number of imaging images of the object to be detected captured by the shooting system is N and the number of resolution lines of a single image is M, so that synthesizing the imaging images in shooting order yields a two-dimensional matrix A of M rows and N columns:

A = (a_mn), m = 1, …, M; n = 1, …, N

wherein:
A is a two-dimensional matrix carrying height information;
a_mn is the depth value of the point in row m, column n of the two-dimensional matrix A;
the point in row m, column n of the two-dimensional matrix A is expressed in the real coordinate system in terms of the following quantities:
wherein:
a_mn is the depth value of row m, column n of the two-dimensional matrix A;
x is the distance from the starting point to the ending point;
N is the number of imaging images of the object to be detected;
M is the number of resolution lines of a single image;
k is the constant coefficient in the imaging size relation;
the imaging size relationship is:
l=ksh
wherein:
l is the actual length;
s is the number of pixel points occupied by the imaging length;
k is a constant coefficient;
h is the shooting height;
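The imaging size relation l = k·s·h can be sketched in code. The function names and the one-shot calibration procedure below are illustrative assumptions, not part of the patent:

```python
# Sketch of the imaging size relation l = k * s * h: actual length l from the
# pixel count s occupied by the imaged length, the shooting height h, and a
# constant coefficient k. Function names are illustrative assumptions.

def actual_length(s_pixels: float, h: float, k: float) -> float:
    """Real-world length of a feature that spans s_pixels at height h."""
    return k * s_pixels * h

def calibrate_k(known_length: float, s_pixels: float, h: float) -> float:
    """Recover k by imaging an object of known length once."""
    return known_length / (s_pixels * h)

k = calibrate_k(known_length=100.0, s_pixels=200.0, h=500.0)
print(actual_length(200.0, 500.0, k))  # recovers the calibration length
```

Once k is calibrated at one height, the same relation converts pixel counts to real lengths at any shooting height.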
Positioning with the three-dimensional point cloud of the object to be detected avoids the two-dimensional image feature-extraction process, applies to objects of different shapes, simplifies the algorithm, and offers broad applicability.
In a preferred mode, the method for determining the center of the object to be detected is as follows:
a circumscribed rectangle is drawn around the projection of the object to be detected onto the XY plane, and the intersection point of the rectangle's diagonals is the center of the object. Determining the center through the circumscribed rectangle captures the maximum edge extent of the object and provides the basis for the subsequent safety limit on the three-dimensional point cloud.
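A minimal sketch of this center determination, assuming an axis-aligned circumscribed (bounding) rectangle over the XY projection of the point cloud; the function name is illustrative:

```python
import numpy as np

def object_center(points_xy: np.ndarray) -> tuple:
    """points_xy: (P, 2) array of XY projections of the point cloud.
    The circumscribed (axis-aligned bounding) rectangle's diagonals
    intersect at its midpoint, taken as the object center."""
    x_min, y_min = points_xy.min(axis=0)
    x_max, y_max = points_xy.max(axis=0)
    return (float(x_min + x_max) / 2.0, float(y_min + y_max) / 2.0)

pts = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 2.0], [1.0, 3.0]])
print(object_center(pts))  # (2.0, 1.5)
```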
In a preferred mode, when the object to be detected is positioned and moved according to the three-dimensional point cloud, the three-dimensional point cloud is safely limited in order to avoid collision with obstacles around the object. With the point of row m, column n of the two-dimensional matrix A represented in the real coordinate system, the points within the safety range satisfy the following relations:
wherein:
a is the projection distance of the circumscribed rectangle in the X direction;
b is the projection distance of the circumscribed rectangle in the Y direction;
h1 is the minimum coordinate of the object to be detected in the Z direction;
h2 is the maximum coordinate of the object to be detected in the Z direction;
a_mn is the depth value of row m, column n of the two-dimensional matrix A;
x is the distance from the starting point to the ending point;
N is the number of imaging images of the object to be detected;
M is the number of resolution lines of a single image;
k is the constant coefficient in the imaging size relation;
Limiting the three-dimensional point cloud narrows the displayed point cloud range and keeps the point cloud within the safety range, so that the manipulator avoids collision with surrounding obstacles while grasping the object to be detected.
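The safety limit of step S2 can be sketched as a boolean mask over matrix A. The per-index x and y coordinate formulas below are assumptions (column index scaled to the scan distance, row index scaled by k·h), since the patent's exact expressions are reproduced only as images:

```python
import numpy as np

def safety_mask(A, X, k, h, a, b, h1, h2):
    """Keep only points inside the box [-a/2, a/2] x [-b/2, b/2] x [h1, h2].
    The per-index coordinate formulas are illustrative assumptions."""
    M, N = A.shape
    m_idx, n_idx = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
    x = (n_idx / N) * X - X / 2.0   # assumed X coordinate of column n
    y = k * h * (m_idx - M / 2.0)   # assumed Y coordinate of row m
    return (np.abs(x) <= a / 2.0) & (np.abs(y) <= b / 2.0) \
        & (A >= h1) & (A <= h2)

A = np.arange(16.0).reshape(4, 4)   # toy depth matrix
mask = safety_mask(A, X=4.0, k=1.0, h=1.0, a=4.0, b=4.0, h1=0.0, h2=10.0)
print(int(mask.sum()))  # 11 depth values fall inside the safety box
```

Points outside the mask can then be dropped or replaced before display, as described in step S3.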
In a preferred mode of the three-dimensional point cloud-based safe positioning method provided by the invention, the specific operation of step S3 is as follows:
a two-dimensional matrix B is established on the basis of the two-dimensional matrix A; when the element b_mn in row m, column n of B satisfies the filter-point condition, b_mn is assigned the value |h2| + |h1|, thereby filtering out the point; otherwise b_mn is assigned the value of the corresponding element a_mn in row m, column n of the two-dimensional matrix A.
In a preferred mode, satisfying the filter-point condition in step S3 means that the element b_mn in row m, column n of the two-dimensional matrix B lies on the boundary of B, i.e. m = 1 or m = M, or n = 1 or n = N; or that the height difference between b_mn and any of its 8 surrounding elements is greater than one fifth of the Z-axis safety range |h2 - h1|, i.e. |b_mn - b_(m+i)(n+j)| > |h2 - h1| / 5 for some i, j in {-1, 0, 1}, (i, j) != (0, 0).
Filtering the three-dimensional point cloud removes edge points and outliers from the point cloud, improving the accuracy of the three-dimensional point cloud data and avoiding interference from external light.
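Under the stated conditions, the S3 filter can be sketched as follows; this is a direct, unoptimized reading of the boundary and 8-neighbour rules, not the patent's own implementation:

```python
import numpy as np

def filter_point_cloud(A: np.ndarray, h1: float, h2: float) -> np.ndarray:
    """Build matrix B from A: boundary points and points whose height differs
    from any 8-neighbour by more than |h2 - h1| / 5 get the sentinel
    |h2| + |h1|; all other points keep their depth value."""
    M, N = A.shape
    B = A.copy()
    sentinel = abs(h2) + abs(h1)
    threshold = abs(h2 - h1) / 5.0
    B[0, :] = B[-1, :] = sentinel        # boundary rows (m = 1 or m = M)
    B[:, 0] = B[:, -1] = sentinel        # boundary columns (n = 1 or n = N)
    for m in range(1, M - 1):
        for n in range(1, N - 1):
            window = A[m - 1:m + 2, n - 1:n + 2]
            if np.max(np.abs(window - A[m, n])) > threshold:
                B[m, n] = sentinel       # outlier relative to 8 neighbours
    return B
```

With h1 = 0 and h2 = 10, for example, the sentinel is 10 and the neighbour threshold is 2, matching the one-fifth rule.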
The invention performs positioning through the three-dimensional point cloud of the object to be detected and is unaffected by the object's surface shape and size; the point cloud algorithm is simplified, the grasping efficiency for randomly stacked objects grasped out of order is effectively improved, and the three-dimensional point cloud is limited and filtered to avoid external light interference and grasping-collision faults.
Drawings
FIG. 1 is a flow chart of a safety positioning method based on three-dimensional point cloud;
FIG. 2 is a flowchart of a step S1 of a safety positioning method based on three-dimensional point cloud;
FIG. 3 is a schematic diagram of relative positions of a camera and a laser generator of a safety positioning method based on three-dimensional point cloud;
FIG. 4 is a schematic diagram of a real coordinate system;
Detailed Description
Example 1
As shown in fig. 1, the invention provides a safe positioning method based on three-dimensional point cloud, comprising the following steps:
s1, three-dimensional point cloud positioning: establishing a real coordinate system where an object to be detected is located, and converting the three-dimensional point cloud into the real coordinate system for representation; as shown in fig. 2, step S1 further includes the following steps:
S11, three-dimensional point cloud acquisition: the three-dimensional point cloud is acquired using the laser triangulation principle, with a camera and a laser generator at fixed relative positions combined into a shooting system; the relative positions of the camera and the laser generator are shown in FIG. 3. The shooting system is located at one side of the longitudinal edge of the object to be detected and moves horizontally along its transverse edge while continuously shooting at a fixed frequency until image acquisition of the complete surface of the object is finished. The shooting system feeds the captured pictures into LabVIEW in sequence for distortion correction; the pixel matrix of each corrected picture is extracted; the matrix is binarized with a threshold (the threshold parameter is set according to the brightness) and denoised; the points with pixel value 255 in each row are extracted and sorted, and the maximum column number is substituted into the formula of S1 to obtain the actual distance between each row of points and the camera. According to the laser triangulation principle, the shooting height and the imaging point of the current point on the camera chip satisfy the following geometrical relationship:
wherein:
theta is an included angle between a connecting line of the current point and the imaging point and the laser line;
alpha is the included angle between the camera lens axis and the laser line;
b is the horizontal distance from the center of the camera lens to the laser line;
a is half the size of the camera chip;
f is the focal length of the camera;
h is the shooting height;
x is the distance from the imaging point to the chip boundary;
wherein the distance x satisfies the following relationship:

x = ut / z

wherein:
u is the chip boundary length;
t is the number of pixels from the imaging point to the image boundary;
z is the total number of pixels of the image in the length direction.
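Assuming the per-pixel scaling implied by the variables above (a chip edge of length u spanning z image pixels), the distance x can be sketched as:

```python
def imaging_point_distance(u: float, t: int, z: int) -> float:
    """Distance from the imaging point to the chip boundary: t pixels,
    each covering u / z of the chip edge (assumed relation x = u * t / z)."""
    return u * t / z

print(imaging_point_distance(u=10.0, t=250, z=1000))  # 2.5
```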
A plurality of pictures are taken, the known height data are substituted into the geometrical relationship to derive the values of the constants b and α, and the relation giving the shooting height h in terms of x is then calculated as follows:
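The per-frame laser-line extraction described in S11 (binarize by brightness, then take the maximum column of value-255 pixels in each row) can be sketched as follows; the default threshold is an assumption:

```python
import numpy as np

def laser_line_columns(image: np.ndarray, threshold: int = 128) -> np.ndarray:
    """image: (rows, cols) grayscale frame containing the laser line.
    Returns the largest bright-pixel column index per row, -1 if none."""
    binary = np.where(image >= threshold, 255, 0)   # binarize by brightness
    cols = np.full(image.shape[0], -1)
    for r in range(image.shape[0]):
        hits = np.nonzero(binary[r] == 255)[0]
        if hits.size:
            cols[r] = hits.max()    # "maximum column number" per the text
    return cols

frame = np.zeros((3, 5), dtype=int)
frame[0, 2] = frame[0, 4] = 200     # two bright pixels in row 0
frame[2, 1] = 255
print(laser_line_columns(frame))    # row-wise laser columns: 4, none, 1
```

Each returned column index would then be converted to a chip distance x and a shooting height h through the triangulation relations above.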
S12, establishing a real coordinate system: the center position of the grasped object, projected onto the plane in which the image acquisition unit is located, is taken as the origin of the real coordinate system; the plane of the shooting system is the plane of the highest point of the object to be detected; the direction from the origin toward the object to be detected is the Z direction; the direction in which the shooting system moves while shooting is the X direction; and the Y direction is determined by the right-hand rule, i.e. the direction from the origin toward the shooting system is the Y direction. The processed surface imaging images are synthesized in shooting order to generate a complete three-dimensional point cloud image of the surface of the object to be detected.
S13, point cloud coordinate mapping: the number of imaging images of the object to be detected captured by the shooting system is N and the number of resolution lines of a single image is M, so that synthesizing the imaging images in shooting order yields a two-dimensional matrix A of M rows and N columns:

A = (a_mn), m = 1, …, M; n = 1, …, N

wherein:
A is a two-dimensional matrix carrying height information;
a_mn is the depth value of the point in row m, column n of the two-dimensional matrix A;
the point in row m, column n of the two-dimensional matrix A is expressed in the real coordinate system in terms of the following quantities:
wherein:
a_mn is the depth value of row m, column n of the two-dimensional matrix A;
x is the distance from the starting point to the ending point;
N is the number of imaging images of the object to be detected;
M is the number of resolution lines of a single image;
k is the constant coefficient in the imaging size relation;
the imaging size relationship is expressed as:
l=ksh
wherein:
l is the actual length;
s is the number of pixel points occupied by the imaging length;
k is a constant coefficient;
h is the shooting height;
The method for determining the center of the object to be detected is as follows: a circumscribed rectangle is drawn around the projection of the object onto the XY plane, and the intersection point of the rectangle's diagonals is the center of the object to be detected;
s2, limiting three-dimensional point cloud: determining the position of the three-dimensional point cloud within a safety range according to the shape and the coordinates of the object to be detected; the point within the safety range satisfies the following relationship:
wherein:
a is the projection distance of the circumscribed rectangle in the X direction;
b is the projection distance of the circumscribed rectangle in the Y direction;
h1 is the minimum coordinate of the object to be detected in the Z direction;
h2 is the maximum coordinate of the object to be detected in the Z direction;
a_mn is the depth value of row m, column n of the two-dimensional matrix A;
x is the distance from the starting point to the ending point;
N is the number of imaging images of the object to be detected;
M is the number of resolution lines of a single image;
k is the constant coefficient in the imaging size relation;
S3, filtering the three-dimensional point cloud: the three-dimensional point cloud within the safety range is further smoothed to eliminate interference from external light. The specific operation is as follows: a two-dimensional matrix B is established on the basis of the two-dimensional matrix A; when the element b_mn in row m, column n of B satisfies the filter-point condition, b_mn is assigned the value |h2| + |h1|, or the maximum value of any shooting height, so that the point is filtered out; otherwise b_mn is assigned the value of the corresponding element a_mn of A. Satisfying the filter-point condition means that b_mn lies on the boundary of the two-dimensional matrix B, i.e. m = 1 or m = M or n = 1 or n = N; or that the height difference between b_mn and any of its 8 surrounding elements is greater than one fifth of the Z-axis safety range |h2 - h1|.
In this embodiment, a brake disc placed in a material frame is the grasping object, and the real coordinate system established with the center of the upper plane of the material frame as the origin is shown in FIG. 4, where a and b are the ranges of the material frame in the length and width directions respectively, and h1 and h2 are the vertical distances from the upper and lower edges of the material frame to the origin of the real coordinate system. During point cloud filtering, when an element b_mn in matrix B satisfies the filtering condition, b_mn is set to 1500, achieving the filtering; in other cases the value of a_mn is assigned to b_mn. The filtering condition is that the element lies on the matrix boundary, or that the height difference between b_mn and any of its 8 surrounding elements is greater than 50. Different depth values of the matrix elements are mapped to different colors, yielding a color three-dimensional point cloud image. After the safety limit, the ground color of the point cloud image turns black relative to the original, and points that cannot be safely grasped are filtered out, so the outline of the brake disc in the point cloud image is clearer and more prominent; external light interference, which would reduce the accuracy of point cloud data acquisition, is removed, and the filtered point cloud image is smoother.
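The embodiment's depth-to-color rendering can be sketched as below. The blue-to-red ramp and the handling of the sentinel value 1500 (filtered points drawn black, matching the described ground color) are assumptions, since the patent does not specify a colormap:

```python
import numpy as np

def depth_to_rgb(B: np.ndarray, sentinel: float = 1500.0) -> np.ndarray:
    """Map each depth value in the filtered matrix B to an RGB color;
    filtered points (== sentinel) stay black."""
    rgb = np.zeros(B.shape + (3,), dtype=np.uint8)
    valid = B != sentinel
    if valid.any():
        lo, hi = B[valid].min(), B[valid].max()
        span = (hi - lo) or 1.0                 # avoid division by zero
        norm = ((B - lo) / span).clip(0.0, 1.0)
        rgb[valid, 0] = (255 * norm[valid]).astype(np.uint8)        # red channel
        rgb[valid, 2] = (255 * (1 - norm[valid])).astype(np.uint8)  # blue channel
    return rgb
```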
The foregoing description is intended to be illustrative rather than limiting, and it will be appreciated by those skilled in the art that various modifications, changes, and equivalents may be made without departing from the spirit and scope of the invention as defined in the appended claims.
Claims (5)
1. A safety positioning method based on three-dimensional point cloud is characterized in that: the method comprises the following steps:
s1, three-dimensional point cloud positioning: establishing a real coordinate system where an object to be detected is located, and converting the three-dimensional point cloud into the real coordinate system for representation;
S2, limiting three-dimensional point cloud: determining the position of the three-dimensional point cloud within a safety range according to the shape and coordinates of the object to be detected; when the object to be detected is moved according to the three-dimensional point cloud positioning, the three-dimensional point cloud is safely limited in order to avoid collision with obstacles around the object; with the point of row m, column n of the two-dimensional matrix A represented in the real coordinate system, the points within the safety range satisfy the following relations:
wherein:
a is the projection distance of the circumscribed rectangle in the X direction;
b is the projection distance of the circumscribed rectangle in the Y direction;
h1 is the minimum coordinate of the object to be detected in the Z direction;
h2 is the maximum coordinate of the object to be detected in the Z direction;
a_mn is the depth value of row m, column n of the two-dimensional matrix A;
x is the distance from the starting point to the ending point;
N is the number of imaging images of the object to be detected;
M is the number of resolution lines of a single image;
k is the constant coefficient in the imaging size relation;
s3, filtering the three-dimensional point cloud: further smoothing the three-dimensional point cloud in a safety range to eliminate external light interference;
the step S1 further includes the steps of:
s11, three-dimensional point cloud acquisition: acquiring three-dimensional point cloud by using a laser triangulation principle, and combining a camera with a fixed relative position and a laser generator into a shooting system;
s12, establishing a real coordinate system: taking the central position of the object to be measured on the plane where the shooting system is located as the origin of the real coordinate system, taking the direction in which the origin of the coordinate system points at the object to be measured as the Z direction, taking the moving shooting direction of the shooting system as the X direction, and determining the Y direction according to the right-hand rule;
S13, point cloud coordinate mapping: the number of imaging images of the object to be detected captured by the shooting system is N and the number of resolution lines of a single image is M, so that synthesizing the imaging images in shooting order yields a two-dimensional matrix A of M rows and N columns:
A = (a_mn), m = 1, …, M; n = 1, …, N
wherein:
A is a two-dimensional matrix carrying height information;
the point in row m, column n of the two-dimensional matrix A has its expression in the real coordinate system:
2. the three-dimensional point cloud-based safe positioning method according to claim 1, wherein: the method for determining the center of the object to be detected comprises the following steps:
and drawing a circumscribed rectangle on the projection of the object to be detected on the XY plane, wherein the intersection point of the diagonals of the circumscribed rectangle is the center of the object to be detected.
3. The three-dimensional point cloud-based safe positioning method according to claim 1, wherein: the imaging size relationship is expressed as:
l=ksh
wherein:
l is the actual length;
s is the number of pixel points occupied by the imaging length;
h is the shooting height.
4. The three-dimensional point cloud-based safe positioning method according to claim 1, wherein: the specific operation method of step S3 is as follows:
establishing a two-dimensional matrix B on the basis of the two-dimensional matrix A; when the element b_mn in row m, column n of B satisfies the filter-point condition, b_mn is assigned the value |h2| + |h1|; otherwise b_mn is assigned the value of the corresponding element a_mn in row m, column n of the two-dimensional matrix A.
5. The three-dimensional point cloud-based safe positioning method according to claim 4, wherein: satisfying the filter-point condition in step S3 means that the element b_mn in row m, column n of the two-dimensional matrix B lies on the boundary of B, i.e. m = 1 or m = M or n = 1 or n = N; or that the height difference between b_mn and any of its 8 surrounding elements is greater than one fifth of the Z-axis safety range |h2 - h1|, i.e. |b_mn - b_(m+i)(n+j)| > |h2 - h1| / 5 for some i, j in {-1, 0, 1}, (i, j) != (0, 0).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811298254.0A CN109448057B (en) | 2018-11-02 | 2018-11-02 | Safety positioning method based on three-dimensional point cloud |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109448057A CN109448057A (en) | 2019-03-08 |
CN109448057B (en) | 2022-02-25
Family
ID=65550686
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811298254.0A Active CN109448057B (en) | 2018-11-02 | 2018-11-02 | Safety positioning method based on three-dimensional point cloud |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109448057B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102779280A (en) * | 2012-06-19 | 2012-11-14 | 武汉大学 | Traffic information extraction method based on laser sensor |
CN106558096A (en) * | 2015-09-30 | 2017-04-05 | 株式会社日立大厦系统 | The modeling method of the model building device, program and cage guide of three dimensional point cloud |
CN107016697A (en) * | 2017-04-11 | 2017-08-04 | 杭州光珀智能科技有限公司 | A kind of height measurement method and device |
CN108171748A (en) * | 2018-01-23 | 2018-06-15 | 哈工大机器人(合肥)国际创新研究院 | A kind of visual identity of object manipulator intelligent grabbing application and localization method |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE112017006909T5 (en) * | 2017-02-22 | 2019-10-10 | Faro Technologies, Inc. | THREE DIMENSIONAL PICTURES |
Non-Patent Citations (2)
- Automatic object detection in point clouds; Friederike Schwarzbach et al.; Classification Approaches and Method Comparison; 2018-09-27; pp. 1-3.
- Research on filtering of airborne LiDAR point cloud data and building-extraction technology; Guo Jia; Wanfang dissertation database; 2015-04-15; pp. 1-75.
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |
| PE01 | Entry into force of the registration of the contract for pledge of patent right | |
Denomination of invention: A secure localization method based on 3D point cloud. Granted publication date: 2022-02-25. Pledgee: Yinchuan SME Financing Guarantee Co., Ltd. Pledgor: NINGXIA JUNENG ROBOT Co., Ltd. Registration number: Y2024640000009.