CN109448057B - Safety positioning method based on three-dimensional point cloud

Safety positioning method based on three-dimensional point cloud

Info

Publication number
CN109448057B
CN109448057B (application CN201811298254.0A)
Authority
CN
China
Prior art keywords: point cloud, dimensional point, dimensional, detected, dimensional matrix
Prior art date
Legal status
Active
Application number
CN201811298254.0A
Other languages
Chinese (zh)
Other versions
CN109448057A (en)
Inventor
靳开轩
郭强
Current Assignee
Ningxia Juneng Robot Co Ltd
Original Assignee
Ningxia Juneng Robot Co Ltd
Priority date
Filing date
Publication date
Application filed by Ningxia Juneng Robot Co Ltd filed Critical Ningxia Juneng Robot Co Ltd
Priority to CN201811298254.0A
Publication of CN109448057A
Application granted
Publication of CN109448057B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/50 - Depth or shape recovery
    • G06T 7/521 - Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a safety positioning method based on a three-dimensional point cloud, which comprises the following steps. S1, three-dimensional point cloud positioning: establish a real coordinate system where the object to be detected is located, and convert the three-dimensional point cloud into that coordinate system for representation. S2, limiting the three-dimensional point cloud: determine the position of the three-dimensional point cloud within a safety range according to the shape and coordinates of the object to be detected. S3, filtering the three-dimensional point cloud: further smooth the point cloud within the safety range to eliminate interference from external light. By setting a safety range for the point cloud in real coordinates and filtering it, the positioning method improves the precision of the point cloud data and the safety of three-dimensional point cloud positioning.

Description

Safety positioning method based on three-dimensional point cloud
Technical Field
The invention relates to the technical field of machine vision positioning, in particular to a safety positioning method based on three-dimensional point cloud.
Background
In industrial automation, manipulators are widely used to move, pick, and place workpieces, and controlling a manipulator depends on locating the material and judging whether material is present. In the prior art, one machine-vision positioning method attaches label information to the object to be grabbed and identifies and locates that label to drive the manipulator; this requires manual participation and reduces working efficiency. Another method collects two-dimensional images of the objects to be grabbed and performs target identification and positioning by extracting and analyzing image features; its algorithm is complex and slow to run, it is sensitive to the surface shape of the object, and differently shaped objects require different feature-extraction algorithms, which seriously affects production efficiency. In industrial production the objects to be grabbed are often randomly stacked workpieces with no fixed grabbing order; the prior-art visual positioning methods are complex and demanding to deploy, they rarely mention safety limits, and they therefore cannot effectively avoid the collisions between the manipulator and its surroundings that shorten the manipulator's service life.
Disclosure of Invention
To overcome the problems of prior-art visual positioning methods, namely their large computation load, demanding application requirements, and lack of safety limits, the invention provides a safety positioning method based on three-dimensional point cloud that achieves safe positioning of randomly stacked objects to be detected and improves the grabbing efficiency of a manipulator.
The invention provides a safety positioning method based on three-dimensional point cloud, which comprises the following steps:
S1, three-dimensional point cloud positioning: establishing a real coordinate system where the object to be detected is located, and converting the three-dimensional point cloud into the real coordinate system for representation;
S2, limiting the three-dimensional point cloud: determining the position of the three-dimensional point cloud within a safety range according to the shape and the coordinates of the object to be detected;
S3, filtering the three-dimensional point cloud: further smoothing the three-dimensional point cloud within the safety range to eliminate interference from external light.
The invention provides a safety positioning method based on three-dimensional point cloud; as a preferred mode, step S1 further comprises the following steps:
S11, three-dimensional point cloud acquisition: acquiring the three-dimensional point cloud by the laser triangulation principle, combining a camera and a laser generator with fixed relative positions into a shooting system;
S12, establishing a real coordinate system: taking the center position of the grabbed object on the plane of the image acquisition unit as the origin of the real coordinate system, taking the direction from the origin toward the object to be detected as the Z direction and the moving shooting direction of the shooting system as the X direction, and determining the Y direction by the right-hand rule, wherein the plane of the shooting system is the plane of the highest point of the object to be detected;
s13, point cloud coordinate mapping: the number of imaging images of an object to be detected captured by the shooting system is N, the resolution line number of a single image is M, so that a two-dimensional matrix A with M rows and N columns can be obtained after the imaging images are synthesized according to the shooting sequence:
A = [ a_11  a_12  ...  a_1N
      a_21  a_22  ...  a_2N
      ...
      a_M1  a_M2  ...  a_MN ]
wherein:
a is a two-dimensional matrix with height information;
a_mn is the depth value of the point in row m, column n of the two-dimensional matrix A;
the m-th row and n-th column points in the two-dimensional matrix A have the following expressions in a real coordinate system:
[Equation image: real-coordinate expression of the point in row m, column n of matrix A, in terms of a_mn, x, N, M, and k]
wherein:
a_mn is the depth value of row m, column n of the two-dimensional matrix A;
x is the distance from the starting point to the ending point;
n is the number of imaging images of the object to be detected;
m is the number of lines of the resolution of a single image;
k is a constant coefficient in the imaging size relation;
the imaging size relationship is:
l=ksh
wherein:
l is the actual length;
s is the number of pixel points occupied by the imaging length;
k is a constant coefficient;
h is the shooting height;
the method has the advantages that the three-dimensional point cloud of the object to be detected is used for positioning, the two-dimensional image feature extraction process is avoided, the method is suitable for positioning the objects to be detected in different shapes, the algorithm is simplified, and the applicability is high.
In a preferred mode of the safety positioning method based on three-dimensional point cloud provided by the invention, the center of the object to be detected is determined as follows:
draw the circumscribed rectangle of the projection of the object to be detected on the XY plane; the intersection point of the rectangle's diagonals is the center of the object. Determining the center through the circumscribed rectangle bounds the edge range of the object to the maximum extent and provides the basis for the subsequent safety limiting of the three-dimensional point cloud.
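The circumscribed-rectangle construction can be illustrated in a few lines; this is a hedged sketch, not the patent's implementation, and the function name and the NumPy formulation are assumptions:

```python
import numpy as np

def bounding_rect_center(xy_points):
    """Axis-aligned circumscribed rectangle of an XY projection and its center.

    The center is the intersection of the rectangle's diagonals, i.e. the
    midpoint between the min and max corners of the projection.
    """
    pts = np.asarray(xy_points, dtype=float)
    lo = pts.min(axis=0)   # (x_min, y_min) corner
    hi = pts.max(axis=0)   # (x_max, y_max) corner
    center = (lo + hi) / 2.0
    return center, lo, hi

# Projection of a tilted part on the XY plane
center, lo, hi = bounding_rect_center([(1.0, 2.0), (5.0, 2.5), (4.0, 6.0), (1.5, 5.0)])
print(center)  # [3. 4.]
```

The same rectangle also yields the extents a = x_max − x_min and b = y_max − y_min that the subsequent safety limit uses.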
In a preferred mode of the safety positioning method based on three-dimensional point cloud provided by the invention, when the object to be detected is located and moved according to the three-dimensional point cloud, the point cloud is safely limited to avoid collision with obstacles around the object; with the point in row m, column n of the two-dimensional matrix A expressed in the real coordinate system, the points within the safety range satisfy the following relations:
[Equation image: safety-range inequalities on the point in row m, column n, in terms of a, b, h1, h2, a_mn, x, N, M, and k]
wherein:
a is the projection distance of the circumscribed rectangle in the X direction;
b is the projection distance of the circumscribed rectangle in the Y direction;
h1 is the lowest coordinate of the object to be measured in the Z direction;
h2 is the highest coordinate of the object to be measured in the Z direction;
a_mn is the depth value of row m, column n of the two-dimensional matrix A;
x is the distance from the starting point to the ending point;
n is the number of imaging images of the object to be detected;
m is the number of lines of the resolution of a single image;
k is a constant coefficient in the imaging size relation;
the three-dimensional point cloud is limited, the point cloud display range is reduced, the three-dimensional point cloud is positioned in a safety range, and collision between a manipulator and surrounding obstacles in the process of grabbing the object to be detected is avoided.
The invention provides a three-dimensional point cloud-based safe positioning method, and as a preferred mode, the specific operation method of the step S3 is as follows:
establish a two-dimensional matrix B on the basis of the two-dimensional matrix A; when the element b_mn in row m, column n of B satisfies the filter-point condition, b_mn is assigned the value |h2| + |h1|, so that the point is filtered out; otherwise b_mn is assigned the value of the element a_mn in row m, column n of the two-dimensional matrix A.
In a preferred mode of the safety positioning method based on three-dimensional point cloud provided by the invention, satisfying the filter-point condition in step S3 means that the element b_mn in row m, column n of the two-dimensional matrix B lies on the boundary of B, i.e. m = 1 or m = M or n = 1 or n = N; or that the height difference between b_mn and any one of its 8 surrounding elements exceeds one fifth of the Z-axis safety range |h2 − h1|, i.e.
|b_mn − b_(m+i)(n+j)| > |h2 − h1| / 5,  for some i, j ∈ {−1, 0, 1} with (i, j) ≠ (0, 0)
Filtering the three-dimensional point cloud removes its edge points and outliers, which improves the accuracy of the point cloud data and avoids interference from external light.
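A minimal sketch of the matrix-B filter just described: boundary elements, and elements whose height differs from any of their 8 neighbours by more than |h2 − h1|/5, are replaced by the sentinel |h2| + |h1|. The plain double loop and the function name are illustrative choices, not the patent's code:

```python
import numpy as np

def filter_point_cloud(A, h1, h2):
    """Build matrix B from depth matrix A according to the filter-point rule."""
    A = np.asarray(A, dtype=float)
    M, N = A.shape
    sentinel = abs(h2) + abs(h1)        # value assigned to filtered points
    thresh = abs(h2 - h1) / 5.0         # one fifth of the Z-axis safety range

    B = A.copy()
    B[0, :] = B[-1, :] = sentinel       # boundary rows are always filtered
    B[:, 0] = B[:, -1] = sentinel       # boundary columns are always filtered

    for m in range(1, M - 1):
        for n in range(1, N - 1):
            window = A[m - 1:m + 2, n - 1:n + 2]   # the element and its 8 neighbours
            if np.max(np.abs(window - A[m, n])) > thresh:
                B[m, n] = sentinel      # outlier: filter it out
    return B

B = filter_point_cloud(np.ones((4, 4)), h1=0.0, h2=10.0)
print(B[0, 0], B[1, 1])  # 10.0 1.0
```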
The invention positions through the three-dimensional point cloud of the object to be detected and is unaffected by the object's surface shape and size; the point cloud algorithm is simplified, grabbing efficiency is effectively improved when randomly stacked objects are grabbed out of order, and the point cloud is limited and filtered to avoid external light interference and grabbing-collision faults.
Drawings
FIG. 1 is a flow chart of a safety positioning method based on three-dimensional point cloud;
FIG. 2 is a flowchart of a step S1 of a safety positioning method based on three-dimensional point cloud;
FIG. 3 is a schematic diagram of relative positions of a camera and a laser generator of a safety positioning method based on three-dimensional point cloud;
FIG. 4 is a schematic diagram of a real coordinate system;
Detailed Description
Example 1
As shown in Figure 1, the invention provides a safety positioning method based on three-dimensional point cloud, comprising the following steps:
S1, three-dimensional point cloud positioning: establishing a real coordinate system where the object to be detected is located, and converting the three-dimensional point cloud into the real coordinate system for representation; as shown in Figure 2, step S1 further includes the following steps:
S11, three-dimensional point cloud acquisition: acquire the three-dimensional point cloud by the laser triangulation principle, combining a camera and a laser generator with fixed relative positions into a shooting system; the relative positions of the camera and the laser generator are shown in Figure 3. The shooting system is located on one side of the longitudinal edge of the object to be detected and moves horizontally along its transverse edge while shooting continuously at a fixed frequency, until the complete set of surface imaging images of the object is collected. The captured pictures are fed in shooting order into LabVIEW for distortion correction; the pixel matrix of each corrected picture is extracted; the matrix is binarized with a threshold (the threshold parameter must be set according to the brightness) and denoised; the points with pixel value 255 in each row are extracted and sorted, and the maximum column number is substituted into the formula of S1 to obtain the actual distance between each row of points and the camera. By the laser triangulation principle, the shooting height and the imaging point of the current point on the camera chip satisfy the following geometric relationship:
[Equation image: laser-triangulation relation among the shooting height h, the angles θ and α, the distance b, the half-chip size a, the focal length f, and the offset x]
wherein:
theta is an included angle between a connecting line of the current point and the imaging point and the laser line;
alpha is the included angle between the camera lens axis and the laser line;
b is the horizontal distance from the center of the camera lens to the laser line;
a is the 1/2 size of the camera chip;
f is the focal length of the camera;
h is the shooting height;
x is the distance from the imaging point to the chip boundary;
wherein the distance x has the following relationship:
x = u · t / z
wherein:
u is the chip boundary length;
t is the number of pixels from the imaging point to the image boundary;
z is the total number of pixels in the image in the length direction.
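The per-row laser-line extraction and the pixel-to-distance relation can be sketched as follows. Since each pixel covers u/z of the chip, x = u·t/z; taking the maximum 255-valued column per row follows the processing described above. Function names are illustrative assumptions:

```python
import numpy as np

def imaging_point_offset(t, u, z):
    """Distance x from the imaging point to the chip boundary.

    t : pixels from the imaging point to the image boundary
    u : chip boundary length (physical sensor size in that direction)
    z : total number of pixels along that direction
    """
    return u * t / z   # each pixel covers u / z of the chip

def laser_row_positions(binary_img):
    """Largest column index with pixel value 255 in each row (-1 if none)."""
    hits = (np.asarray(binary_img) == 255)
    last = hits.shape[1] - 1 - np.argmax(hits[:, ::-1], axis=1)
    return np.where(hits.any(axis=1), last, -1)

img = np.zeros((3, 5), dtype=np.uint8)
img[0, 2] = 255
img[1, 1] = img[1, 3] = 255   # two lit pixels in one row: keep the maximum column
print(laser_row_positions(img))             # [ 2  3 -1]
print(imaging_point_offset(120, 6.0, 480))  # 1.5
```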
Several pictures are taken at known heights; substituting these height data into the geometric relationship yields the values of the constants b and α, after which the shooting height h is calculated from x by the following relation:
[Equation image: relation giving the shooting height h as a function of x and the triangulation constants b, α, f, and a]
S12, establishing a real coordinate system: take the center position of the grabbed object, projected onto the plane of the image acquisition unit, as the origin of the real coordinate system; the plane of the shooting system is the plane of the highest point of the object to be detected. The direction from the origin toward the object to be detected is the Z direction, the moving shooting direction of the shooting system is the X direction, and the Y direction follows from the right-hand rule, i.e. it is the direction from the origin toward the shooting system. The processed surface imaging images are then synthesized in shooting order to generate a complete three-dimensional point cloud image of the surface of the object to be detected.
S13, point cloud coordinate mapping: the number of imaging images of an object to be detected captured by the shooting system is N, the resolution line number of a single image is M, so that a two-dimensional matrix A with M rows and N columns can be obtained after the imaging images are synthesized according to the shooting sequence:
A = [ a_11  a_12  ...  a_1N
      a_21  a_22  ...  a_2N
      ...
      a_M1  a_M2  ...  a_MN ]
wherein:
a is a two-dimensional matrix with height information;
a_mn is the depth value of the point in row m, column n of the two-dimensional matrix A;
the m-th row and n-th column points in the two-dimensional matrix A have the following expressions in a real coordinate system:
[Equation image: real-coordinate expression of the point in row m, column n of matrix A, in terms of a_mn, x, N, M, and k]
wherein:
a_mn is the depth value of row m, column n of the two-dimensional matrix A;
x is the distance from the starting point to the ending point;
n is the number of imaging images of the object to be detected;
m is the number of lines of the resolution of a single image;
k is a constant coefficient in the imaging size relation;
the imaging size relationship is expressed as:
l=ksh
wherein:
l is the actual length;
s is the number of pixel points occupied by the imaging length;
k is a constant coefficient;
h is the shooting height;
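The assembly of matrix A from the N per-image depth profiles can be sketched as below. The exact coordinate-mapping formula survives only as an equation image, so the mapping here is an assumption built from the listed quantities: x along the scan from n and the scan length X, y via the imaging size relation l = k·s·h with s = m and h = a_mn, and z as the depth a_mn itself:

```python
import numpy as np

def assemble_depth_matrix(profiles):
    """Stack N depth profiles (one per image, each of length M) into an M x N matrix A.

    profiles[n][m] is the depth on resolution line m of image n, so column n
    of A holds the profile of the n-th image in shooting order.
    """
    return np.column_stack([np.asarray(p, dtype=float) for p in profiles])

def to_real_coordinates(A, X, k):
    """Assumed mapping of matrix entries a_mn to real (x, y, z) coordinates."""
    M, N = A.shape
    m = np.arange(1, M + 1)[:, None]   # row indices, shape (M, 1)
    n = np.arange(1, N + 1)[None, :]   # column indices, shape (1, N)
    x = n * X / N * np.ones((M, 1))    # position along the scan direction
    y = k * m * A                      # via l = k * s * h with s = m, h = a_mn
    z = A                              # the depth value itself
    return x, y, z

A = assemble_depth_matrix([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
print(A.shape)  # (2, 3)
```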
the method for determining the center of the object to be measured comprises the following steps: making an externally tangent rectangle on the projection of the object to be measured on the XY plane, wherein the diagonal intersection point of the externally tangent rectangle is the center of the object to be measured;
s2, limiting three-dimensional point cloud: determining the position of the three-dimensional point cloud within a safety range according to the shape and the coordinates of the object to be detected; the point within the safety range satisfies the following relationship:
[Equation image: safety-range inequalities on the point in row m, column n, in terms of a, b, h1, h2, a_mn, x, N, M, and k]
wherein:
a is the projection distance of the circumscribed rectangle in the X direction;
b is the projection distance of the circumscribed rectangle in the Y direction;
h1 is the minimum coordinate of the object to be measured in the Z direction;
h2 is the maximum coordinate of the object to be measured in the Z direction;
a_mn is the depth value of row m, column n of the two-dimensional matrix A;
x is the distance from the starting point to the ending point;
n is the number of imaging images of the object to be detected;
m is the number of lines of the resolution of a single image;
k is a constant coefficient in the imaging size relation;
S3, filtering the three-dimensional point cloud: further smooth the three-dimensional point cloud within the safety range to eliminate interference from external light. The specific operation is as follows: establish a two-dimensional matrix B on the basis of the two-dimensional matrix A. When the element b_mn in row m, column n of B satisfies the filter-point condition, b_mn is assigned the value |h2| + |h1|, or any value exceeding the maximum shooting height, so that the point is filtered out; otherwise b_mn is assigned the value of the element a_mn in row m, column n of A. Satisfying the filter-point condition means that the element b_mn lies on the boundary of the two-dimensional matrix B, i.e. m = 1 or m = M or n = 1 or n = N, or that the height difference between b_mn and any one of its 8 surrounding elements exceeds one fifth of the Z-axis safety range |h2 − h1|, i.e.
|b_mn − b_(m+i)(n+j)| > |h2 − h1| / 5,  for some i, j ∈ {−1, 0, 1} with (i, j) ≠ (0, 0)
In this embodiment, the grasping object is a brake disc placed in a material frame, and the real coordinate system takes the center of the upper plane of the material frame as its origin, as shown in Figure 4; a and b are the extents of the material frame in the length and width directions, and h1 and h2 are the vertical distances from the upper and lower edges of the material frame to the origin of the real coordinate system, respectively. During point cloud filtering, when the element b_mn of matrix B satisfies the filter condition, b_mn is set to 1500, which filters the point out; otherwise the value of a_mn is assigned to b_mn. The filter conditions are that the element lies on the matrix boundary, or that the height difference between b_mn and any one of its 8 surrounding elements exceeds 50. Different depth values of the matrix elements are mapped to different colors, yielding a color three-dimensional point cloud image. After the safety limit, the background of the point cloud image turns black relative to the original, and the points that cannot be safely grabbed are filtered out, so the outline of the brake disc in the three-dimensional point cloud image becomes clearer and more prominent, external light interference no longer degrades the accuracy of the collected point cloud data, and the filtered point cloud image is smoother.
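The embodiment's false-color rendering ("different depth values of the matrix elements are mapped to different colors") can be sketched as a simple normalization of matrix B to palette indices; this is an illustrative assumption, not the patent's rendering code. Filtered points sit at the matrix maximum (1500 in the embodiment) and land on the top index, where they can be drawn as black background:

```python
import numpy as np

def depth_to_color_indices(B, n_colors=256):
    """Map depth values linearly to [0, n_colors - 1] palette indices."""
    B = np.asarray(B, dtype=float)
    lo, hi = B.min(), B.max()
    if hi == lo:                       # flat matrix: single color
        return np.zeros_like(B, dtype=int)
    return np.rint((B - lo) / (hi - lo) * (n_colors - 1)).astype(int)

B = np.array([[0.0, 750.0], [1500.0, 0.0]])   # 1500 = filtered sentinel
print(depth_to_color_indices(B))
```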
The foregoing description is intended to be illustrative rather than limiting, and it will be appreciated by those skilled in the art that various modifications, changes, and equivalents may be made without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (5)

1. A safety positioning method based on three-dimensional point cloud, characterized in that the method comprises the following steps:
S1, three-dimensional point cloud positioning: establishing a real coordinate system where the object to be detected is located, and converting the three-dimensional point cloud into the real coordinate system for representation;
S2, limiting the three-dimensional point cloud: determining the position of the three-dimensional point cloud within a safety range according to the shape and the coordinates of the object to be detected; when the object is moved according to the three-dimensional point cloud positioning, the point cloud is safely limited to avoid collision with obstacles around the object; with the point in row m, column n of the two-dimensional matrix A expressed in the real coordinate system, the points within the safety range satisfy the following relations:
[Equation image: safety-range inequalities on the point in row m, column n, in terms of a, b, h1, h2, a_mn, x, N, M, and k]
wherein:
a is the projection distance of the circumscribed rectangle in the X direction;
b is the projection distance of the circumscribed rectangle in the Y direction;
h1 is the minimum coordinate of the object to be measured in the Z direction;
h2 is the maximum coordinate of the object to be measured in the Z direction;
a_mn is the depth value of row m, column n of the two-dimensional matrix A;
x is the distance from the starting point to the ending point;
n is the number of imaging images of the object to be detected;
m is the number of lines of the resolution of a single image;
k is a constant coefficient in the imaging size relation;
S3, filtering the three-dimensional point cloud: further smoothing the three-dimensional point cloud within the safety range to eliminate interference from external light;
the step S1 further includes the steps of:
S11, three-dimensional point cloud acquisition: acquiring the three-dimensional point cloud by the laser triangulation principle, combining a camera and a laser generator with fixed relative positions into a shooting system;
S12, establishing a real coordinate system: taking the center position of the object to be detected on the plane of the shooting system as the origin of the real coordinate system, taking the direction from the origin toward the object as the Z direction and the moving shooting direction of the shooting system as the X direction, and determining the Y direction by the right-hand rule;
S13, point cloud coordinate mapping: the number of imaging images of the object to be detected captured by the shooting system is N and the resolution line number of a single image is M, so that synthesizing the imaging images in shooting order yields a two-dimensional matrix A with M rows and N columns:
A = [ a_11  a_12  ...  a_1N
      a_21  a_22  ...  a_2N
      ...
      a_M1  a_M2  ...  a_MN ]
wherein:
a is a two-dimensional matrix with height information;
the m row and n column points in the two-dimensional matrix A have the following expressions in the real coordinate system:
[Equation image: real-coordinate expression of the point in row m, column n of matrix A, in terms of a_mn, x, N, M, and k]
2. the three-dimensional point cloud-based safe positioning method according to claim 1, wherein: the method for determining the center of the object to be detected comprises the following steps:
and drawing a circumscribed rectangle on the projection of the object to be detected on the XY plane, wherein the intersection point of the diagonals of the circumscribed rectangle is the center of the object to be detected.
3. The three-dimensional point cloud-based safe positioning method according to claim 1, wherein: the imaging size relationship is expressed as:
l=ksh
wherein:
l is the actual length;
s is the number of pixel points occupied by the imaging length;
h is the shooting height.
4. The three-dimensional point cloud-based safe positioning method according to claim 1, wherein: the specific operation method of step S3 is as follows:
establishing a two-dimensional matrix B on the basis of the two-dimensional matrix A, wherein, when the element b_mn in row m, column n of B satisfies the filter-point condition, b_mn is assigned the value |h2| + |h1|; in other cases, b_mn is assigned the value of the element a_mn in row m, column n of the two-dimensional matrix A.
5. The three-dimensional point cloud-based safe positioning method according to claim 4, wherein satisfying the filter-point condition in step S3 means that the element b_mn in row m, column n of the two-dimensional matrix B lies on the boundary of B, i.e. m = 1 or m = M or n = 1 or n = N; or that the height difference between b_mn and any one of its 8 surrounding elements exceeds one fifth of the Z-axis safety range |h2 − h1|, i.e.
|b_mn − b_(m+i)(n+j)| > |h2 − h1| / 5,  for some i, j ∈ {−1, 0, 1} with (i, j) ≠ (0, 0)
CN201811298254.0A 2018-11-02 2018-11-02 Safety positioning method based on three-dimensional point cloud Active CN109448057B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811298254.0A CN109448057B (en) 2018-11-02 2018-11-02 Safety positioning method based on three-dimensional point cloud


Publications (2)

Publication Number Publication Date
CN109448057A CN109448057A (en) 2019-03-08
CN109448057B true CN109448057B (en) 2022-02-25

Family

ID=65550686


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102779280A (en) * 2012-06-19 2012-11-14 武汉大学 Traffic information extraction method based on laser sensor
CN106558096A (en) * 2015-09-30 2017-04-05 株式会社日立大厦系统 The modeling method of the model building device, program and cage guide of three dimensional point cloud
CN107016697A (en) * 2017-04-11 2017-08-04 杭州光珀智能科技有限公司 A kind of height measurement method and device
CN108171748A (en) * 2018-01-23 2018-06-15 哈工大机器人(合肥)国际创新研究院 A kind of visual identity of object manipulator intelligent grabbing application and localization method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE112017006909T5 (en) * 2017-02-22 2019-10-10 Faro Technologies, Inc. THREE DIMENSIONAL PICTURES


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Automatic object detection in point clouds: classification approaches and method comparison; Friederike Schwarzbach et al.; 2018-09-27; pp. 1-3 *
Research on airborne LiDAR point cloud data filtering and building extraction technology; Guo Jia; 《Wanfang Dissertation Database》; 2015-04-15; pp. 1-75 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A secure localization method based on 3D point cloud

Granted publication date: 20220225

Pledgee: Yinchuan SME Financing Guarantee Co.,Ltd.

Pledgor: NINGXIA JUNENG ROBOT Co.,Ltd.

Registration number: Y2024640000009