CN115100270A - Intelligent extraction method and device for various tracks based on 3D image information - Google Patents


Info

Publication number
CN115100270A
CN115100270A (application CN202210668927.7A)
Authority
CN
China
Prior art keywords
point
contour
inner edge
depth map
highest
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210668927.7A
Other languages
Chinese (zh)
Inventor
谢显飞
陈理辉
刘荣贵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Aishiwei Intelligent Technology Co ltd
Original Assignee
Guangzhou Aishiwei Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Aishiwei Intelligent Technology Co ltd filed Critical Guangzhou Aishiwei Intelligent Technology Co ltd
Priority to CN202210668927.7A priority Critical patent/CN115100270A/en
Publication of CN115100270A publication Critical patent/CN115100270A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/593 Depth or shape recovery from multiple images from stereo images
    • G06T7/596 Depth or shape recovery from multiple images from stereo images from three or more stereo images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T1/0014 Image feed-back for automatic industrial control, e.g. robot with camera
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration using local operators
    • G06T5/30 Erosion or dilatation, e.g. thinning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Robotics (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application provides a method and a device for intelligently extracting various tracks based on 3D image information, wherein the method comprises the following steps: performing all-around 3D scanning of the product and generating 3D point cloud data; creating a depth map based on the 3D point cloud data; determining a trajectory process rule based on the depth map; and extracting the robot motion track corresponding to the trajectory process rule. Without manual intervention, the method and device automatically obtain a specified number of uniformly spaced highest inner-edge contour points from the three-dimensional image, which guarantees the stability and accuracy of the track, greatly reduces the harm that noise, dust and harmful gases cause to workers during production of the product, and greatly improves processing efficiency.

Description

Intelligent extraction method and device for various tracks based on 3D image information
Technical Field
The application belongs to the technical field of automatic processing, and particularly relates to a method and a device for intelligently extracting various tracks based on 3D image information.
Background
In most processing and manufacturing enterprises, simple and repetitive work such as spraying and polishing is still carried out mainly by hand. In some factories with a higher degree of automation, such as furniture factories and welding factories, work such as gluing and welding can be carried out by teaching track points to industrial robots.
Purely manual work is repetitive and monotonous, and many factories have harsh environments full of noise, dust, harmful gases and the like, so many factories currently find it difficult to retain workers, especially young people, who will work stably for a long time.
Disclosure of Invention
The embodiment of the application provides a method and a device for intelligently extracting various tracks based on 3D image information. The method can address the problems that purely manual work is repetitive and monotonous, that many factories have harsh environments full of noise, dust, harmful gases and the like, and that many factories currently find it difficult to retain workers, especially young people, who will work stably for a long time.
The first aspect of the embodiment of the invention provides a method for intelligently extracting various tracks based on 3D image information, which comprises the following steps:
performing all-around 3D scanning of the product and generating 3D point cloud data;
creating a depth map based on the 3D point cloud data;
determining a trajectory process rule based on the depth map;
and extracting the robot motion track corresponding to the track process rule.
In a possible implementation manner of the first aspect, the creating a depth map based on the 3D point cloud data includes:
retaining only the valid points in the 3D point cloud data according to a Z-axis threshold;
creating a minimum bounding box of the valid point cloud;
determining coordinates of the minimum bounding box;
creating a depth map based on the coordinates of the minimum bounding box.
In a possible implementation manner of the first aspect, the determining a track process rule based on the depth map includes:
extracting the depth map of the middle region between the inner and outer contours, containing the highest inner edge;
removing connected, elongated noise point sets at the edge of the point cloud;
determining 3 centers of the maximum outer contour, and dividing the maximum outer contour coordinate point set into 3 parts according to the centers;
and acquiring the maximum outer contour coordinate points of the middle region between the inner and outer contours.
In a possible implementation manner of the first aspect, the determining 3 centers of the maximum outline, and dividing the maximum outline coordinate point set into 3 parts according to the centers includes:
designating a highest inner-edge contour point as the first bisection point, and drawing a circle with the first bisection point as the center and a preset distance as the radius to obtain the two intersection points of the circle and the highest inner-edge contour;
denoting the line from the circle center to the center of the highest inner-edge contour as AO and the lines from the two intersection points to the contour center as BO and CO, computing ∠AOB and ∠AOC respectively, and taking the intersection point whose angle is greater than 0 as the next bisection point, until the distance between the next bisection point and the initial bisection point no longer exceeds the radius of the circle;
and, according to the input number of equal parts, taking points at equal intervals to generate the specified number of equally divided highest inner-edge contour points.
In a possible implementation manner of the first aspect, the generating a specified number of equally divided highest inner edge contour points by taking points at equal intervals according to the input number of equally divided points includes:
obtaining the image region inside the contour from the obtained highest inner-edge contour, eroding the image inside the contour with the specified retraction distance as the erosion radius, and extracting the maximum outer contour after erosion;
and constructing the set of line segments from the equally divided highest inner-edge contour points to the center of the highest inner-edge contour, wherein the intersection points of this line-segment set with the maximum outer contour obtained after erosion in the previous step are the equally divided retracted-edge contour points.
A second aspect of the embodiments of the present invention provides an apparatus for intelligently extracting various types of trajectories based on 3D image information, the apparatus including:
the 3D point cloud data generation module is used for performing all-around 3D scanning of the product and generating 3D point cloud data;
a depth map creation module for creating a depth map based on the 3D point cloud data;
the track process rule determining module is used for determining a track process rule based on the depth map;
and the robot motion track extraction module is used for extracting the robot motion track corresponding to the track process rule.
In a possible implementation manner of the second aspect, the depth map creating module includes:
the effective point cloud retaining submodule is used for retaining only effective point clouds in the 3D point cloud data according to the Z-axis direction threshold;
a minimum bounding box creating submodule for creating a minimum bounding box of the valid point cloud;
the coordinate determination submodule of the minimum bounding box is used for determining the coordinate of the minimum bounding box;
and the depth map creating submodule is used for creating a depth map based on the coordinates of the minimum bounding box.
In a possible implementation manner of the second aspect, the trajectory process rule includes a highest contour point, an edge point, a feature point, and a lowest point, and the trajectory process rule determining module includes:
the middle region depth map extraction submodule is used for extracting the depth map of the middle region between the inner and outer contours, containing the highest inner edge;
the noise point set removing submodule is used for removing connected, elongated noise point sets at the edge of the point cloud;
the splitting submodule is used for determining 3 centers of the maximum outer contour and dividing the maximum outer contour coordinate point set into 3 parts according to the centers;
and the maximum outer contour coordinate point acquisition submodule is used for acquiring the maximum outer contour coordinate points of the middle region between the inner and outer contours.
In a possible implementation manner of the second aspect, the splitting sub-module includes:
an intersection point obtaining unit, configured to designate a highest inner-edge contour point as the first bisection point, draw a circle with the first bisection point as the center and a preset distance as the radius, and obtain the two intersection points of the circle and the highest inner-edge contour;
a division completion unit, configured to denote the line from the circle center to the center of the highest inner-edge contour as AO and the lines from the two intersection points to the contour center as BO and CO, compute ∠AOB and ∠AOC respectively, and take the intersection point whose angle is greater than 0 as the next bisection point, finishing the division when the distance between the next bisection point and the initial bisection point no longer exceeds the radius of the circle;
and a highest inner edge contour point generating unit, which is used for generating a specified number of equally divided highest inner edge contour points by taking points at the same interval according to the input equally divided number.
In a possible implementation manner of the second aspect, the highest inner edge contour point generating unit includes:
the maximum outer contour extraction subunit is used for obtaining the image region inside the contour from the obtained highest inner-edge contour, eroding the image inside the contour with the specified retraction distance as the erosion radius, and extracting the maximum outer contour after erosion;
and the retracted-edge contour point subunit is used for constructing the set of line segments from the equally divided highest inner-edge contour points to the center of the highest inner-edge contour, wherein the intersection points of this line-segment set with the maximum outer contour obtained after erosion in the previous step are the equally divided retracted-edge contour points.
Compared with the prior art, the method and device for intelligently extracting various tracks based on 3D image information have the following advantages: the method performs all-around 3D scanning of the product and generates 3D point cloud data; creates a depth map based on the 3D point cloud data; determines a trajectory process rule based on the depth map; and extracts the robot motion track corresponding to the trajectory process rule. Without manual intervention, a specified number of uniformly spaced highest inner-edge contour points are automatically obtained from the three-dimensional image, which guarantees the stability and accuracy of the track, greatly reduces the harm that noise, dust and harmful gases cause to workers during production of the product, and greatly improves processing efficiency.
Drawings
Fig. 1 is a schematic flowchart of a method for intelligently extracting various types of tracks based on 3D image information according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an intelligent extraction device for various types of tracks based on 3D image information according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive step based on the embodiments of the present invention, are within the scope of protection of the present invention.
In most processing and manufacturing enterprises, simple and repetitive work such as spraying and polishing is still carried out mainly by hand. In some factories with a higher degree of automation, such as furniture factories and welding factories, work such as gluing and welding can be carried out by teaching track points to industrial robots.
Purely manual work is repetitive and monotonous, and many factories have harsh environments full of noise, dust, harmful gases and the like, so many factories currently find it difficult to retain workers, especially young people, who will work stably for a long time.
To solve the above problems, the following embodiments describe and explain in detail the method for intelligently extracting various tracks based on 3D image information according to the embodiments of the present application.
Referring to fig. 1, a schematic flow chart of a method for intelligently extracting various types of tracks based on 3D image information according to an embodiment of the present invention is shown.
As an example, the method for intelligently extracting various types of tracks based on 3D image information may include:
and S11, carrying out omnibearing 3D scanning on the product and generating 3D point cloud data.
Firstly, the product is subjected to omnibearing 3D scanning through a 3D camera and 3D point cloud data are generated.
And S12, creating a depth map based on the 3D point cloud data.
In an alternative embodiment, step S12 may include the following sub-steps:
and S121, only keeping effective point clouds in the 3D point cloud data according to the Z-axis direction threshold.
And S122, creating a minimum bounding box of the effective point cloud.
And S123, determining the coordinates of the minimum bounding box.
And S124, creating a depth map based on the coordinates of the minimum bounding box.
First, only valid points are retained according to a Z-axis threshold. A minimum bounding box of the three-dimensional point cloud is then created, and a transformation matrix MatStd that transforms the point cloud to a standard position is computed, such that at the standard position the longest axis of the minimum bounding box is parallel to the +x axis of the origin coordinate system, the shortest axis is parallel to the +z axis, and the minor axis is parallel to the y axis (once the orientations of the x and z axes are determined, the orientation of the y axis follows). The longest axis length Box3DLength, minor axis length Box3DWidth and shortest axis length Box3DHeight of the minimum bounding box are recorded.
The three-dimensional point cloud at the standard position is then reconstructed into a new depth map. The width of the depth map is 1.2 times the longest axis of the minimum bounding box and its height is 1.2 times the minor axis. The row coordinates of the valid pixels are the y coordinates of the three-dimensional point set, the column coordinates are the x coordinates, and the depth (gray) values of the pixels are the z coordinates. Finally, the width and height of the depth map and the row and column coordinates of the valid pixels are all multiplied by a ratio coefficient of 10, enlarging the whole by a factor of 10 and improving the precision of the extracted highest contour points. When the product is placed in an arbitrary orientation the original depth map may be severely deformed, whereas the newly generated depth map remains essentially consistent, which greatly improves the stability of subsequent processing.
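The reconstruction described above can be sketched in NumPy. This is a minimal illustration under stated assumptions, not the patent's implementation: the function name `point_cloud_to_depth_map` and the min-shift used to keep indices non-negative are assumptions, while the 1.2× margins and the 10× ratio coefficient follow the text.

```python
import numpy as np

def point_cloud_to_depth_map(points_std, box_length, box_width, ratio=10):
    """Project a point cloud (already at the standard position: longest axis
    along x, minor axis along y, shortest along z) into a depth image.

    Width/height are 1.2x the bounding box's longest/minor axes, and pixel
    coordinates are scaled by `ratio` (10 in the text) to improve the
    precision of the later highest-point extraction. Gray value = z.
    """
    w = int(np.ceil(1.2 * box_length * ratio))
    h = int(np.ceil(1.2 * box_width * ratio))
    depth = np.zeros((h, w), dtype=np.float32)

    # Shift x/y so all row/column indices are non-negative, then scale.
    cols = np.round((points_std[:, 0] - points_std[:, 0].min()) * ratio).astype(int)
    rows = np.round((points_std[:, 1] - points_std[:, 1].min()) * ratio).astype(int)
    cols = np.clip(cols, 0, w - 1)
    rows = np.clip(rows, 0, h - 1)
    depth[rows, cols] = points_std[:, 2]  # depth (gray) value = z coordinate
    return depth
```

Points that map to the same pixel simply overwrite each other here; a real implementation would have to decide how to aggregate them.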
And S13, determining a track process rule based on the depth map.
In one embodiment, step S13 may include the following sub-steps:
s131, extracting the depth map of the middle part of the inner contour and the outer contour containing the highest inner edge.
And S132, removing the connected and slender noise point set at the edge of the point cloud.
S133, determining 3 centers of the maximum outline, and dividing the maximum outline coordinate point set into 3 parts according to the centers.
In an embodiment of the present application, the substep S133 includes:
s1331, designating a highest inner edge contour point as a first bisector, drawing a circle by taking the first bisector as a circle center and taking a preset distance as a radius, and obtaining two intersection points of the circle and the highest inner edge contour;
s1332, setting a connecting line of a circle center and the center of the highest inner edge contour as AO, respectively setting connecting lines of two intersection points and the contour center as BO and CO, respectively calculating ≤ AOB and ≤ AOC, and setting the intersection point with the included angle greater than 0 as the next equally dividing point until the distance between the next equally dividing point and the initial equally dividing point is not more than the radius of the circle;
and S1333, taking points at the same interval according to the input number of the equal parts, and generating the equal parts of the highest inner edge contour points with the appointed number.
In an embodiment of the present application, the substep S1333 includes:
and S13331, obtaining an image area in the contour from the obtained highest inner edge contour, corroding the image in the contour by using the specified retraction distance as a corrosion radius, and extracting the maximum outer contour after corrosion.
And S133312, equally dividing the highest inner edge contour point set into line segment sets, wherein the intersection point set of the line segment set and the maximum outer contour obtained after corrosion in the last step is the equally divided retracted edge contour point.
Extracting the depth map of the middle region between the inner and outer contours, containing the highest inner edge: the original depth map is eroded and then dilated in order to remove connected, elongated noise point sets at the edge of the point cloud and reduce interference with extraction of the highest points; the difference-set image of the dilated region and the eroded region is then taken.
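The erode-then-dilate cleanup and the dilated-minus-eroded difference set can be sketched with plain NumPy binary morphology. The helper names `_morph` and `ring_region` are illustrative, and a square structuring element stands in for whatever element the actual implementation uses:

```python
import numpy as np

def _morph(mask, r, dilate):
    """Binary dilation (dilate=True) or erosion (dilate=False) with a
    (2r+1)x(2r+1) square element, built from shifted copies of the mask."""
    pad_val = not dilate  # erosion treats pixels beyond the border as set
    p = np.pad(mask, r, constant_values=pad_val)
    h, w = mask.shape
    shifts = [p[dy:dy + h, dx:dx + w]
              for dy in range(2 * r + 1) for dx in range(2 * r + 1)]
    return np.any(shifts, axis=0) if dilate else np.all(shifts, axis=0)

def ring_region(mask, r):
    """Open (erode then dilate) to drop thin connected edge noise, then take
    the difference set of the dilated and eroded regions: the band between
    the inner and outer contours described in the text."""
    opened = _morph(_morph(mask, r, dilate=False), r, dilate=True)
    return _morph(opened, r, dilate=True) & ~_morph(opened, r, dilate=False)
```

On an 11×11 solid square, `ring_region(mask, 1)` returns the two-pixel-wide band between the 13×13 dilation and the 9×9 erosion.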
Acquiring the maximum outer contour coordinate points of the middle region between the inner and outer contours: the maximum outer contour is extracted according to region area, and its coordinate point set is extracted.
Determining 3 centers of the maximum outer contour and dividing its coordinate point set into 3 parts according to the centers: first, the minimum circumscribed rectangle of the maximum outer contour is obtained; its center is the first center, and the pixels on the longest edge of the minimum circumscribed rectangle are obtained.
Extracting the highest inner-edge contour points: first, a set of small rectangles centered on the maximum outer contour points is obtained; each small rectangle is approximately 1 mm wide and 0.3 × Box3DWidth long, the center of the maximum outer contour is computed, and the direction of the line from the center of each small rectangle to the contour center is taken as the direction of that rectangle. Then the highest point within each small rectangle is obtained, and among the points close to this highest point the one with the minimum distance to the maximum contour center is extracted; this is the highest inner-edge point. The highest inner-edge points of all small rectangles form the highest inner-edge contour points, which are finally refitted with a non-uniform rational B-spline curve.
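A simplified sketch of this per-window search follows. The oriented 1 mm × 0.3 × Box3DWidth rectangles are replaced here by axis-aligned square windows, the NURBS refit is omitted, and the function name and its parameters (`half_win`, `tol`) are assumptions:

```python
import numpy as np

def highest_inner_edge_points(depth, contour_pts, half_win=5, tol=0.5):
    """For each maximum outer contour point, search a small window of the
    depth map for the highest value; among pixels within `tol` of that peak,
    keep the one closest to the contour centre: the highest inner-edge point."""
    h, w = depth.shape
    center = contour_pts.mean(axis=0)  # stand-in for the maximum contour centre
    result = []
    for r, c in contour_pts:
        r0, r1 = max(0, r - half_win), min(h, r + half_win + 1)
        c0, c1 = max(0, c - half_win), min(w, c + half_win + 1)
        win = depth[r0:r1, c0:c1]
        peak = win.max()
        rows, cols = np.nonzero(win >= peak - tol)      # near-peak candidates
        pts = np.stack([rows + r0, cols + c0], axis=1)  # back to image coords
        best = pts[np.linalg.norm(pts - center, axis=1).argmin()]
        result.append((int(best[0]), int(best[1])))
    return result
```

With two vertical ridges in the depth map, each window picks the ridge pixel nearest the centre, mimicking the "closest to the contour centre among near-peak points" rule.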
To equally divide the highest inner-edge contour: first, an arbitrary highest inner-edge contour point is designated as the first bisection point, and a (sufficiently small) circle is drawn with this point as the center and a specified distance as the radius, giving the two intersection points of the circle and the highest inner-edge contour.
The line from the circle center to the center of the highest inner-edge contour is denoted AO and the lines from the two intersection points to the contour center are denoted BO and CO; ∠AOB and ∠AOC are computed, and the intersection point whose angle is greater than 0 is taken as the next bisection point, repeating until the distance between the next bisection point and the initial bisection point no longer exceeds the radius of the circle.
Finally, according to the input number of equal parts, points are taken at equal intervals to generate the specified number of equally divided highest inner-edge contour points.
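The circle-stepping procedure above produces densely and evenly spaced points along the closed contour that are then subsampled; the same end result can be reached directly by resampling the contour at equal arc length. A sketch (the helper name `equally_divide_contour` is an assumption, not the patent's implementation):

```python
import numpy as np

def equally_divide_contour(contour, n):
    """Resample an ordered closed contour into n points at equal
    arc-length spacing, starting from the first point."""
    closed = np.vstack([contour, contour[:1]])           # close the loop
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg)])        # cumulative length
    targets = np.linspace(0.0, cum[-1], n, endpoint=False)
    idx = np.searchsorted(cum, targets, side="right") - 1
    # Linear interpolation within the segment each target falls into.
    t = (targets - cum[idx]) / np.where(seg[idx] > 0, seg[idx], 1.0)
    return closed[idx] + t[:, None] * (closed[idx + 1] - closed[idx])
```

Dividing a unit square contour into 4 parts returns its corners; into 8 parts, corners plus edge midpoints.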
To determine the initial position of the equally divided retracted-edge contour points, the minimum 3D bounding box of the equally divided retracted contour points obtained in the third part is first created; the three-dimensional transformation matrix between the current 3D bounding box and its standard position is acquired, and the points are transformed to the standard position with it.
For the minimum 3D bounding box of the equally divided retracted contour points, although the bounding box looks essentially the same, the three-dimensional transformation matrix to the standard position may not be unique, which can make the coordinates unstable when the same equally divided retracted contour points are transformed to the standard position.
The standard-position center coordinate is (0, 0). Because the transformation of the minimum circumscribed rectangle may not be unique, the mapped two-dimensional equally divided retracted contour points are split into two roughly equal parts with y > 0 and y < 0. If the absolute difference between the maximum and minimum x coordinates of the points with y < 0 is larger than that of the points with y > 0, the mapped points are rotated 180 degrees around the center (0, 0). In this way the two-dimensional retracted contour points can be transformed to essentially the same position for any placement angle, with a position error smaller than that of pure 3D bounding-box transformation.
The set of lines connecting the two-dimensional equally divided retracted contour points (transformed to the two-dimensional standard position) with the center coordinate (0, 0) is computed, the angle between each line and the horizontal axis of the coordinate system is calculated, and the index of the contour point whose angle is closest to the specified start angle is found. The three-dimensional equally divided retracted-edge contour points in the camera coordinate system are then rearranged according to this start index, finally yielding the equally divided retracted-edge contour points with a determined initial position.
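The final start-index selection, choosing the contour point whose angle about the standard-position center (0, 0) is closest to a specified start angle and rolling the point sequence to begin there, can be sketched as follows (the function name is illustrative):

```python
import numpy as np

def rearrange_from_start_angle(points_2d, start_angle_deg):
    """Find the point whose angle about (0, 0) is closest to the specified
    start angle, and roll the sequence so it becomes the first point."""
    ang = np.degrees(np.arctan2(points_2d[:, 1], points_2d[:, 0]))
    # Wrap angular differences into [-180, 180) before comparing magnitudes.
    diff = (ang - start_angle_deg + 180.0) % 360.0 - 180.0
    start = int(np.abs(diff).argmin())
    return np.roll(points_2d, -start, axis=0), start
```

The same index would then be used to rearrange the corresponding 3D points in the camera coordinate system, as the text describes.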
In addition, when determining the initial track positions of other workpieces, three-dimensional images of the workpieces can be collected as templates, and the initial track position can be determined by deformable template matching.
And S134, acquiring the maximum outer contour coordinate points of the middle region between the inner and outer contours.
And S14, extracting the robot motion track corresponding to the track process rule.
In this embodiment, the method for intelligently extracting various tracks based on 3D image information has the following beneficial effects: the method performs all-around 3D scanning of the product and generates 3D point cloud data; creates a depth map based on the 3D point cloud data; determines a trajectory process rule based on the depth map; and extracts the robot motion track corresponding to the trajectory process rule. Without manual intervention, a specified number of uniformly spaced highest inner-edge contour points are automatically obtained from the three-dimensional image, which guarantees the stability and accuracy of the track, greatly reduces the harm that noise, dust and harmful gases cause to workers during production of the product, and greatly improves processing efficiency.
The embodiment of the invention also provides a device for intelligently extracting various tracks based on 3D image information; its structural diagram is shown in fig. 2.
By way of example, the intelligent extraction device for various types of tracks based on 3D image information may include:
a 3D point cloud data generating module 201, configured to perform omnidirectional 3D scanning on a product and generate 3D point cloud data;
a depth map creation module 202 for creating a depth map based on the 3D point cloud data;
a track process rule determining module 203, configured to determine a track process rule based on the depth map;
and the robot motion track extraction module 204 is configured to extract a robot motion track corresponding to the track process rule.
Optionally, the depth map creating module includes:
the effective point cloud retaining submodule is used for retaining only effective point clouds in the 3D point cloud data according to the Z-axis direction threshold;
a minimum bounding box creating submodule for creating a minimum bounding box of the valid point cloud;
the coordinate determination submodule of the minimum bounding box is used for determining the coordinate of the minimum bounding box;
and the depth map creating submodule is used for creating a depth map based on the coordinates of the minimum bounding box.
Optionally, the trajectory process rule includes a highest contour point, an edge point, a feature point, and a lowest point, and the trajectory process rule determining module includes:
the middle region depth map extraction submodule is used for extracting the depth map of the middle region between the inner and outer contours, containing the highest inner edge;
the noise point set removing submodule is used for removing connected, elongated noise point sets at the edge of the point cloud;
the splitting submodule is used for determining 3 centers of the maximum outer contour and dividing the maximum outer contour coordinate point set into 3 parts according to the centers;
and the maximum outer contour coordinate point acquisition submodule is used for acquiring the maximum outer contour coordinate points of the middle region between the inner and outer contours.
Optionally, the splitting sub-module includes:
an intersection point obtaining unit, configured to designate a highest inner-edge contour point as the first bisection point, draw a circle with the first bisection point as the center and a preset distance as the radius, and obtain the two intersection points of the circle and the highest inner-edge contour;
a division completion unit, configured to denote the line from the circle center to the center of the highest inner-edge contour as AO and the lines from the two intersection points to the contour center as BO and CO, compute ∠AOB and ∠AOC respectively, and take the intersection point whose angle is greater than 0 as the next bisection point, finishing the division when the distance between the next bisection point and the initial bisection point no longer exceeds the radius of the circle;
and a highest inner edge contour point generating unit, which is used for generating a specified number of equally divided highest inner edge contour points by taking points at the same interval according to the input equally divided number.
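A sketch of the circle-stepping division performed by these units, assuming the contour is given as an ordered list of points. Walking the contour in traversal order stands in for the ∠AOB/∠AOC sign test, which serves only to pick the forward of the two circle-contour intersections; the function and parameter names are illustrative:

```python
import math

def divide_contour_by_circle(contour, radius):
    """Sketch of the circle-stepping division: starting from the first
    contour point, repeatedly advance along the contour (in traversal
    order) until the distance from the current division point first
    reaches the circle radius, and stop once the next step would land
    within the radius of the starting point."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    points = [contour[0]]
    i, n = 0, len(contour)
    while len(points) <= n:                  # safety cap for the sketch
        j = i + 1
        # advance until we leave the circle around the current division point
        while j - i < n and dist(contour[j % n], points[-1]) < radius:
            j += 1
        if j - i >= n:                       # radius spans the whole contour
            break
        i = j % n
        if len(points) > 1 and dist(contour[i], contour[0]) < radius:
            break                            # back near the start: done
        points.append(contour[i])
    return points
```

Resampling the resulting division points at a fixed stride then yields the requested number of equally divided contour points.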
Optionally, the highest inner edge contour point generating unit includes:
the maximum outer contour extraction subunit is used for obtaining the image area inside the contour from the obtained highest inner edge contour, eroding the image inside the contour with the specified retraction distance as the erosion radius, and extracting the eroded maximum outer contour;
and the retracted edge contour point subunit is used for constructing a set of line segments from the equally divided highest inner edge contour points to the center of the highest inner edge contour, the set of intersection points of these line segments with the maximum outer contour obtained after erosion in the previous step being the equally divided retracted edge contour points.
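The retraction step is a morphological erosion ("corrosion" in the machine translation) of the region inside the highest inner edge contour. Below is a minimal sketch with a square structuring element on a binary mask; a real implementation would more likely call `cv2.erode` with a circular element whose radius equals the retraction distance:

```python
def erode(mask, r):
    """Sketch of the retraction step: morphological erosion of a binary
    region with a (2r+1) x (2r+1) square structuring element (a square
    stands in for the circular element a real implementation would use).
    A pixel survives only if its whole neighbourhood lies inside the
    region, so the region boundary retracts by roughly `r` pixels."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if all(0 <= y + dy < h and 0 <= x + dx < w and mask[y + dy][x + dx]
                   for dy in range(-r, r + 1) for dx in range(-r, r + 1)):
                out[y][x] = 1
    return out
```

The outer contour of the eroded region is then intersected with the radial line segments to obtain the retracted edge contour points.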
It is clear to those skilled in the art that, for convenience and brevity, the specific working process of the apparatus described above may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
Further, an embodiment of the present application further provides an electronic device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the above method for intelligently extracting various types of tracks based on 3D image information.
Further, an embodiment of the present application further provides a computer-readable storage medium, where computer-executable instructions are stored, and the computer-executable instructions are configured to enable a computer to execute the method for intelligently extracting various types of tracks based on 3D image information according to the embodiment.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention.

Claims (10)

1. A method for intelligently extracting various types of tracks based on 3D image information, characterized by comprising the following steps:
carrying out omnidirectional 3D scanning on a product and generating 3D point cloud data;
creating a depth map based on the 3D point cloud data;
determining a trajectory process rule based on the depth map;
and extracting the robot motion track corresponding to the track process rule.
2. The method for intelligently extracting various types of tracks based on 3D image information according to claim 1, wherein the creating a depth map based on the 3D point cloud data comprises:
retaining only the valid point cloud in the 3D point cloud data according to a threshold in the Z-axis direction;
creating a minimum bounding box of the valid point cloud;
determining coordinates of the minimum bounding box;
creating a depth map based on the coordinates of the minimum bounding box.
3. The method for intelligently extracting various types of trajectories based on 3D image information according to claim 1, wherein the trajectory process rules include highest contour points, edge points, feature points, and lowest points, and determining the trajectory process rules based on the depth map comprises:
extracting the depth map of the middle part of the inner and outer contours containing the highest inner edge;
removing connected and elongated noise point sets at the edge of the point cloud;
determining 3 centers of the maximum outer contour, and dividing the maximum outer contour coordinate point set into 3 parts according to the centers;
and acquiring the maximum outer contour coordinate point of the inner and outer contour middle part graph.
4. The method for intelligently extracting various types of tracks based on 3D image information according to claim 3, wherein the step of determining 3 centers of the maximum outline and dividing the maximum outline coordinate point set into 3 parts according to the centers comprises the following steps:
designating a highest inner edge contour point as a first bisector, drawing a circle by taking the first bisector as a circle center and taking a preset distance as a radius to obtain two intersection points of the circle and the highest inner edge contour;
setting the line connecting the circle center and the center of the highest inner edge contour as AO, setting the lines connecting the two intersection points and the contour center as BO and CO respectively, calculating ∠AOB and ∠AOC respectively, and taking the intersection point whose included angle is greater than 0 as the next equally-dividing point, until the distance between the next equally-dividing point and the initial equally-dividing point does not exceed the radius of the circle;
and according to the input number of equal parts, points are taken at the same interval to generate a specified number of equally divided highest inner edge contour points.
5. The method for intelligently extracting various types of tracks based on 3D image information according to claim 4, wherein the step of generating a specified number of equally divided highest inner edge contour points by taking points at equal intervals according to the input number of equally divided points comprises the following steps:
obtaining the image area inside the contour from the obtained highest inner edge contour, eroding the image inside the contour with the specified retraction distance as the erosion radius, and extracting the eroded maximum outer contour;
and constructing a set of line segments from the equally divided highest inner edge contour points to the center of the highest inner edge contour, wherein the set of intersection points of these line segments with the maximum outer contour obtained after erosion in the previous step is the set of equally divided retracted edge contour points.
6. A device for intelligently extracting various types of tracks based on 3D image information, characterized in that the device comprises:
the 3D point cloud data generation module is used for carrying out omnibearing 3D scanning on the product and generating 3D point cloud data;
a depth map creation module for creating a depth map based on the 3D point cloud data;
the track process rule determining module is used for determining a track process rule based on the depth map;
and the robot motion track extraction module is used for extracting the robot motion track corresponding to the track process rule.
7. The device for intelligently extracting various types of tracks based on 3D image information according to claim 6, wherein the depth map creating module comprises:
the valid point cloud retaining submodule is used for retaining only the valid point cloud in the 3D point cloud data according to a threshold in the Z-axis direction;
a minimum bounding box creating submodule for creating a minimum bounding box of the valid point cloud;
the coordinate determination submodule of the minimum bounding box is used for determining the coordinate of the minimum bounding box;
and the depth map creating submodule is used for creating a depth map based on the coordinates of the minimum bounding box.
8. The apparatus of claim 6, wherein the trajectory process rule comprises a highest contour point, an edge point, a feature point, and a lowest point, and the trajectory process rule determining module comprises:
the middle-part depth map extraction submodule is used for extracting the depth map of the middle part of the inner and outer contours containing the highest inner edge;
the noise point set removing submodule is used for removing connected and elongated noise point sets at the edge of the point cloud;
the splitting submodule is used for determining 3 centers of the maximum outer contour and dividing the maximum outer contour coordinate point set into 3 parts according to the centers;
and the maximum outer contour coordinate point acquisition submodule is used for acquiring the maximum outer contour coordinate point of the middle part graph of the inner contour and the outer contour.
9. The device for intelligently extracting various types of tracks based on 3D image information according to claim 8, wherein the splitting sub-module comprises:
an intersection point obtaining unit, configured to designate a highest inner edge contour point as a first bisector, draw a circle with the first bisector as a center of the circle and a preset distance as a radius, and obtain two intersection points of the circle and the highest inner edge contour;
the division completion unit is used for setting the line connecting the circle center and the center of the highest inner edge contour as AO, setting the lines connecting the two intersection points and the contour center as BO and CO respectively, calculating ∠AOB and ∠AOC respectively, and taking the intersection point whose included angle is greater than 0 as the next division point, the division being finished once the distance between the next division point and the initial division point no longer exceeds the radius of the circle;
and a highest inner edge contour point generating unit, which is used for generating a specified number of equally divided highest inner edge contour points by taking points at the same interval according to the input equally divided number.
10. The apparatus for intelligently extracting various kinds of trajectories based on 3D image information according to claim 9, wherein the highest inner edge contour point generating unit comprises:
the maximum outer contour extraction subunit is used for obtaining the image area inside the contour from the obtained highest inner edge contour, eroding the image inside the contour with the specified retraction distance as the erosion radius, and extracting the eroded maximum outer contour;
and the retracted edge contour point subunit is used for constructing a set of line segments from the equally divided highest inner edge contour points to the center of the highest inner edge contour, the set of intersection points of these line segments with the maximum outer contour obtained after erosion in the previous step being the equally divided retracted edge contour points.
CN202210668927.7A 2022-06-14 2022-06-14 Intelligent extraction method and device for various tracks based on 3D image information Pending CN115100270A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210668927.7A CN115100270A (en) 2022-06-14 2022-06-14 Intelligent extraction method and device for various tracks based on 3D image information


Publications (1)

Publication Number Publication Date
CN115100270A true CN115100270A (en) 2022-09-23

Family

ID=83290624

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210668927.7A Pending CN115100270A (en) 2022-06-14 2022-06-14 Intelligent extraction method and device for various tracks based on 3D image information

Country Status (1)

Country Link
CN (1) CN115100270A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116956960A (en) * 2023-07-28 2023-10-27 武汉市万睿数字运营有限公司 Community visitor visit path restoration method and system based on cloud edge end collaboration



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination