CN114494588A - Automatic three-dimensional modeling method - Google Patents

Automatic three-dimensional modeling method

Info

Publication number
CN114494588A
CN114494588A
Authority
CN
China
Prior art keywords
point cloud
dimensional
target workpiece
plate
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210030566.3A
Other languages
Chinese (zh)
Inventor
魏洪兴
谢肇阳
崔元洋
Current Assignee
Aubo Beijing Robotics Technology Co ltd
Original Assignee
Aubo Beijing Robotics Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Aubo Beijing Robotics Technology Co ltd
Priority to CN202210030566.3A
Publication of CN114494588A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T5/00: Image enhancement or restoration
    • G06T5/70: Denoising; Smoothing
    • G06T7/00: Image analysis
    • G06T7/50: Depth or shape recovery
    • G06T7/55: Depth or shape recovery from multiple images
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10028: Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)

Abstract

The invention discloses an automatic three-dimensional modeling method, which comprises: arranging a 3D camera and a coding plate, and placing a target workpiece on the coding plate; acquiring a first point cloud set and a first pixel point set that contain the coding plate and the target workpiece, wherein the first point cloud set and the first pixel point set are in a mapping relation; identifying, based on the first pixel point set, the two-dimensional codes on the coding plate that are not covered by the target workpiece; and acquiring three-dimensional point cloud information of the target workpiece based on the two-dimensional code pixel information and the first point cloud set. By arranging the coding plate and placing the target workpiece on it, the invention distinguishes the point sets corresponding to the target object from those corresponding to non-target objects in the point cloud. This automates a previously manual process, increases modeling speed, saves manual labor, and reduces human error.

Description

Automatic three-dimensional modeling method
Technical Field
The invention relates to the technical field of computers, in particular to an automatic three-dimensional modeling method.
Background
Robots are widely applied in industry to realize automated, digitalized and unmanned production lines; typical robot tasks include unordered sorting, automatic loading and unloading, carrying, polishing, welding and assembling of workpieces. Three-dimensional modeling of workpieces is the basis of this work: the three-dimensional model of a workpiece is used for its automatic identification and pose estimation, and is a prerequisite for the robot to grasp the workpiece.
Three-dimensional modeling is completed by a 3D vision camera in cooperation with a worker. The 3D vision camera scans the workpiece and obtains a three-dimensional point cloud of the scene; the scene contains not only the workpiece but also objects that support it, such as brackets, platforms, material frames and trays, as well as noise points. To obtain a three-dimensional model of the workpiece, the point cloud sets that do not belong to the workpiece must be removed from the scene while those belonging to the workpiece are retained; this is called screening of the point cloud sets.
Point cloud processing is the core of template creation and its most time-consuming part; its goal is to obtain a point cloud model of the workpiece. The difficulty lies in identifying the point cloud set corresponding to the workpiece: because the point cloud is obtained by a 3D camera scan, the objects entering the scanned scene include not only the workpiece but also other objects present in the scene, typically a test platform, trays and the robot body. Scene objects are varied and numerous, and their type, number, position and posture are unpredictable. These factors increase the difficulty of point cloud processing, so the prior art processes the point cloud manually.
The existing technical scheme cannot automatically remove point cloud sets that do not belong to the target object in a scene; this removal is completed manually. The difficulty of the manual method is operating three-dimensional modeling software, whose functions include point cloud display, point cloud set selection and point cloud set deletion. Such software has a high learning cost and high professional requirements, and software dedicated to point cloud processing belongs to a specialized subdivision of the field; industrial manufacturers generally do not employ professional engineers for point cloud processing. In this situation a professional engineer from a vision vendor must perform the point cloud processing, and the industrial manufacturer has to wait for the engineer to arrive on site, which prolongs the debugging cycle and increases debugging cost.
Disclosure of Invention
In order to solve the above technical problem, embodiments of the present disclosure provide at least a method, an apparatus, a device and a storage medium for automatic three-dimensional modeling.
the embodiment of the present disclosure provides a method for automatic three-dimensional modeling, which includes:
arranging a 3D camera and a coding plate, and placing a target workpiece on the coding plate;
acquiring a first point cloud set and a first pixel point set which comprise an encoding plate and a target workpiece, wherein the first point cloud set and the first pixel point set are in a mapping relation;
identifying a two-dimensional code on the coding board, which is not covered by the target workpiece, based on the first pixel point set;
and acquiring three-dimensional point cloud information of the target workpiece based on the two-dimensional code pixel information and the first point cloud set.
Preferably, the identifying the two-dimensional code on the coding board, which is not covered by the target workpiece, based on the first set of pixel points includes:
presetting a pixel set corresponding to each two-dimensional code on the coding board;
traversing the first pixel point set, and determining a plurality of groups of pixel sets consistent with the preset pixel sets;
the multiple groups of pixel sets correspond to two-dimensional codes which are not covered by the target workpiece.
Preferably, the acquiring three-dimensional point cloud information of the target workpiece based on the two-dimensional code pixel information and the first point cloud set includes:
acquiring the coordinates of the central point of each two-dimensional code based on the pixel information of the two-dimensional code not covered by the target workpiece;
constructing a minimum external polygon of the coding plate based on the central point coordinates of the two-dimensional codes;
acquiring a second pixel point set in the minimum circumscribed polygon;
acquiring a second point cloud set corresponding to the second pixel point set mapping in the first point cloud set;
and acquiring the three-dimensional point cloud information of the target workpiece based on the second point cloud set.
Preferably, the acquiring three-dimensional point cloud information of the target workpiece based on the second point cloud set comprises:
and identifying point cloud information in the second point cloud set, which is positioned on the same plane as the coding plate, eliminating the point cloud information positioned on the same plane as the coding plate, and acquiring the three-dimensional point cloud information of the target workpiece.
Preferably, the rejecting the point cloud information located on the same plane as the encoding plate includes:
and obtaining a space plane equation of the coding plate through plane fitting, identifying the point cloud information which is positioned on the same plane with the coding plate through the space plane equation, and rejecting the point cloud information which is positioned on the same plane with the coding plate.
Preferably, the eliminating the point cloud information located on the same plane as the encoding plate to obtain the three-dimensional point cloud information of the target workpiece includes:
identifying point cloud information in the second point cloud set, which is located on the same plane as the coding plate, and acquiring a third point cloud set after eliminating the point cloud information in the same plane as the coding plate;
and carrying out noise reduction treatment on the third point cloud set to obtain the three-dimensional point cloud information of the target workpiece.
In an alternative embodiment, the code plate is a quadrilateral, and the code plate includes four two-dimensional codes respectively located at four vertices of the quadrilateral.
In another optional embodiment, the surface of the coding plate is fully covered with two-dimensional codes.
Preferably, the two-dimensional code is an ArUco two-dimensional code.
In the method, the target workpiece is placed on the coding plate and covers part of the two-dimensional code, and in another optional embodiment, the target workpiece is placed in a blank without the code on the surface of the coding plate.
The embodiment of the present disclosure further provides an automatic three-dimensional modeling apparatus, which includes:
the camera module is arranged on the carrying support and is a 3D visual camera used for scanning the area where the target workpiece is located;
the point cloud acquisition module is used for acquiring 3D point cloud information and 2D image information of the target object based on the camera module;
the coding plate rectangular frame identification module is used for determining a rectangular frame of an area where the coding plate is located in an area where the target workpiece is located based on the 3D point cloud information and the 2D image information of the target object, which are acquired by the point cloud acquisition module;
the code identification module is used for acquiring 3D point cloud information and 2D image information of a target object based on the point cloud acquisition module and determining a space coordinate of a code on the plane of the code plate;
the noise point removing module is used for removing the noise points of the rectangular frame based on the 3D point cloud information and the 2D image information acquired by the point cloud acquiring module and the rectangular frame of the region where the coding plate is located in the region where the target workpiece is located determined by the coding plate rectangular frame identifying module;
the code removing module is used for removing points on the coding plate plane in the rectangle based on the 3D point cloud information and the 2D image information acquired by the point cloud acquiring module and the codes on the coding plate plane determined by the code identifying module;
in an optional implementation manner, the point cloud obtaining module is specifically configured to:
determining two point cloud sets based on a 3D visual camera in a camera module, wherein one point cloud set comprises a target workpiece and a coding plate for bearing the target workpiece, and the second point cloud set comprises all noise points;
in an optional implementation manner, the code plate rectangular frame identification module is specifically configured to:
determining a rectangular frame corresponding to the frame of the coding plate based on the second point cloud set acquired by the point cloud acquisition module and the serial numbers corresponding to the ArUco codes at the four corners in the coding plate;
in another optional implementation manner, the code plate rectangular frame identification module is specifically configured to:
determining a complete code which is not covered by the target workpiece based on the second point cloud set acquired by the point cloud acquisition module and the code on the plane of the coding plate determined by the code identification module, further determining the coordinates of the central points of all the complete two-dimensional codes, and determining a rectangular frame corresponding to the frame of the coding plate by performing minimum external rectangle identification;
in an optional implementation manner, the noise removing module is specifically configured to:
and performing plane fitting based on the first point cloud set acquired by the point cloud acquisition module and the space coordinates of the codes on the plane of the coding plate determined by the code identification module to obtain a space plane equation of the coding plate, screening all points in the first point cloud set, considering the points on the plane as the points of the coding plate, and removing the points.
An embodiment of the present disclosure further provides a computer device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the computer device is running, the machine-readable instructions when executed by the processor performing the steps of the above-described method of automatic three-dimensional modeling.
The disclosed embodiments also provide a computer-readable storage medium having a computer program stored thereon, where the computer program is executed by a processor to perform the steps of the above-mentioned automatic three-dimensional modeling method.
According to the automatic three-dimensional modeling method, apparatus, device and storage medium described above, the target workpiece is placed on the coding plate and no non-workpiece object is allowed to appear between the coding plate and the camera.
To exclude everything but the workpiece from the first point cloud set: first, the ArUco codes on the coding plate are identified and the spatial coordinates of the codes are obtained; then plane fitting yields the spatial plane equation of the coding plate; finally, all points in the first point cloud set are screened, and points lying on that plane are regarded as coding plate points and deleted. After all points have been traversed, the remaining point cloud set contains only the workpiece, i.e. it is regarded as the point cloud set of the workpiece and constitutes its three-dimensional model.
With this scheme the invention has the following advantages: by arranging the coding plate and placing the target workpiece on it, the point cloud sets corresponding to the target object are distinguished from those corresponding to non-target objects. This automates a previously manual process, increases modeling speed, saves manual labor, and reduces human error.
Drawings
FIG. 1 is a schematic flow diagram of the process of the present invention.
FIG. 2 is a schematic diagram of the coding plate in the method of the present invention.
Fig. 3 is a schematic flow chart of S103 in the method of the present invention.
Fig. 4 is a schematic flow chart of S104 in the method of the present invention.
FIG. 5 is a schematic flow chart of an example of constructing a polygon of an encoding plate in the method of the present invention.
FIG. 6 is a schematic diagram of an automated three-dimensional modeling apparatus in the method of the present invention.
FIG. 7 is a schematic diagram of a computer apparatus for use in the method of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The term "and/or" herein merely describes an associative relationship, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Research has shown that the existing technical scheme cannot automatically remove point cloud sets that do not belong to the target object in a scene; this removal is completed manually. The difficulty of the manual method is operating three-dimensional modeling software, whose functions include point cloud display, point cloud set selection and point cloud set deletion. Such software has a high learning cost and high professional requirements, and software dedicated to point cloud processing belongs to a specialized subdivision of the field; industrial manufacturers generally do not employ full-time engineers for point cloud processing. In this situation a professional engineer from a vision vendor must perform the point cloud processing, and the industrial manufacturer has to wait for the engineer to arrive on site, which prolongs the debugging cycle and increases debugging cost.
Based on this research, the present application discloses an automatic three-dimensional modeling method: a 3D camera and a coding plate are arranged and a target workpiece is placed on the coding plate; a first point cloud set and a first pixel point set containing the coding plate and the target workpiece are acquired, the two being in a mapping relation; the two-dimensional codes on the coding plate not covered by the target workpiece are then identified based on the first pixel point set; and the three-dimensional point cloud information of the target workpiece is acquired based on the two-dimensional code pixel information and the first point cloud set.
Specifically, with reference to fig. 1, fig. 1 is a flowchart of an automatic three-dimensional modeling method provided by the embodiment of the present disclosure, and as shown in fig. 1, the automatic three-dimensional modeling method provided by the embodiment of the present disclosure includes:
s101: arranging a 3D camera, a target workpiece and a coding plate, placing the coding plate in the visual field of the 3D visual camera, placing the target workpiece on the coding plate, and ensuring that no non-workpiece object is arranged between the coding plate and the camera;
In this step, the pattern on the coding plate is formed by an arrangement of two-dimensional codes, which may be ArUco two-dimensional codes, IGD two-dimensional codes, SCR two-dimensional codes, or any of the two-dimensional codes listed in fig. 2, but the type is not limited to those listed in fig. 2;
the coding board can be a coding board fully paved with the two-dimensional code, and can also be a coding board on which coding patterns are regularly arranged (illustratively, the coding patterns can be located on the periphery or the top corner of the coding board). The target workpiece is placed on the coding plate, and the target workpiece may be placed in a blank of the coding plate, namely, the coding plate is not provided with codes at the place where the target workpiece is placed, or the target workpiece is placed on the coding plate full of coding patterns.
S102: starting a 3D camera, and acquiring a first point cloud set and a first pixel point set which comprise an encoding plate and a target workpiece, wherein the first point cloud set and the first pixel point set are in a mapping relation;
In this step, the 3D camera uses the plane of the coding plate as a reference to obtain the first point cloud set and the first pixel point set;
the first point cloud set is a 3D point cloud set, the first pixel point set is a pixel point set of a two-dimensional image, the first point cloud set and the first pixel point set are data acquired by the 3D camera under the same shooting condition, and the first point cloud set and the first pixel point set have one-to-one correspondence relationship.
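The one-to-one correspondence described above can be pictured with a minimal sketch. It assumes an "organized" capture in which pixel (u, v) of the 2D image maps to index v * width + u in the flattened point cloud; this storage layout is an illustrative assumption, as the patent does not specify one.

```python
# Illustrative sketch (not from the patent): organized point cloud where
# pixel (u, v) maps to the point at index v * width + u, row-major.

def pixel_to_point(u, v, width, cloud):
    """Return the 3D point mapped to pixel (u, v) in an organized cloud."""
    return cloud[v * width + u]

# Toy 2x2 "organized" cloud: one (x, y, z) tuple per pixel, row-major.
width, height = 2, 2
cloud = [(0.0, 0.0, 1.0), (1.0, 0.0, 1.2),
         (0.0, 1.0, 1.1), (1.0, 1.0, 1.3)]

assert pixel_to_point(1, 0, width, cloud) == (1.0, 0.0, 1.2)
assert pixel_to_point(0, 1, width, cloud) == (0.0, 1.0, 1.1)
```

With this layout, any subset of pixels selected in the 2D image immediately selects the corresponding subset of 3D points, which is what the later screening steps rely on.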
S103: identifying a two-dimensional code on the coding board, which is not covered by the target workpiece, based on the first pixel point set;
with reference to fig. 3, this step is implemented by the following steps:
(1) presetting a pixel set corresponding to each two-dimensional code on the coding board;
before the encoding board is shot, pixel information corresponding to each two-dimensional code on the encoding board is obtained in advance, specifically, each two-dimensional code comprises a group of pixel sets, each two-dimensional code corresponds to one group of pixel sets, and the specific two-dimensional code can be identified by a characteristic value (the characteristic value is calculated by matching pixel set points) of the corresponding group of pixel sets.
(2) Traversing the first pixel point set, and determining the groups of pixel sets consistent with each preset pixel set;
The first pixel point set is traversed and the groups of pixel sets consistent with the preset groups are determined. The traversal can be implemented with the OpenCV function aruco::detectMarkers(): it is called with the first pixel point set (which may be in the form of a 2D image) and the preset groups of pixel sets, and the characteristic values and coordinates of the corresponding two-dimensional codes are screened out. The characteristic values are the preset characteristic values of the groups of pixel sets, each characteristic value corresponding to one group. At the same time, the coordinate values of each two-dimensional code in the coordinate system of the first pixel point set (called the pixel coordinate system) are obtained; the coordinates representing a two-dimensional code may be those of its four corners.
(3) And the multiple groups of pixel sets correspond to the two-dimensional code which is not covered by the target workpiece.
The two-dimensional code corresponding to each characteristic value is output, each corresponding to one group of pixel sets. These two-dimensional codes were acquired completely and are not occluded or covered by the target workpiece placed on the coding plate.
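The three steps above can be sketched as follows. The frozenset "feature" and the PRESET table are illustrative stand-ins for real marker decoding (e.g. OpenCV's aruco::detectMarkers() decodes the bit pattern of each marker); the point is only the matching logic: a marker counts as uncovered when its full preset pixel set appears in the image.

```python
# Hedged sketch of S103: match candidate pixel sets against preset per-marker
# pixel sets. Marker ids and pixel coordinates below are made up.

PRESET = {
    0: frozenset({(0, 0), (0, 1), (1, 0)}),   # preset pixel set of marker 0
    1: frozenset({(5, 5), (5, 6), (6, 5)}),   # preset pixel set of marker 1
}

def identify_uncovered_markers(candidate_sets):
    """Return ids of markers whose complete pixel set appears in the image."""
    found = []
    for pixels in candidate_sets:
        for marker_id, preset in PRESET.items():
            if frozenset(pixels) == preset:
                found.append(marker_id)
    return sorted(found)

# Marker 1 is fully visible; marker 0 is partially covered (one pixel missing),
# so it is not reported as uncovered.
candidates = [{(5, 5), (5, 6), (6, 5)}, {(0, 0), (0, 1)}]
assert identify_uncovered_markers(candidates) == [1]
```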
S104: and acquiring three-dimensional point cloud information of the target workpiece based on the two-dimensional code pixel information and the first point cloud set.
With reference to fig. 4, this step is implemented by the following steps:
(1) acquiring the coordinates of the central point of each two-dimensional code based on the pixel information of the two-dimensional code;
The coordinates of the center point of each two-dimensional code not covered by the target workpiece are acquired from the pixel information obtained for those codes. Specifically, in step S103 the OpenCV function aruco::detectMarkers() is called with the first pixel point set (which may be in the form of a 2D image) and the preset groups of pixel sets to screen out the characteristic values and coordinates of the corresponding two-dimensional codes. The coordinate values of each two-dimensional code may be those of its four corners, and the center point coordinates are calculated as the average of the four corner coordinate values.
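The center-point calculation described here, averaging the four corner coordinates, can be written directly (function name is illustrative):

```python
def marker_center(corners):
    """Center of a marker as the mean of its four corner pixel coordinates."""
    xs = [c[0] for c in corners]
    ys = [c[1] for c in corners]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# Square marker with corners at (10,10), (30,10), (30,30), (10,30):
assert marker_center([(10, 10), (30, 10), (30, 30), (10, 30)]) == (20.0, 20.0)
```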
(2) Constructing a minimum external polygon of the coding plate based on the central point coordinates of the two-dimensional codes;
and constructing a minimum circumscribed polygon surrounding the workpiece based on the central point coordinates of the two-dimensional codes which are not covered by the target workpiece, and specifically realizing the minimum circumscribed polygon by using an OpenCV function.
The specific minimum circumscribed polygon can be different according to the shape of the coding plate and the arrangement mode of the two-dimensional code pattern of the coding plate. The specific embodiment proposes two types of code boards, which are respectively as follows, but not limited to the following two types:
in this embodiment, the process of placing the target workpiece on the code plate and constructing the minimum circumscribed polygon of the code plate based on the coordinates of the center point of each two-dimensional code includes the following steps:
In some embodiments, to increase the efficiency of identifying the two-dimensional codes and reduce the amount of computation, a coding plate with a rectangular frame may be used, with a two-dimensional code at each of the four corners of the frame and the rest of the plate left blank. The target workpiece is placed in the blank area of the coding plate, and the 3D camera captures a first point cloud set and a first pixel point set containing the coding plate, the target workpiece and the surrounding environment. The pixel point set is traversed to find the pixel sets of the four corner two-dimensional codes, the center point coordinates of the four codes are determined from their pixel information, and a circumscribed quadrilateral is constructed from the four center point coordinates.
In other embodiments, a polygonal coding plate may be adopted whose entire surface is covered with two-dimensional codes. With reference to fig. 5, when the target workpiece is placed on such a plate, some two-dimensional codes are inevitably covered at random while the fully exposed ones remain visible. The pixel point set is traversed, all uncovered complete two-dimensional codes are identified, and the coordinates of their center points are acquired. The minimum circumscribed polygon is then recognized with an OpenCV function; the polygon can be a rectangle, a pentagon or a hexagon, without limitation, and its specific shape can be chosen according to the appearance of the workpiece or the required calculation precision;
In the above embodiment, the boundingRect() or minAreaRect() function may be used. boundingRect() wraps the found shape with a minimum upright rectangle: given a point set, it outputs the minimum axis-aligned rectangle enclosing the input. minAreaRect() obtains the rectangle of minimum area containing a point set; this rectangle may have a deflection angle and need not be parallel to the image boundary, and given a point set the coordinates of its four corners are output;
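As a hedged illustration of what boundingRect() computes, a pure-Python equivalent for the axis-aligned case might look like the sketch below (minAreaRect() additionally allows rotation and is not reproduced here; the function name and sample centers are illustrative):

```python
def bounding_rect(points):
    """Axis-aligned bounding rectangle (x, y, w, h) of a 2D point set,
    analogous to OpenCV's boundingRect()."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x, y = min(xs), min(ys)
    return (x, y, max(xs) - x, max(ys) - y)

# Center points of four uncovered corner markers (made-up coordinates):
centers = [(20.0, 20.0), (100.0, 18.0), (98.0, 80.0), (22.0, 82.0)]
assert bounding_rect(centers) == (20.0, 18.0, 80.0, 64.0)
```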
(3) acquiring a second pixel point set in the minimum circumscribed polygon;
After the minimum circumscribed polygon surrounding the target workpiece is obtained, the pixel points outside it can be removed with the OpenCV function pointPolygonTest(), yielding the second pixel point set, i.e. the set of all pixel points inside the minimum circumscribed polygon. Specifically, the first pixel point set is traversed with pointPolygonTest() and the pixel points outside the polygon are deleted, leaving the second pixel point set. By constructing the minimum circumscribed polygon, pixel points that obviously do not belong to the workpiece are rapidly removed, which improves the efficiency of the point cloud processing.
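A minimal stand-in for pointPolygonTest(), using the classic ray-casting test, shows how the first pixel point set could be filtered down to the second (function names and sample coordinates are illustrative, not from the patent):

```python
def point_in_polygon(pt, poly):
    """Ray-casting test: True if pt lies strictly inside the polygon
    given as a list of vertices in order."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Count edge crossings of a horizontal ray going right from pt.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

polygon = [(0, 0), (10, 0), (10, 10), (0, 10)]   # toy circumscribed rectangle
pixels = [(5, 5), (15, 5), (2, 9), (-1, 4)]      # toy first pixel point set
second_set = [p for p in pixels if point_in_polygon(p, polygon)]
assert second_set == [(5, 5), (2, 9)]
```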
(4) Acquiring a second point cloud set corresponding to the second pixel point set mapping in the first point cloud set;
based on the mapping relationship between pixel points and point cloud points, a second point cloud set can be obtained that lies within the first point cloud set and corresponds one-to-one to the second pixel point set; the three-dimensional point cloud of the target workpiece is contained in this second point cloud set.
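One plausible form of this pixel-to-point mapping is sketched below, assuming an organized point cloud in which pixel (u, v) maps to cloud[v, u]; the array layout and names are assumptions, since 3D camera SDKs differ in how they expose this correspondence.

```python
import numpy as np

# An organized point cloud: one XYZ triple per image pixel, so the 2D
# image and the 3D cloud share indices (an assumed but common layout).
h, w = 4, 5
cloud = np.arange(h * w * 3, dtype=float).reshape(h, w, 3)  # first point cloud set

# Second pixel point set: (u, v) coordinates retained inside the polygon.
second_pixels = np.array([[1, 0], [3, 2]])

# Second point cloud set: the 3D points mapped one-to-one from the pixels.
u, v = second_pixels[:, 0], second_pixels[:, 1]
second_cloud = cloud[v, u]
```
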
(5) Acquiring three-dimensional point cloud information of the target workpiece based on the second point cloud set:
the second point cloud set is then further processed and screened to obtain the three-dimensional point cloud information of the target workpiece. Since the target workpiece is placed on the coding plate, its three-dimensional point cloud does not lie in the same plane as that of the coding plate. This embodiment therefore provides a processing method: the second point cloud set is processed to obtain the point cloud information lying in the same plane as the coding plate, that coplanar point cloud information is removed, and what remains is the three-dimensional point cloud information of the target workpiece.
To identify the point cloud information in the second point cloud set that lies in the same plane as the coding plate, a spatial plane equation of the coding plate is obtained by plane fitting, the points on the coding plate are removed from the second point cloud set, and the three-dimensional point cloud information of the target workpiece is obtained. For removing the points on the coding plate from the second point cloud set, this embodiment uses PCL functions to identify the plane and remove its points;
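The embodiment uses PCL for this plane segmentation; as an illustrative stand-in, the same idea can be sketched in NumPy with a least-squares plane fit. This is a simplification of the RANSAC-style segmentation PCL typically performs, and it assumes most points lie on the plate; the function name, tolerance, and synthetic data are all assumptions.

```python
import numpy as np

def remove_plane_points(points, tol=0.01):
    """Fit a plane z = a*x + b*y + c by least squares (valid when the
    plate dominates the cloud) and keep only points farther than `tol`
    from that plane, i.e. the points not on the coding plate."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    dist = np.abs(A @ coeffs - points[:, 2]) / np.sqrt(coeffs[0]**2 + coeffs[1]**2 + 1)
    return points[dist > tol]

# Synthetic data: many points on the plate plane z = 0, plus a few
# workpiece points clearly above it.
rng = np.random.default_rng(0)
plate = np.c_[rng.uniform(0, 1, (200, 2)), np.zeros(200)]
workpiece = np.array([[0.5, 0.5, 0.30], [0.6, 0.5, 0.35]])
remaining = remove_plane_points(np.vstack([plate, workpiece]), tol=0.05)
```
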
the three-dimensional point cloud information of the target workpiece can be refined further: the point cloud information in the second point cloud set lying in the same plane as the coding plate is identified and removed, yielding a third point cloud set; noise reduction is then applied to the third point cloud set to obtain the three-dimensional point cloud information of the target workpiece. For the noise reduction of the point cloud, this embodiment uses the statistical noise reduction function of PCL, i.e., the StatisticalOutlierRemoval filter.
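A minimal NumPy sketch of the statistical criterion behind PCL's StatisticalOutlierRemoval is given below: each point's mean distance to its k nearest neighbors is compared against a global threshold of mean plus std_ratio standard deviations. The brute-force neighbor search and parameter defaults are assumptions for illustration, not PCL's implementation.

```python
import numpy as np

def statistical_outlier_removal(points, k=8, std_ratio=1.0):
    """NumPy sketch of statistical outlier removal: discard points whose
    mean k-nearest-neighbor distance exceeds the global mean plus
    std_ratio standard deviations."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    knn = np.sort(d, axis=1)[:, 1:k + 1]   # skip column 0 (distance to self)
    mean_d = knn.mean(axis=1)
    thresh = mean_d.mean() + std_ratio * mean_d.std()
    return points[mean_d <= thresh]

# A dense cluster plus one far-away noise point.
rng = np.random.default_rng(1)
cluster = rng.normal(0.0, 0.01, (50, 3))
noise = np.array([[1.0, 1.0, 1.0]])
cleaned = statistical_outlier_removal(np.vstack([cluster, noise]))
```
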
Under conditions where a target workpiece, a coding plate, and noise points are all present, the automatic three-dimensional modeling method provided by the embodiments of the present disclosure divides the data into two point sets: the first contains the workpiece and the coding plate, and the second contains all the noise points. By identifying the codes on the coding plate and its rectangular frame, the noise points and the plane points of the coding plate are removed. A formerly manual process is thereby automated, which increases modeling speed, saves labor, and reduces human error.
It will be understood by those skilled in the art that in the method of the present invention, the order of writing the steps does not imply a strict order of execution and any limitations on the implementation, and the specific order of execution of the steps should be determined by their function and possible inherent logic.
Based on the same inventive concept, an embodiment of the present disclosure also provides an automatic three-dimensional modeling device corresponding to the automatic three-dimensional modeling method. Because the principle by which the device solves the problem is similar to that of the method described above, the implementation of the device may refer to the implementation of the method, and repeated details are not described again.
As shown in fig. 6, an automatic three-dimensional modeling apparatus 100 provided by an embodiment of the present disclosure includes:
the camera module 101, which is mounted on a carrying support and is a 3D vision camera, is used for scanning the area where the target workpiece is located;
a point cloud acquisition module 102, configured to acquire 3D point cloud information and 2D image information of the target object based on the camera module;
the coding plate rectangular frame identification module 103 is used for determining the rectangular frame of the area where the coding plate is located within the area of the target workpiece, based on the 3D point cloud information and the 2D image information acquired by the point cloud acquisition module;
the code identification module 104 is used for determining the spatial coordinates of the codes on the coding plate plane, based on the 3D point cloud information and the 2D image information acquired by the point cloud acquisition module;
the noise point removing module 105 is used for removing noise points outside the rectangular frame, based on the 3D point cloud information and the 2D image information acquired by the point cloud acquisition module and the rectangular frame determined by the coding plate rectangular frame identification module;
the code removing module 106 is used for removing the points lying on the coding plate plane within the rectangle, based on the 3D point cloud information and the 2D image information acquired by the point cloud acquisition module and the codes determined by the code identification module;
in an optional implementation manner, the point cloud obtaining module is specifically configured to:
determining two point cloud sets based on the 3D vision camera in the camera module 101, wherein the first point cloud set comprises the target workpiece and the coding plate bearing it, and the second point cloud set comprises all the noise points;
in an optional implementation manner, the code plate rectangular frame identification module 103 is specifically configured to:
determining the rectangular frame corresponding to the border of the coding plate, based on the second point cloud set acquired by the point cloud acquisition module 102 and the identifiers of the ArUco codes at the four corners of the coding plate;
in another optional implementation manner, the code plate rectangular frame identification module 103 is specifically configured to:
determining a complete code which is not covered by the target workpiece based on the second point cloud set acquired by the point cloud acquisition module 102 and the code on the plane of the code plate determined by the code identification module 104, further determining coordinates of center points of all complete two-dimensional codes, and determining a rectangular frame corresponding to a frame of the code plate by performing minimum circumscribed rectangle identification;
in an optional implementation, the noise removing module 105 is specifically configured to:
based on the first point cloud set acquired by the point cloud acquisition module 102 and the spatial coordinates of the codes on the plane of the code plate determined by the code identification module 104, performing plane fitting to obtain a spatial plane equation of the code plate, screening all points in the first point cloud set, regarding the points on the plane as the points of the code plate, and removing the points.
Corresponding to the automatic three-dimensional modeling method in fig. 1, an embodiment of the present disclosure further provides a computer device 200, as shown in fig. 7, a schematic structural diagram of the computer device 200 provided in the embodiment of the present disclosure includes: a processor 201, a memory 202, and a bus 203. The memory 202 stores machine-readable instructions executable by the processor 201, the processor 201 and the memory 202 communicating via the bus 203 when the computer device 200 is running, the machine-readable instructions, when executed by the processor 201, being capable of performing the steps of the aforementioned automated three-dimensional modeling method.
The embodiments of the present disclosure also provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, performs the steps of the automatic three-dimensional modeling method described in the above method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium. The embodiments of the present disclosure also provide a computer program product, where the computer program product carries a program code, and instructions included in the program code may be used to execute the steps of the automatic three-dimensional modeling method in the foregoing method embodiments, which may be referred to specifically in the foregoing method embodiments, and are not described herein again.
The computer program product may be implemented by hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium, and in another alternative embodiment, an embodiment of the present disclosure may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps in the method of acquiring a point cloud collection of a target object according to various embodiments of the present disclosure described in the "exemplary methods" section above in this specification.
The computer program product may write program code for carrying out operations of embodiments of the present disclosure in any combination of one or more programming languages, including object-oriented programming languages such as Java or C++, as well as conventional procedural programming languages such as the "C" programming language or similar languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present disclosure may also be a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform the steps in the method of acquiring a point cloud collection of a target object according to various embodiments of the present disclosure described in the "exemplary methods" section above in this specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present disclosure in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present disclosure are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present disclosure. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the disclosure is not intended to be limited to the specific details so described.
The block diagrams of the devices, apparatuses, and systems involved in the present disclosure are given only as illustrative examples and are not intended to require or imply that the connections, arrangements, or configurations must be made in the manner shown. As those skilled in the art will appreciate, these devices, apparatuses, and systems may be connected, arranged, or configured in any manner. Words such as "including," "comprising," and "having" are open-ended, mean "including but not limited to," and may be used interchangeably therewith. The word "or," as used herein, means "and/or" and is used interchangeably therewith, unless the context clearly dictates otherwise. The phrase "such as" means "such as but not limited to" and is used interchangeably therewith.
It is also noted that in the devices, apparatuses, and methods of the present disclosure, each component or step can be decomposed and/or recombined. These decompositions and/or recombinations are to be considered equivalents of the present disclosure.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit embodiments of the disclosure to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (10)

1. A method of automated three-dimensional modeling, the method comprising:
arranging a 3D camera and a coding plate, and placing a target workpiece on the coding plate;
acquiring a first point cloud set and a first pixel point set which comprise an encoding plate and a target workpiece, wherein the first point cloud set and the first pixel point set are in a mapping relation;
identifying a two-dimensional code on the coding board, which is not covered by the target workpiece, based on the first pixel point set;
and acquiring three-dimensional point cloud information of the target workpiece based on the two-dimensional code pixel information and the first point cloud set.
2. The method of claim 1, wherein the identifying a two-dimensional code on the code plate that is not covered by the target workpiece based on the first set of pixels comprises:
presetting a pixel set corresponding to each two-dimensional code on the coding board;
traversing the first pixel point set, and determining a plurality of groups of pixel sets consistent with the preset pixel sets;
the multiple groups of pixel sets correspond to two-dimensional codes which are not covered by the target workpiece.
3. The method of claim 1, wherein the obtaining three-dimensional point cloud information of the target workpiece based on the two-dimensional code pixel information and the first point cloud set comprises:
acquiring the coordinates of the central point of each two-dimensional code based on the pixel information of the two-dimensional code not covered by the target workpiece;
constructing a minimum external polygon of the coding plate based on the central point coordinates of the two-dimensional codes;
acquiring a second pixel point set in the minimum circumscribed polygon;
acquiring a second point cloud set corresponding to the second pixel point set mapping in the first point cloud set;
and acquiring the three-dimensional point cloud information of the target workpiece based on the second point cloud set.
4. The method of claim 3, wherein the obtaining three-dimensional point cloud information of the target workpiece based on the second point cloud set comprises:
and identifying point cloud information in the second point cloud set, which is positioned on the same plane as the coding plate, eliminating the point cloud information positioned on the same plane as the coding plate, and acquiring the three-dimensional point cloud information of the target workpiece.
5. The method of claim 4, wherein the culling the point cloud information located in the same plane as the code plate comprises:
and obtaining a space plane equation of the coding plate through plane fitting, identifying the point cloud information which is positioned on the same plane with the coding plate through the space plane equation, and rejecting the point cloud information which is positioned on the same plane with the coding plate.
6. The method of claim 4, wherein the removing the point cloud information located on the same plane as the encoding plate to obtain the three-dimensional point cloud information of the target workpiece comprises:
identifying point cloud information in the second point cloud set, which is located on the same plane as the coding plate, and acquiring a third point cloud set after eliminating the point cloud information in the same plane as the coding plate;
and carrying out noise reduction treatment on the third point cloud set to obtain the three-dimensional point cloud information of the target workpiece.
7. The method according to claim 3, wherein the code plate is a quadrilateral, and the code plate comprises four two-dimensional codes respectively located at four vertices of the quadrilateral.
8. The method according to claim 3, wherein the code plate is integrally arranged with a two-dimensional code.
9. The method according to any one of claims 4 to 5, wherein the two-dimensional code is ArUco two-dimensional code.
10. The method of claim 7, wherein the placing the target workpiece on the code plate comprises: the target workpiece is placed in a blank without codes on the surface of the code plate.
CN202210030566.3A 2022-01-12 2022-01-12 Automatic three-dimensional modeling method Pending CN114494588A (en)

Publication: CN114494588A, published 2022-05-13; family ID 81511754; application CN202210030566.3A (CN), pending.


Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
Address after: Room 407, building 5, yard 98, lianshihu West Road, Mentougou District, Beijing 102300
Applicant after: AUBO (Beijing) Intelligent Technology Co.,Ltd.
Address before: 100000 301a1, building 5, Shilong Sunshine Building, No. 98, lianshihu West Road, Mentougou District, Beijing
Applicant before: AUBO (BEIJING) ROBOTICS TECHNOLOGY Co.,Ltd.