CN112488910B - Point cloud optimization method, device and equipment - Google Patents

Point cloud optimization method, device and equipment

Info

Publication number
CN112488910B
Authority
CN
China
Prior art keywords
filtering
pixel
point cloud
dimensional
point
Prior art date
Legal status
Active
Application number
CN202011279945.3A
Other languages
Chinese (zh)
Other versions
CN112488910A (en)
Inventor
李玉成
Current Assignee
Guangzhou Shiyuan Electronics Thecnology Co Ltd
Original Assignee
Guangzhou Shiyuan Electronics Thecnology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Shiyuan Electronics Thecnology Co Ltd
Priority to CN202011279945.3A
Publication of CN112488910A
Application granted
Publication of CN112488910B
Active legal status
Anticipated expiration legal status


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/06: Topological mapping of higher dimensional structures onto lower dimensional surfaces
    • G06T3/067: Reshaping or unfolding 3D tree structures onto 2D planes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/50: Depth or shape recovery

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a point cloud optimization method, device, and equipment. The point cloud optimization method comprises the following steps: acquiring a three-dimensional point cloud corresponding to a target object; projecting the three-dimensional point cloud onto the camera imaging plane to obtain a corresponding two-dimensional depth map, in which the original pixel value of each pixel point is the depth value of the corresponding point in the three-dimensional point cloud; performing a convolution filtering operation on the two-dimensional depth map with each of a plurality of preset directional filter kernels to obtain a plurality of filtered pixel values for each pixel point, where the filtering boundary of each directional filter kernel represents a different form of image edge; and replacing the original pixel value of each pixel point with the filtered pixel value that differs least from it, yielding the optimized two-dimensional depth map. Compared with the prior art, presetting a plurality of directional filter kernels achieves smooth filtering of the point cloud and improves the accuracy of point cloud optimization.

Description

Point cloud optimization method, device and equipment
Technical Field
The embodiments of the present application relate to the technical field of optical inspection, and in particular to a point cloud optimization method, device, and equipment.
Background
Three-dimensional automated optical inspection (3D Automated Optical Inspection, 3D AOI) systems typically use structured-light imaging to obtain a high-precision three-dimensional point cloud of a target object. In actual measurement, however, because the resolution of the camera pixels is limited, the resulting three-dimensional point cloud contains a large amount of noise and many holes, so the point cloud must be optimized after it is acquired.
Conventional point cloud optimization generally smooths and denoises the three-dimensional point cloud with methods such as moving least squares and statistical filtering. These methods can improve the quality of the three-dimensional point cloud, but they involve computing a large number of normal vectors, so the operation is very time-consuming and can hardly meet the requirement of real-time inspection.
Disclosure of Invention
The embodiments of the present application provide a point cloud optimization method, device, and equipment that can optimize a point cloud efficiently, solving the problems of high computational cost and poor real-time performance of point cloud optimization. The technical solution is as follows:
in a first aspect, an embodiment of the present application provides a point cloud optimization method, including:
acquiring a three-dimensional point cloud corresponding to a target object;
projecting the three-dimensional point cloud to a camera imaging plane to obtain a two-dimensional depth map corresponding to the three-dimensional point cloud; the original pixel value of the pixel point in the two-dimensional depth map is the depth value of the corresponding pixel point in the three-dimensional point cloud;
performing a convolution filtering operation on the two-dimensional depth map with each of a plurality of preset directional filter kernels to obtain a plurality of filtered pixel values corresponding to each pixel point in the two-dimensional depth map, wherein the filtering boundary of each directional filter kernel represents a different form of image edge;
and replacing the original pixel value of each pixel point with the filtered pixel value that differs least from that original value, to obtain the optimized two-dimensional depth map.
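Taken together, the steps of the first aspect can be sketched end to end as follows. The helper names, the 0/1-mask representation of the directional kernels, and the edge-replication border handling are assumptions of this sketch, not details fixed by the claims:

```python
import numpy as np

def masked_mean_filter(depth, kernel):
    # Each output pixel is the mean of the original depth values covered by
    # the kernel's effective (value 1) elements. Border handling by edge
    # replication is an assumption; the text does not specify it.
    r = kernel.shape[0] // 2
    padded = np.pad(depth, r, mode="edge")
    out = np.empty(depth.shape, dtype=float)
    n = kernel.sum()
    for y in range(depth.shape[0]):
        for x in range(depth.shape[1]):
            region = padded[y:y + 2 * r + 1, x:x + 2 * r + 1]
            out[y, x] = (region * kernel).sum() / n
    return out

def optimize_depth_map(depth, kernels):
    # Replacement rule of the claims: per pixel, keep the filtered value
    # that differs least from the original pixel value.
    cand = np.stack([masked_mean_filter(depth, k) for k in kernels])
    best = np.abs(cand - depth[None]).argmin(axis=0)
    rows = np.arange(depth.shape[0])[:, None]
    cols = np.arange(depth.shape[1])[None, :]
    return cand[best, rows, cols]
```

With a step edge and a pair of left-side/right-side half-kernels, the closest-value rule leaves the step intact while smoothing flat regions, which is the edge-preserving behaviour the claims describe.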
In a second aspect, an embodiment of the present application provides a method for optimizing a point cloud of a circuit board, including the steps of:
acquiring a three-dimensional point cloud corresponding to a target circuit board;
projecting the three-dimensional point cloud to a camera imaging plane to obtain a two-dimensional depth map corresponding to the three-dimensional point cloud; the original pixel value of the pixel point in the two-dimensional depth map is the depth value of the corresponding pixel point in the three-dimensional point cloud;
performing a convolution filtering operation on the two-dimensional depth map with each of a plurality of preset directional filter kernels to obtain a plurality of filtered pixel values corresponding to each pixel point in the two-dimensional depth map, wherein the filtering boundary of each directional filter kernel represents a different form of image edge;
replacing the original pixel value of each pixel point with the filtered pixel value that differs least from that original value, to obtain an optimized two-dimensional depth map corresponding to the target circuit board;
and converting the pixel value of the pixel point in the optimized two-dimensional depth map into the depth value of the corresponding pixel point in the three-dimensional point cloud, and obtaining the optimized three-dimensional point cloud corresponding to the target circuit board.
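The final step of the second aspect inverts the earlier projection: each pixel of the optimized depth map becomes a point of the optimized cloud again. A minimal sketch, assuming NaN was used to mark pixels with no corresponding cloud point:

```python
import numpy as np

def depth_map_to_points(depth):
    # Inverse of the projection step: each valid pixel (x, y) of the
    # optimized depth map becomes the 3-D point (x, y, depth[y, x]).
    # NaN as the hole marker is an assumption of this sketch.
    ys, xs = np.nonzero(~np.isnan(depth))
    return [(float(x), float(y), float(depth[y, x])) for y, x in zip(ys, xs)]
```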
In a third aspect, an embodiment of the present application provides a point cloud optimization apparatus, including:
the first point cloud acquisition unit is used for acquiring a three-dimensional point cloud corresponding to the target object;
the first projection unit is used for projecting the three-dimensional point cloud to a camera imaging plane to obtain a two-dimensional depth map corresponding to the three-dimensional point cloud; the original pixel value of the pixel point in the two-dimensional depth map is the depth value of the corresponding pixel point in the three-dimensional point cloud;
the first filtering unit is used for respectively carrying out convolution filtering operation on the two-dimensional depth map according to a plurality of preset direction filtering kernels to obtain a plurality of filtering pixel values corresponding to each pixel point in the two-dimensional depth map; wherein, the filtering boundary of each direction filtering kernel respectively represents the image edges of different forms;
and the first optimizing unit, which is used for replacing the original pixel value of each pixel point with the filtered pixel value that differs least from that original value, to obtain an optimized two-dimensional depth map.
In a fourth aspect, an embodiment of the present application provides a point cloud optimization apparatus for a circuit board, including:
the second point cloud acquisition unit is used for acquiring the three-dimensional point cloud corresponding to the target circuit board;
the second projection unit is used for projecting the three-dimensional point cloud to a camera imaging plane to obtain a two-dimensional depth map corresponding to the three-dimensional point cloud; the original pixel value of the pixel point in the two-dimensional depth map is the depth value of the corresponding pixel point in the three-dimensional point cloud;
the second filtering unit is used for respectively carrying out convolution filtering operation on the two-dimensional depth map according to a plurality of preset direction filtering kernels to obtain a plurality of filtering pixel values corresponding to each pixel point in the two-dimensional depth map; wherein, the filtering boundary of each direction filtering kernel respectively represents the image edges of different forms;
the second optimizing unit is used for replacing the original pixel value of each pixel point with the filtered pixel value that differs least from that original value, to obtain an optimized two-dimensional depth map corresponding to the target circuit board;
and the third point cloud acquisition unit is used for converting the pixel values of the pixel points in the optimized two-dimensional depth map into the depth values of the corresponding pixel points in the three-dimensional point cloud, to obtain an optimized three-dimensional point cloud corresponding to the target circuit board.
In a fifth aspect, an embodiment of the present application provides a point cloud optimization apparatus, including a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the point cloud optimization method of the first aspect or the steps of the circuit-board point cloud optimization method of the second aspect.
In a sixth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the point cloud optimization method of the first aspect or the steps of the circuit-board point cloud optimization method of the second aspect.
According to the embodiments of the present application, the three-dimensional point cloud corresponding to the target object is projected onto the camera imaging plane, reducing the dimensionality of the point cloud data and yielding the two-dimensional depth map corresponding to the three-dimensional point cloud, which ensures the real-time performance of the subsequent point cloud optimization. A plurality of directional filter kernels are then preset according to the different forms of image edges, so that the filtering boundaries of the different directional kernels fully reflect those forms. Convolution filtering is performed on the two-dimensional depth map with each preset directional kernel to obtain a plurality of filtered pixel values for each pixel point, and the original pixel value of each pixel point is replaced with the filtered value that differs least from it, yielding the optimized two-dimensional depth map. This avoids the complex parameter tuning of conventional filtering and improves the robustness of the method, while achieving smooth filtering of the point cloud and improving the accuracy of point cloud optimization without distorting the image edge forms.
For a better understanding and implementation, the technical solutions of the present application are described in detail below with reference to the accompanying drawings.
Drawings
Fig. 1 is a schematic flow chart of a point cloud optimization method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a directional filter kernel including vertical filter boundaries according to one embodiment of the present application;
FIG. 3 is a schematic diagram of a directional filter kernel including a horizontal filter boundary according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a directional filter kernel including a sloped filter boundary according to one embodiment of the present application;
FIG. 5 is a schematic diagram of a direction filter kernel including corner filter boundaries according to an embodiment of the present application;
fig. 6 is a schematic flow chart of S103 in the point cloud optimization method according to an embodiment of the present application;
fig. 7 is a schematic flow chart of S104 in the point cloud optimization method according to an embodiment of the present application;
FIG. 8 is a schematic diagram of comparison of optimized results of a two-dimensional depth map according to an embodiment of the present application;
fig. 9 is a flow chart of a point cloud optimization method according to another embodiment of the present application;
fig. 10 is a flow chart of a point cloud optimization method of a circuit board according to an embodiment of the present application;
FIG. 11 is a schematic diagram illustrating comparison of the results of the point cloud optimization of a circuit board according to one embodiment of the present disclosure;
fig. 12 is a schematic structural diagram of a point cloud optimizing device according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of a point cloud optimizing device of a circuit board according to an embodiment of the present application;
fig. 14 is a schematic structural diagram of a point cloud optimizing device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with some aspects of the present application, as detailed in the appended claims.
The terminology used in the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the present application. As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, the information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another. For example, a first message may also be referred to as a second message, and similarly, a second message may also be referred to as a first message, without departing from the scope of the present application. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
Referring to fig. 1, a flow chart of a point cloud optimization method according to an embodiment of the present application is shown, and the method includes the following steps:
S101: acquiring a three-dimensional point cloud corresponding to the target object.
In an alternative embodiment, the execution subject of the point cloud optimization method may be a device capable of directly collecting a three-dimensional point cloud, such as a three-dimensional laser scanner or a three-dimensional optical detector, or a component of such a device, such as its processor or microprocessor. In another alternative embodiment, the execution subject may be another device that establishes a data connection with a three-dimensional laser scanner, three-dimensional optical detector, or similar device, and acquires the three-dimensional point cloud indirectly through it. In other alternative embodiments, the execution subject may also be an integrated device with a built-in three-dimensional laser scanning or three-dimensional optical detection function, or a component of such an integrated device.
In the embodiments of the present application, a device that establishes a data connection with a three-dimensional optical detector (hereinafter referred to as the point cloud optimization device) is taken as the execution subject of the point cloud optimization method.
Specifically, the point cloud optimizing device establishes data connection with the three-dimensional optical detector, and acquires the three-dimensional point cloud corresponding to the target object from the three-dimensional optical detector.
The target object may be an object of any form or shape; in an alternative embodiment, the target object may be a printed circuit board (Printed Circuit Board, PCB).
The three-dimensional point cloud corresponding to the target object is a collection of massive points representing the surface characteristics of the target object.
The point cloud optimization device acquires the three-dimensional point cloud corresponding to the target object through the three-dimensional optical detector as follows: first, the three-dimensional optical detector projects structured light onto the target object; then, the light reflected by the surface of the target object is imaged in a camera, and the three-dimensional coordinates of each point on the surface of the target object are obtained by analyzing the phase value of each pixel point in the image; finally, the three-dimensional point cloud corresponding to the target object is obtained.
In this embodiment of the present application, when a three-dimensional point cloud corresponding to a target object is acquired by using a three-dimensional optical detector, a camera imaging plane of the three-dimensional optical detector is parallel to a plane on which the target object is placed, so that three-dimensional coordinates of each point on a surface of the target object detected by the three-dimensional optical detector include two-dimensional coordinates of each point and a distance value between each point and the camera imaging plane (which may also be understood as a depth value of each point relative to the camera imaging plane).
S102: projecting the three-dimensional point cloud to a camera imaging plane to obtain a two-dimensional depth map corresponding to the three-dimensional point cloud; the original pixel value of the pixel point in the two-dimensional depth map is the depth value of the corresponding pixel point in the three-dimensional point cloud.
The point cloud optimization device projects the three-dimensional point cloud corresponding to the target object onto the camera imaging plane, i.e., onto a two-dimensional plane, to obtain the two-dimensional depth map corresponding to the three-dimensional point cloud, which reduces the amount of computation and meets the requirement of real-time optimization.
In the two-dimensional depth map, the original pixel value of the pixel point is the depth value of the corresponding pixel point in the three-dimensional point cloud, and the two-dimensional coordinate of the pixel point is the two-dimensional coordinate of the corresponding pixel point in the three-dimensional point cloud.
For example, if the three-dimensional coordinates of a point in the three-dimensional point cloud are (x1, y1, z1), where z1 is the point's depth value relative to the camera imaging plane, then after the three-dimensional point cloud is projected onto the camera imaging plane, the pixel point corresponding to that point in the two-dimensional depth map has two-dimensional coordinates (x1, y1) and pixel value z1.
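The projection just described can be sketched directly. The function name, the assumption that x and y are already integral pixel coordinates (the camera plane is parallel to the object plane, as stated above), and the use of NaN as a hole marker are illustrative choices, not part of the original method:

```python
import numpy as np

def project_to_depth_map(points, height, width):
    # Each cloud point (x, y, z) lands on pixel (x, y) of the depth map,
    # and its depth z becomes that pixel's original value. Pixels without
    # a corresponding point stay NaN.
    depth = np.full((height, width), np.nan)
    for x, y, z in points:
        depth[int(y), int(x)] = z
    return depth
```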
S103: respectively carrying out convolution filtering operation on the two-dimensional depth map according to a plurality of preset direction filtering kernels to obtain a plurality of filtering pixel values corresponding to each pixel point in the two-dimensional depth map; wherein the filtering boundary of each direction filtering kernel represents the image edge of different forms respectively.
Since object edges are irregular, the corresponding image edges take a variety of forms. In the embodiments of the present application, a plurality of directional filter kernels are preset in the point cloud optimization device, with their filtering boundaries representing image edges of different forms, and the convolution filtering operation is performed on the two-dimensional depth map with each of these kernels, which addresses the technical problem of low filtering accuracy.
In the field of image processing, each filter kernel comprises a plurality of filter elements, whose values weight the pixel values of the covered pixel points. After an input image is convolved with a filter kernel, the filtered pixel value of each pixel point in the output image is the average obtained by weighting the input pixel values within the kernel's coverage area by the values of the filter elements. The values of the filter elements therefore directly determine the resulting filtered pixel value; in this application they are adjusted according to the different forms of image edges, so that the boundary across which the element values change represents the kernel's filtering boundary.
In an alternative embodiment, to represent the boundary of a directional filter kernel more clearly, the kernel is divided into an effective filtering area and an ineffective filtering area. Specifically, before performing the convolution filtering operation on the two-dimensional depth map, the point cloud optimization device divides each directional filter kernel into an effective filtering area and an ineffective filtering area based on the different forms of image edges, generating a plurality of directional filter kernels; the dividing boundary between the two areas represents the filtering boundary. The filter elements in the effective filtering area take the value 1, and those in the ineffective filtering area take the value 0.
It should be noted that, the setting of the values of the filter elements in the effective filter area and the values of the filter elements in the ineffective filter area in the direction filter kernel is not limited, and in other alternative embodiments, the values of the filter elements in the effective filter area and the values of the filter elements in the ineffective filter area in the direction filter kernel may be adaptively adjusted.
In another alternative embodiment, the image edge forms are divided in more detail, specifically into a vertical form, a horizontal form, an oblique form, and a corner form. Accordingly, when dividing the effective and ineffective filtering areas in the directional filter kernels based on the different forms of image edges, the kernels can be generated in the following ways:
(1) And dividing an effective filtering area and an ineffective filtering area in the direction filtering kernel in the vertical direction according to the vertical form of the image edge, and generating the direction filtering kernel comprising the vertical filtering boundary.
Specifically, referring to fig. 2, fig. 2 is a schematic structural diagram of a directional filtering kernel including a vertical filtering boundary according to an embodiment of the present application, where the directional filtering kernel shown in fig. 2 respectively represents a right vertical form and a left vertical form of an image edge, a value of a filtering element in an effective filtering area is 1, a value of a filtering element in an ineffective filtering area is 0, and a dividing boundary of the effective filtering area and the ineffective filtering area is the vertical filtering boundary.
In order to better observe the filtering boundaries of the directional filtering kernels, fig. 2 shows the effective filtering area and the ineffective filtering area with different gray values, it can be seen that the boundaries where the gray values change significantly are filtering boundaries, which also coincide with the vertical morphology of the image edges.
(2) And dividing an effective filtering area and an ineffective filtering area in the direction filtering kernel in the horizontal direction according to the horizontal form of the image edge, and generating the direction filtering kernel comprising the horizontal filtering boundary.
Specifically, referring to fig. 3, fig. 3 is a schematic structural diagram of a direction filtering kernel including a horizontal filtering boundary according to an embodiment of the present application, where the direction filtering kernel shown in fig. 3 respectively represents an upper horizontal aspect and a lower horizontal aspect of an image edge, a value of a filtering element in an effective filtering area is 1, a value of a filtering element in an ineffective filtering area is 0, and a dividing boundary of the effective filtering area and the ineffective filtering area is the horizontal filtering boundary.
In order to better observe the filtering boundary of the directional filtering kernel, fig. 3 also shows the effective filtering region and the ineffective filtering region using different gray values, and it can be seen that the boundary where the gray value is significantly changed is the filtering boundary, which also coincides with the horizontal form of the image edge.
(3) According to the oblique form of the image edge, dividing an effective filtering area and an ineffective filtering area in the direction filtering kernel in an oblique direction to generate the direction filtering kernel comprising an oblique filtering boundary.
Specifically, referring to fig. 4, fig. 4 is a schematic structural diagram of a direction filtering kernel including a slant filtering boundary according to an embodiment of the present application, where the direction filtering kernel shown in fig. 4 respectively represents a lower right angle slant form, an upper left angle slant form, a lower left angle slant form and an upper right angle slant form of an image edge, a value of a filtering element in an effective filtering area is 1, a value of a filtering element in an ineffective filtering area is 0, and a dividing boundary of the effective filtering area and the ineffective filtering area is the slant filtering boundary.
In order to better observe the filtering boundary of the directional filtering kernel, fig. 4 also shows the effective filtering region and the ineffective filtering region using different gray values, and it can be seen that the boundary where the gray value is significantly changed is the filtering boundary, which also coincides with the oblique form of the image edge.
(4) And dividing an effective filtering area and an ineffective filtering area in the direction filtering kernel by changing the direction according to the corner shape of the image edge to generate the direction filtering kernel comprising a corner filtering boundary.
Specifically, referring to fig. 5, fig. 5 is a schematic structural diagram of a direction filtering kernel including a corner filtering boundary according to an embodiment of the present application, where the direction filtering kernel shown in fig. 5 respectively represents a lower right corner form, a lower left corner form, an upper right corner form, and an upper left corner form of an image edge, a value of a filtering element in an effective filtering region is 1, a value of a filtering element in an ineffective filtering region is 0, and a dividing boundary between the effective filtering region and the ineffective filtering region is the corner filtering boundary.
In order to better observe the filtering boundary of the directional filtering kernel, fig. 5 also shows the effective filtering region and the ineffective filtering region using different gray values, and it can be seen that the boundary where the gray value is significantly changed is the filtering boundary, which also coincides with the corner shape of the image edge.
These four ways of dividing the effective and ineffective filtering areas to generate the directional filter kernels fully account for the vertical, horizontal, oblique, and corner forms of image edges, and can effectively improve the accuracy of the convolution filtering result.
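As an illustration of the four division modes, the following sketch builds 0/1 masks (effective area = 1, ineffective area = 0) for one kernel per family plus its 180-degree mirror. The exact layouts of figs. 2 to 5 are not reproduced here; the concrete side and orientation choices are assumptions:

```python
import numpy as np

def example_directional_kernels(size=5):
    # Illustrative directional kernels: one vertical, one horizontal, one
    # oblique, and one corner mask, each paired with its point-mirrored
    # counterpart to give the opposite orientation.
    half = size // 2 + 1                      # rows/cols up to the centre
    v = np.zeros((size, size)); v[:, :half] = 1         # vertical boundary
    h = np.zeros((size, size)); h[:half, :] = 1         # horizontal boundary
    o = np.tril(np.ones((size, size)))                  # oblique boundary
    c = np.zeros((size, size)); c[:half, :half] = 1     # corner boundary
    kernels = []
    for k in (v, h, o, c):
        kernels.append(k)                     # one orientation ...
        kernels.append(k[::-1, ::-1].copy())  # ... and its mirror
    return kernels
```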
Next, a specific description will be given of how the convolution filtering operation is performed on the two-dimensional depth map according to a plurality of preset direction filtering kernels. Referring to fig. 6, step S103 includes S1031 to S1034, specifically as follows:
s1031: and obtaining a target pixel point corresponding to the filter center according to the corresponding target position of the filter center of the ith direction filter core in the two-dimensional depth map.
And the point cloud optimizing equipment obtains a target pixel point corresponding to the filter center according to the corresponding target position of the filter center of the ith directional filter core in the two-dimensional depth map.
The target position of the filtering center of the direction filtering kernel in the two-dimensional depth map indicates which pixel point in the current two-dimensional depth map is subjected to the filtering operation; the pixel point at the target position is the target pixel point, namely the pixel point to be filtered.
For the sake of understanding, please refer to fig. 2 to 5, in which the star positions are the filtering centers of the filtering kernels in each direction.
S1032: and obtaining a target area covered by the direction filter kernel in the two-dimensional depth map according to the target position and the filter radius of the ith direction filter kernel.
The filter radius of the i-th direction filtering kernel does not indicate that the shape of the direction filtering kernel is circular; it is used only to calculate the target area in the two-dimensional depth map that the direction filtering kernel can cover.
Taking the direction filtering kernels shown in fig. 2 to 5 as an example, the filter radius of the direction filtering kernel is 2, that is, the size of the direction filtering kernel is 5×5, and it can cover a target area comprising 5×5 pixel points.
S1033: and acquiring an ith filtering pixel value corresponding to the target pixel point based on the values of all the filtering elements in the ith direction filtering core, the original pixel values of the pixel points in the target area and the number of effective filtering elements in the ith direction filtering core.
The point cloud optimizing device firstly carries out weighted accumulation operation based on the values of all filtering elements in the ith direction filtering core and the original pixel values of the pixel points in the target area to obtain weighted pixel values of the target pixel points, and then divides the weighted pixel values by the number of effective filtering elements in the ith direction filtering core to obtain the ith filtering pixel values corresponding to the target pixel points.
Specifically, the point cloud optimizing device obtains an ith filter pixel value corresponding to the target pixel according to values of all filter elements in the ith direction filter kernel, original pixel values of the pixel points in the target area, the number of effective filter elements in the ith direction filter kernel and a preset filter pixel value calculation formula.
The preset filtering pixel value calculation formula is as follows:

Q_i(centerX, centerY) = ( Σ_{k=-r}^{r} Σ_{l=-r}^{r} W_i(centerX+k, centerY+l) · P(centerX+k, centerY+l) ) / N_i

wherein Q_i(centerX, centerY) represents the i-th filtered pixel value corresponding to the target pixel point; (centerX, centerY) represents the corresponding target position of the filtering center of the direction filtering kernel in the two-dimensional depth map, namely the position of the target pixel point in the two-dimensional depth map; (centerX+k, centerY+l) represents the position of each filtering element in the direction filtering kernel, namely the position of a pixel point in the target area; W_i(centerX+k, centerY+l) represents the value of the filtering element located at (centerX+k, centerY+l) in the i-th direction filtering kernel; P(centerX+k, centerY+l) represents the original pixel value of the pixel point located at (centerX+k, centerY+l) in the target area; r represents the filter radius of the i-th direction filtering kernel, with |k| ≤ r and |l| ≤ r, where |k| represents the horizontal distance and |l| the vertical distance between the filtering element and the filtering center; N_i represents the number of effective filtering elements in the i-th direction filtering kernel; and 1 ≤ i ≤ n, where n represents the number of direction filtering kernels.
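A minimal sketch of this calculation in Python follows: weighted accumulation of W_i · P over the target area, divided by the number of effective filtering elements. Skipping out-of-range elements at the image border is an assumption, since border handling is not specified above.

```python
import numpy as np

def filtered_pixel_value(depth, kernel, cx, cy):
    """Compute the filtered pixel value Q_i(cx, cy) for one target pixel."""
    r = kernel.shape[0] // 2
    acc, n_eff = 0.0, 0
    for k in range(-r, r + 1):
        for l in range(-r, r + 1):
            x, y = cx + k, cy + l
            if 0 <= x < depth.shape[0] and 0 <= y < depth.shape[1]:
                w = kernel[k + r, l + r]     # W_i(centerX+k, centerY+l)
                acc += w * depth[x, y]       # accumulate W_i * P
                n_eff += int(w == 1)         # count in-range effective elements
    return acc / n_eff
```

On a flat depth map any 0/1 kernel returns the flat value unchanged, which is a quick sanity check on the normalisation.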
S1034: and moving the ith direction filtering kernel to repeatedly execute the steps until an ith filtering pixel value corresponding to each pixel point in the two-dimensional depth map is obtained.
And the point cloud optimizing equipment moves the ith direction filtering core to repeatedly execute the steps until the ith filtering pixel value corresponding to each pixel point in the two-dimensional depth map is obtained.
Specifically, the i-th filtering pixel value corresponding to the next target pixel point is calculated by changing the next target pixel point corresponding to the filtering center of the direction filtering kernel until the i-th filtering pixel value corresponding to each pixel point is obtained.
Since there are multiple direction filter kernels, there are multiple filter pixel values corresponding to each pixel point.
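The sweep of step S1034 — moving each kernel so that every pixel point in turn serves as the target pixel point — can be sketched as follows. Zero padding at the image border is an assumption not specified above.

```python
import numpy as np

def filter_all(depth, kernels):
    """Apply every direction filtering kernel at every pixel position,
    producing n filtered pixel values per pixel point."""
    r = kernels[0].shape[0] // 2
    padded = np.pad(depth, r)                # zero-pad so the kernel fits at borders
    H, W = depth.shape
    out = np.empty((len(kernels), H, W))
    for i, k in enumerate(kernels):
        n_eff = k.sum()                      # number of effective elements in kernel i
        for x in range(H):
            for y in range(W):
                window = padded[x:x + 2 * r + 1, y:y + 2 * r + 1]
                out[i, x, y] = (window * k).sum() / n_eff
    return out
```

The result is a stack of n filtered depth maps, one per direction filtering kernel, from which the per-pixel selection of step S104 picks a single value.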
S104: and replacing the original pixel value of each pixel point with a filtering pixel value with the smallest difference value between the original pixel value of the pixel point and the original pixel value of the pixel point, and obtaining the optimized two-dimensional depth map.
The point cloud optimizing device obtains, among all the filtered pixel values of each pixel point, the filtered pixel value with the smallest difference value from the original pixel value of the pixel point, and replaces the original pixel value of the pixel point with that filtered pixel value, thereby obtaining the optimized two-dimensional depth map.
Specifically, the point cloud optimizing device performs the following steps to obtain, for each pixel point, the filtered pixel value with the smallest difference value from the original pixel value.
The first step: initializing, let i = 1, diff_min = +∞, Q_min(centerX, centerY) = +∞;
The second step: acquiring the i-th filtered pixel value Q_i(centerX, centerY) corresponding to the pixel point at position (centerX, centerY);
The third step: if |Q_i(centerX, centerY) − P(centerX, centerY)| < diff_min, then let diff_min = |Q_i(centerX, centerY) − P(centerX, centerY)| and Q_min(centerX, centerY) = Q_i(centerX, centerY);
The fourth step: let i = i + 1; if i ≤ n, jump back to the second step, otherwise output Q_min(centerX, centerY);
The fifth step: replacing the original pixel value of the pixel point at position (centerX, centerY) with Q_min(centerX, centerY).
And the point cloud optimizing equipment repeatedly executes the steps until the filtered pixel value with the minimum difference value between the pixel points at each position and the original pixel value is obtained, and the replacement operation is completed, so that the optimized two-dimensional depth map is finally obtained.
In the second step above, the i-th filtered pixel value Q_i(centerX, centerY) corresponding to the pixel point at position (centerX, centerY) may either be calculated at the moment the second step is executed, or be calculated in advance by other threads and directly fetched when the second step is executed. Both implementations fall within the scope of the present application; relatively speaking, completing the calculation of Q_i(centerX, centerY) in other threads and directly fetching the result in the second step is the more efficient mode.
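The per-pixel selection loop above can also be sketched in vectorised form. Here `filtered_stack` is assumed to hold the n filtered depth maps, one per direction filtering kernel.

```python
import numpy as np

def select_min_diff(depth, filtered_stack):
    """For each pixel point, keep the filtered pixel value whose difference
    from the original pixel value is smallest."""
    diffs = np.abs(filtered_stack - depth[None, :, :])   # |Q_i - P| per kernel
    best_idx = np.argmin(diffs, axis=0)                  # index of Q_min per pixel
    return np.take_along_axis(filtered_stack, best_idx[None, :, :], axis=0)[0]
```

This computes all Q_i up front (the "other threads" variant discussed above maps naturally onto this precomputed stack) and then selects Q_min per pixel in one pass.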
According to the embodiments of the present application, the three-dimensional point cloud corresponding to the target object is projected onto the camera imaging plane, realizing dimension-reduction processing of the point cloud data and obtaining the two-dimensional depth map corresponding to the three-dimensional point cloud, so that the real-time performance of subsequent point cloud optimization can be ensured. Then, in view of the differences in image edge forms, a plurality of direction filtering kernels are preset so that the filtering boundaries of the different direction filtering kernels fully reflect image edges of different forms. Convolution filtering is then performed on the two-dimensional depth map with each preset direction filtering kernel to obtain a plurality of filtered pixel values corresponding to each pixel point, and the original pixel value of each pixel point is replaced with the filtered pixel value having the smallest difference from that original pixel value, obtaining the optimized two-dimensional depth map. This not only avoids complex parameter adjustment in conventional filtering and improves the robustness of the method, but also realizes smooth filtering of the point cloud and improves the accuracy of point cloud optimization without affecting the image edge forms.
In an alternative embodiment, referring to fig. 7, in order to perform smoothing processing on the point cloud and effectively remove noise pixels, step S104 further includes steps S1041 to S1043:
s1041: and obtaining a filtered pixel value with the smallest difference value with the original pixel value of the pixel point.
The point cloud optimization apparatus obtains a filtered pixel value having the smallest difference from the original pixel value of the pixel point, but does not directly perform the replacement.
S1042: and if the minimum difference value is not greater than the preset invalid difference value threshold value, replacing the original pixel value of the pixel point with a filtered pixel value with the minimum difference value between the original pixel value of the pixel point and the original pixel value of the pixel point, and obtaining the optimized two-dimensional depth map.
And the point cloud optimizing device judges whether the minimum difference value is not greater than the preset invalid difference value threshold, and if so, replaces the original pixel value of the pixel point with the filtered pixel value having the smallest difference value from the original pixel value of that pixel point.
In this embodiment of the present application, the preset invalid difference threshold is a threshold selected according to prior data, and may specifically be set according to actual situations, which is not limited herein.
S1043: and if the minimum difference value is larger than a preset invalid difference value threshold value, replacing the original pixel value of the pixel point with a null value.
If the minimum difference value is greater than the preset invalid difference value threshold, it indicates that the filtered pixel value obtained after filtering differs greatly from the original pixel value, and the probability that the pixel point is a noise pixel point is high. The point cloud optimizing device therefore replaces the original pixel value of the pixel point with a null value, indicating that the point has no depth value; accordingly, the pixel point is null in the optimized two-dimensional depth map.
In this embodiment, after obtaining the filtered pixel value with the smallest difference from the original pixel value of a pixel point, the point cloud optimizing device does not directly replace the original pixel value with the filtered pixel value, but first judges whether the minimum difference is greater than the preset invalid difference threshold. If so, the pixel point is judged to be a noise pixel point and is removed. This ensures that noise pixel points can be effectively removed while the point cloud is smoothed, which further improves the subsequent point cloud optimization effect.
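Steps S1042 and S1043 amount to a single thresholded replacement, sketched below. Using NaN as the null value is an assumption for illustration; the patent only requires that the pixel carry no depth value.

```python
import numpy as np

def threshold_replace(depth, best_filtered, invalid_diff_threshold):
    """Keep the filtered value when the minimum difference is within the
    invalid-difference threshold, otherwise mark the pixel as having no
    depth value (NaN as the null value)."""
    diff = np.abs(best_filtered - depth)
    return np.where(diff <= invalid_diff_threshold, best_filtered, np.nan)
```

The threshold itself is chosen from prior data, as noted above, so it is left as a free parameter here.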
Referring to fig. 8, fig. 8 is a schematic diagram illustrating comparison of optimized results of a two-dimensional depth map according to an embodiment of the present application. In fig. 8, the left side is the original two-dimensional depth map, and the right side is the optimized two-dimensional depth map. As can be seen from fig. 8, in the optimized two-dimensional depth map, the point cloud is smoother, and the noise pixels are effectively removed.
In another alternative embodiment, referring to fig. 9, after step S104 is performed, step S105 is further included to implement three-dimensional reconstruction of the target object:
s105: and converting the pixel value of the pixel point in the optimized two-dimensional depth map into the depth value of the corresponding pixel point in the three-dimensional point cloud to obtain the optimized three-dimensional point cloud corresponding to the target object.
And the point cloud optimizing equipment converts the pixel value of the pixel point in the optimized two-dimensional depth map into the depth value of the corresponding pixel point in the three-dimensional point cloud, so as to obtain the optimized three-dimensional point cloud corresponding to the target object.
The optimized three-dimensional point cloud can more accurately reflect the condition of the object surface, with noise points effectively removed; and since two-dimensional data are processed throughout the operation, the real-time requirement of the optimization can be met.
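The conversion back to a three-dimensional point cloud can be sketched with a pinhole camera model. The intrinsics fx, fy, cx, cy are assumed parameters introduced for illustration, since the projection model is not detailed here; pixels whose depth was replaced by a null value are dropped.

```python
import numpy as np

def depth_map_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project the optimised two-dimensional depth map to a 3-D point
    cloud, skipping pixels with no depth value (NaN)."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))   # pixel coordinates
    valid = ~np.isnan(depth)
    z = depth[valid]                                 # depth value per valid pixel
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=1)               # one row per 3-D point
```

The inverse of this mapping is the projection of step S102, so the pixel-to-point correspondence is preserved through the whole optimization.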
Because a PCB (printed circuit board) may develop certain defects during the production process, these defects can be detected through the point cloud of the circuit board. However, when the point cloud of the circuit board is acquired, smooth solder or components on the board cause specular reflection and generate a large number of noise point clouds, which may seriously affect the detection result. Therefore, in an alternative embodiment of the present application, the optimization is performed for the point cloud of a circuit board, and a point cloud optimization method for a circuit board is provided. Referring to fig. 10, the method includes steps S201 to S205, specifically as follows:
S201: and acquiring the three-dimensional point cloud corresponding to the target circuit board.
The target circuit board may be any type of circuit board, and the type thereof is not limited herein.
It should be emphasized that, when the three-dimensional point cloud corresponding to the target circuit board is acquired, the camera imaging plane is parallel to the plane where the target circuit board is located.
S202: projecting the three-dimensional point cloud to a camera imaging plane to obtain a two-dimensional depth map corresponding to the three-dimensional point cloud; the original pixel value of the pixel point in the two-dimensional depth map is the depth value of the corresponding pixel point in the three-dimensional point cloud.
S203: respectively carrying out convolution filtering operation on the two-dimensional depth map according to a plurality of preset direction filtering kernels to obtain a plurality of filtering pixel values corresponding to each pixel point in the two-dimensional depth map; wherein the filtering boundary of each direction filtering kernel represents the image edge of different forms respectively.
S204: and replacing the original pixel value of each pixel point with a filter pixel value with the smallest difference value between the original pixel value of the pixel point and the original pixel value of the pixel point to obtain an optimized two-dimensional depth map corresponding to the target circuit board.
S205: and converting the pixel value of the pixel point in the optimized two-dimensional depth map into the depth value of the corresponding pixel point in the three-dimensional point cloud, and obtaining the optimized three-dimensional point cloud corresponding to the target circuit board.
The execution subjects and explanations of steps S201 to S205 have been described in steps S101 to S105; the only difference is that the current target object is a circuit board, so the detailed process is not repeated. The point cloud optimization method for the circuit board can realize smooth processing of the point cloud, does not require setting complex parameters, and can handle outlier noise points with large deviations.
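As a compact end-to-end sketch, steps S203, S204, and the invalid-difference threshold of the earlier embodiment can be composed on an already-projected depth map. Zero padding at the border and NaN as the null value are assumptions carried over from the sketches above.

```python
import numpy as np

def optimize_depth_map(depth, kernels, invalid_diff=np.inf):
    """Filter the depth map with every direction kernel, keep per pixel the
    filtered value closest to the original, then null out pixels whose
    minimum difference exceeds the invalid-difference threshold."""
    r = kernels[0].shape[0] // 2
    padded = np.pad(depth, r)
    H, W = depth.shape
    stack = np.empty((len(kernels), H, W))
    for i, k in enumerate(kernels):
        n_eff = k.sum()
        for x in range(H):
            for y in range(W):
                window = padded[x:x + 2 * r + 1, y:y + 2 * r + 1]
                stack[i, x, y] = (window * k).sum() / n_eff
    diffs = np.abs(stack - depth[None, :, :])
    best = np.take_along_axis(stack, np.argmin(diffs, axis=0)[None], axis=0)[0]
    return np.where(np.abs(best - depth) <= invalid_diff, best, np.nan)
```

With `invalid_diff` left at its default, the function reduces to plain minimum-difference replacement (S204); a finite threshold additionally removes noise pixel points (S1043).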
Referring to fig. 11, fig. 11 is a schematic diagram illustrating a comparison of the point cloud optimization result of the circuit board according to an embodiment of the present application. The upper two graphs in fig. 11 are an original circuit board point cloud graph and a circuit board point cloud graph after bilateral filtering, and the lower one in fig. 11 is a point cloud graph of a circuit board obtained by applying the point cloud optimization method of the circuit board provided by the embodiment of the application. As can be seen from fig. 11, when the point cloud optimization method for the circuit board provided by the embodiment of the present application performs the point cloud optimization for the circuit board, noise points in the area 1 are removed, and the point cloud in the area 2 can better reflect the actual situation of the circuit board.
In an alternative embodiment, after the step S205 is performed, the point cloud optimizing device may further reconstruct a three-dimensional image of the target circuit board according to the optimized three-dimensional point cloud, and perform defect detection on the target circuit board.
The point cloud optimizing equipment reconstructs a three-dimensional image of the target circuit board according to the optimized three-dimensional point cloud, so that the defect detection of the target circuit board is realized, the detection accuracy is improved, the detection speed is ensured, and the requirements of high-speed and high-precision defect detection on a circuit board production line can be met.
Fig. 12 is a schematic structural diagram of a point cloud optimizing apparatus according to an embodiment of the present application. The apparatus may be implemented as all or part of a point cloud optimization device by software, hardware, or a combination of both. The apparatus 12 includes a first point cloud acquisition unit 121, a first projection unit 122, a first filtering unit 123, and a first optimization unit 124:
a first point cloud obtaining unit 121, configured to obtain a three-dimensional point cloud corresponding to a target object;
the first projection unit 122 is configured to project the three-dimensional point cloud onto a camera imaging plane, so as to obtain a two-dimensional depth map corresponding to the three-dimensional point cloud; the original pixel value of the pixel point in the two-dimensional depth map is the depth value of the corresponding pixel point in the three-dimensional point cloud;
the first filtering unit 123 is configured to perform convolution filtering operation on the two-dimensional depth map according to a plurality of preset direction filtering kernels, so as to obtain a plurality of filtered pixel values corresponding to each pixel point in the two-dimensional depth map; wherein, the filtering boundary of each direction filtering kernel respectively represents the image edges of different forms;
The first optimizing unit 124 is configured to replace the original pixel value of each pixel point with the filtered pixel value having the smallest difference value from the original pixel value of that pixel point, so as to obtain the optimized two-dimensional depth map.
According to the embodiments of the present application, the three-dimensional point cloud corresponding to the target object is projected onto the camera imaging plane, realizing dimension-reduction processing of the point cloud data and obtaining the two-dimensional depth map corresponding to the three-dimensional point cloud, so that the real-time performance of subsequent point cloud optimization can be ensured. Then, in view of the differences in image edge forms, a plurality of direction filtering kernels are preset so that the filtering boundaries of the different direction filtering kernels fully reflect image edges of different forms. Convolution filtering is then performed on the two-dimensional depth map with each preset direction filtering kernel to obtain a plurality of filtered pixel values corresponding to each pixel point, and the original pixel value of each pixel point is replaced with the filtered pixel value having the smallest difference from that original pixel value, obtaining the optimized two-dimensional depth map. This not only avoids complex parameter adjustment in conventional filtering and improves the robustness of the method, but also realizes smooth filtering of the point cloud and improves the accuracy of point cloud optimization without affecting the image edge forms.
Optionally, the device 12 further comprises:
a filter kernel generating unit, configured to divide an effective filter area and an ineffective filter area in the direction filter kernel based on different forms of the image edge, and generate a plurality of the direction filter kernels; the dividing boundary of the effective filtering area and the ineffective filtering area is the filtering boundary.
Optionally, the filter kernel generating unit includes:
the first filter kernel generation unit is used for dividing an effective filter area and an ineffective filter area in the direction filter kernel in the vertical direction according to the vertical form of the image edge to generate a direction filter kernel comprising a vertical filter boundary;
a second filter kernel generating unit, configured to divide an effective filter area and an ineffective filter area in the direction filter kernel in a horizontal direction according to a horizontal shape of the image edge, and generate a direction filter kernel including a horizontal filter boundary;
a third filter kernel generating unit for dividing an effective filter region and an ineffective filter region in the direction filter kernel in an oblique direction according to the oblique form of the image edge, and generating a direction filter kernel including an oblique filter boundary;
and the fourth filter kernel generating unit is used for dividing an effective filter area and an ineffective filter area in the direction filter kernel by changing the direction according to the corner form of the image edge to generate a direction filter kernel comprising a corner filter boundary.
Optionally, the first filtering unit 123 includes:
the first acquisition unit is used for obtaining a target pixel point corresponding to the filtering center according to the corresponding target position of the filtering center of the i-th direction filtering kernel in the two-dimensional depth map;
the second acquisition unit is used for acquiring a target area covered by the direction filter kernel in the two-dimensional depth map according to the target position and the filter radius of the ith direction filter kernel;
a third obtaining unit, configured to obtain an ith filtering pixel value corresponding to the target pixel point based on the value of each filtering element in the ith direction filtering core, the original pixel value of the pixel point in the target area, and the number of effective filtering elements in the ith direction filtering core;
and a fourth obtaining unit, configured to move the ith direction filtering kernel to repeatedly perform the above steps until an ith filtering pixel value corresponding to each pixel point in the two-dimensional depth map is obtained.
Optionally, the first optimizing unit 124 includes:
a fifth obtaining unit, configured to obtain a filtered pixel value with a smallest difference value from an original pixel value of the pixel point;
and the third optimizing unit is used for replacing, if the minimum difference value is not greater than the preset invalid difference value threshold, the original pixel value of the pixel point with the filtered pixel value having the smallest difference value from the original pixel value of that pixel point, so as to obtain the optimized two-dimensional depth map.
Optionally, the first optimizing unit 124 further includes:
and the replacing unit is used for replacing the original pixel value of the pixel point with a null value if the minimum difference value is larger than a preset invalid difference value threshold.
Optionally, the device 12 further comprises:
and the conversion unit is used for converting the pixel value of the pixel point in the optimized two-dimensional depth map into the depth value of the corresponding pixel point in the three-dimensional point cloud, so as to obtain the optimized three-dimensional point cloud corresponding to the target object.
Fig. 13 is a schematic structural diagram of a point cloud optimizing apparatus for a circuit board according to an embodiment of the present application. The apparatus may be implemented as all or part of a point cloud optimization device by software, hardware, or a combination of both. The apparatus 13 includes a second point cloud acquisition unit 131, a second projection unit 132, a second filtering unit 133, a second optimizing unit 134, and a third point cloud acquisition unit 135:
a second point cloud obtaining unit 131, configured to obtain a three-dimensional point cloud corresponding to the target circuit board;
the second projection unit 132 is configured to project the three-dimensional point cloud onto a camera imaging plane, so as to obtain a two-dimensional depth map corresponding to the three-dimensional point cloud; the original pixel value of the pixel point in the two-dimensional depth map is the depth value of the corresponding pixel point in the three-dimensional point cloud;
The second filtering unit 133 is configured to perform convolution filtering operation on the two-dimensional depth map according to a plurality of preset direction filtering kernels, so as to obtain a plurality of filtered pixel values corresponding to each pixel point in the two-dimensional depth map; wherein, the filtering boundary of each direction filtering kernel respectively represents the image edges of different forms;
the second optimizing unit 134 is configured to replace the original pixel value of each pixel point with the filtered pixel value having the smallest difference value from the original pixel value of that pixel point, so as to obtain the optimized two-dimensional depth map corresponding to the target circuit board;
and the third point cloud obtaining unit 135 is configured to convert the pixel value of the pixel point in the optimized two-dimensional depth map into the depth value of the corresponding pixel point in the three-dimensional point cloud, so as to obtain an optimized three-dimensional point cloud corresponding to the target circuit board.
According to the embodiments of the present application, the three-dimensional point cloud corresponding to the circuit board is projected onto the camera imaging plane, realizing dimension-reduction processing of the point cloud data and obtaining the two-dimensional depth map corresponding to the three-dimensional point cloud, so that the real-time performance of subsequent point cloud optimization can be ensured. Then, in view of the differences in image edge forms, a plurality of direction filtering kernels are preset so that the filtering boundaries of the different direction filtering kernels fully reflect image edges of different forms. Convolution filtering is then performed on the two-dimensional depth map with each preset direction filtering kernel to obtain a plurality of filtered pixel values corresponding to each pixel point; the original pixel value of each pixel point is replaced with the filtered pixel value having the smallest difference from that original pixel value to obtain the optimized two-dimensional depth map; and the pixel values of the pixel points in the optimized two-dimensional depth map are converted into the depth values of the corresponding points in the three-dimensional point cloud, obtaining the optimized three-dimensional point cloud corresponding to the target circuit board. This not only avoids complex parameter adjustment in conventional filtering and improves the robustness of the method, but also realizes filtering of the circuit board point cloud and improves the accuracy of circuit board point cloud optimization without affecting the image edge forms.
Fig. 14 is a schematic structural diagram of a point cloud optimizing apparatus according to an embodiment of the present application. As shown in fig. 14, the point cloud optimization apparatus 14 may include: a processor 140, a memory 141, and a computer program 142 stored in the memory 141 and executable on the processor 140, such as: a point cloud optimization program or a point cloud optimization program for a circuit board. When executing the computer program 142, the processor 140 implements the steps of the method embodiments described above, such as steps S101 to S104 shown in fig. 1. Alternatively, when executing the computer program 142, the processor 140 performs the functions of the modules/units of the apparatus embodiments described above, such as the functions of the modules 121 to 124 shown in fig. 12, or the functions of the modules 131 to 135 shown in fig. 13.
The processor 140 may include one or more processing cores. The processor 140 connects the various parts within the point cloud optimization device 14 using various interfaces and lines, and performs the various functions of the point cloud optimization device 14 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 141 and invoking the data in the memory 141. Alternatively, the processor 140 may be implemented in at least one hardware form of digital signal processing (Digital Signal Processing, DSP), field-programmable gate array (Field-Programmable Gate Array, FPGA), or programmable logic array (Programmable Logic Array, PLA). The processor 140 may integrate one or a combination of a central processing unit (Central Processing Unit, CPU), a graphics processing unit (Graphics Processing Unit, GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, applications, and the like; the GPU is responsible for rendering and drawing the content to be displayed by the touch display screen; and the modem is used to handle wireless communication. It will be appreciated that the modem may also not be integrated into the processor 140 and may instead be implemented by a single chip.
The Memory 141 may include a random access Memory (Random Access Memory, RAM) or a Read-Only Memory (Read-Only Memory). Optionally, the memory 141 includes a non-transitory computer readable medium (non-transitory computer-readable storage medium). Memory 141 may be used to store instructions, programs, code, sets of codes, or sets of instructions. The memory 141 may include a stored program area and a stored data area, wherein the stored program area may store instructions for implementing an operating system, instructions for at least one function (such as touch instructions, etc.), instructions for implementing the various method embodiments described above, etc.; the storage data area may store data or the like referred to in the above respective method embodiments. Memory 141 may also optionally be at least one storage device located remotely from the aforementioned processor 140.
The embodiments of the present application further provide a computer storage medium storing a plurality of instructions adapted to be loaded and executed by a processor to perform the method steps of the embodiments shown in fig. 1, fig. 6, fig. 7, and fig. 9 or fig. 10; for the specific execution process, reference may be made to the specific descriptions of those embodiments, which are not repeated here.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of functional units and modules is illustrated as an example. In practical applications, the above functions may be allocated to different functional units and modules as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above. The functional units and modules in the embodiments may be integrated in one processing unit, each unit may exist alone physically, or two or more units may be integrated in one unit; the integrated units may be implemented in the form of hardware or in the form of software functional units. In addition, the specific names of the functional units and modules are only for the convenience of distinguishing them from each other and are not intended to limit the protection scope of the present application. For the specific working process of the units and modules in the above system, reference may be made to the corresponding process in the foregoing method embodiments, which is not described again here.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts not described or illustrated in detail in a particular embodiment, reference may be made to the related descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or as a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided by the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other manners. For example, the apparatus/terminal device embodiments described above are merely illustrative; the division of the modules or units is merely a division by logical function, and other divisions are possible in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections via interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
If implemented in the form of a software functional unit and sold or used as a stand-alone product, the integrated module/unit may be stored in a computer-readable storage medium. Based on this understanding, the present invention may implement all or part of the flow of the methods of the above embodiments by instructing the related hardware through a computer program; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like.
The present invention is not limited to the above-described embodiments; any modifications or variations that do not depart from the spirit and scope of the present invention are intended to fall within the scope of the claims and their equivalents.

Claims (12)

1. The point cloud optimization method is characterized by comprising the following steps of:
acquiring a three-dimensional point cloud corresponding to a target object;
projecting the three-dimensional point cloud to a camera imaging plane to obtain a two-dimensional depth map corresponding to the three-dimensional point cloud; the original pixel value of the pixel point in the two-dimensional depth map is the depth value of the corresponding pixel point in the three-dimensional point cloud;
respectively carrying out convolution filtering operation on the two-dimensional depth map according to a plurality of preset direction filtering kernels to obtain a plurality of filtering pixel values corresponding to each pixel point in the two-dimensional depth map; the filtering boundary of each direction filtering kernel respectively represents the image edges of different forms; each direction filter kernel comprises a plurality of filter elements, and the plurality of filter pixel values corresponding to each pixel point are average values obtained by weighting the pixel values of the pixel points in the coverage area of the plurality of direction filter kernels and the values of the corresponding filter elements;
and replacing the original pixel value of each pixel point with the filtered pixel value having the smallest difference from that original pixel value, to obtain the optimized two-dimensional depth map.
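The filtering-and-replacement procedure of claim 1 can be illustrated with a minimal NumPy sketch. The function name, the binary 0/1 kernels, the edge-padding border policy, and the brute-force loops are illustrative assumptions, not the patented implementation:

```python
import numpy as np

def directional_min_diff_filter(depth, kernels):
    """For every pixel, compute one filtered value per direction kernel
    (mean of the depth values under the kernel's effective elements),
    then keep the filtered value closest to the original pixel value."""
    h, w = depth.shape
    candidates = []
    for k in kernels:
        kh, kw = k.shape
        padded = np.pad(depth, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
        out = np.empty((h, w))
        n_eff = np.count_nonzero(k)  # number of effective filter elements
        for i in range(h):
            for j in range(w):
                window = padded[i:i + kh, j:j + kw]  # area covered by the kernel
                out[i, j] = (window * k).sum() / n_eff
        candidates.append(out)
    cand = np.stack(candidates)        # (n_kernels, h, w) candidate filtered values
    diffs = np.abs(cand - depth)       # difference to the original pixel values
    best = np.argmin(diffs, axis=0)    # index of the kernel whose result is closest
    return np.take_along_axis(cand, best[None], axis=0)[0]
```

Because each filtered value is a local average constrained to one side of an assumed edge, picking the candidate closest to the original value smooths noise while preserving the edge the best-matching kernel models.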
2. The method according to claim 1, wherein before the convolution filtering operation is performed on the two-dimensional depth map according to a plurality of preset direction filtering kernels, the method comprises the steps of:
dividing an effective filtering area and an ineffective filtering area in the direction filtering kernels based on different forms of the image edge to generate a plurality of direction filtering kernels; the dividing boundary of the effective filtering area and the ineffective filtering area is the filtering boundary.
3. The method of optimizing a point cloud as recited in claim 2, wherein the image edge forms include a vertical form, a horizontal form, a tilted form, and a corner form,
dividing an effective filtering area and an ineffective filtering area in the direction filtering kernel based on different forms of the image edge to generate a plurality of direction filtering kernels, wherein the method comprises the following steps:
dividing an effective filtering area and an ineffective filtering area in the direction filtering kernel in the vertical direction according to the vertical form of the image edge to generate a direction filtering kernel comprising a vertical filtering boundary;
dividing an effective filtering area and an ineffective filtering area in the direction filtering kernel in the horizontal direction according to the horizontal form of the image edge to generate a direction filtering kernel comprising a horizontal filtering boundary;
dividing an effective filtering area and an ineffective filtering area in the direction filtering kernel in an inclined direction according to the inclined form of the image edge to generate a direction filtering kernel comprising an inclined filtering boundary;
dividing an effective filtering area and an ineffective filtering area in the direction filtering kernel by changing the direction according to the corner shape of the image edge, and generating a direction filtering kernel comprising a corner filtering boundary.
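As an illustration of claims 2 and 3, binary direction kernels can be generated by partitioning a square window into an effective region (1) and an ineffective region (0), with the partition boundary modeling each edge form; the specific partition choices below are assumptions for illustration only:

```python
import numpy as np

def make_directional_kernels(size=5):
    """Build binary direction kernels by splitting a size x size window
    into an effective region (1) and an ineffective region (0); the
    dividing boundary models a vertical, horizontal, tilted, or corner edge."""
    c = size // 2
    idx = np.arange(size)
    rows, cols = np.meshgrid(idx, idx, indexing="ij")

    vertical = (cols <= c).astype(float)                 # vertical filtering boundary
    horizontal = (rows <= c).astype(float)               # horizontal filtering boundary
    tilted = (rows >= cols).astype(float)                # 45-degree tilted boundary
    corner = ((rows <= c) | (cols <= c)).astype(float)   # L-shaped corner boundary
    return [vertical, horizontal, tilted, corner]
```

Each kernel averages only over pixels on one side of its boundary, so at least one kernel in the set avoids mixing depths across an edge of the corresponding form.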
4. The method for optimizing point cloud according to claim 1, wherein the step of performing convolution filtering operation on the two-dimensional depth map according to a plurality of preset direction filtering kernels to obtain a plurality of filtered pixel values corresponding to each pixel point in the two-dimensional depth map includes the steps of:
obtaining a target pixel point corresponding to the filter center according to the corresponding target position of the filter center of the ith direction filter kernel in the two-dimensional depth map;
obtaining a target area covered by the direction filter kernel in the two-dimensional depth map according to the target position and the filter radius of the ith direction filter kernel;
acquiring an ith filter pixel value corresponding to the target pixel point based on values of all filter elements in the ith direction filter kernel, original pixel values of the pixel points in the target area and the number of effective filter elements in the ith direction filter kernel;
and moving the ith direction filtering core to repeatedly execute the steps until an ith filtering pixel value corresponding to each pixel point in the two-dimensional depth map is obtained.
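The per-pixel steps of claim 4 — placing the filter centre on a target pixel, cutting out the covered target area via the filter radius, and dividing the weighted sum by the number of effective filter elements — might be sketched for a single position as follows (edge padding is an assumed border policy; the kernel is assumed square with odd size):

```python
import numpy as np

def filter_value_at(depth, kernel, i, j):
    """Filtered value for the target pixel (i, j): place the filter centre
    on the pixel, cut the covered target area out of the depth map, and
    divide the weighted sum by the number of effective filter elements."""
    r = kernel.shape[0] // 2                          # filter radius
    padded = np.pad(depth, r, mode="edge")            # handle image borders
    area = padded[i:i + 2 * r + 1, j:j + 2 * r + 1]   # target area covered by the kernel
    n_eff = np.count_nonzero(kernel)                  # effective filter elements
    return float((area * kernel).sum() / n_eff)
```

Sliding the kernel over every pixel position and repeating this computation yields the ith filtered pixel value for the whole two-dimensional depth map.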
5. The method of optimizing a point cloud according to claim 1, wherein replacing the original pixel value of each pixel point with a filtered pixel value having a smallest difference value from the original pixel value of the pixel point, to obtain an optimized two-dimensional depth map, comprises the steps of:
acquiring, for each pixel point, the filtered pixel value having the smallest difference from the original pixel value of the pixel point, and the corresponding minimum difference value;
and if the minimum difference value is not greater than a preset invalid-difference threshold, replacing the original pixel value of the pixel point with the filtered pixel value having the smallest difference from that original pixel value, to obtain the optimized two-dimensional depth map.
6. The point cloud optimization method as claimed in claim 5, further comprising the steps of:
and if the minimum difference value is greater than the preset invalid-difference threshold, replacing the original pixel value of the pixel point with a null value.
7. The method of optimizing a point cloud according to claim 1, further comprising, after the obtaining the optimized two-dimensional depth map, the steps of:
and converting the pixel value of the pixel point in the optimized two-dimensional depth map into the depth value of the corresponding pixel point in the three-dimensional point cloud to obtain the optimized three-dimensional point cloud corresponding to the target object.
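The projection onto the camera imaging plane and the conversion back into a three-dimensional point cloud (claims 1 and 7) follow a standard pinhole camera model. The sketch below assumes known intrinsics fx, fy, cx, cy, keeps one depth value per pixel, and uses NaN for empty pixels; these are illustrative assumptions, not details fixed by the claims:

```python
import numpy as np

def project_to_depth_map(points, fx, fy, cx, cy, h, w):
    """Project 3-D points (N, 3) onto the imaging plane; the pixel value
    is the depth (z) of the corresponding point."""
    depth = np.full((h, w), np.nan)
    for x, y, z in points:
        u = int(round(fx * x / z + cx))   # column on the imaging plane
        v = int(round(fy * y / z + cy))   # row on the imaging plane
        if 0 <= v < h and 0 <= u < w:
            depth[v, u] = z
    return depth

def back_project(depth, fx, fy, cx, cy):
    """Convert optimized depth-map pixels back into 3-D points."""
    v, u = np.nonzero(~np.isnan(depth))
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.column_stack([x, y, z])
```

Because the optimization operates entirely on the 2-D depth map, the back-projection step is what turns the filtered pixel values back into the optimized three-dimensional point cloud of the target object.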
8. The point cloud optimization method of the circuit board is characterized by comprising the following steps of:
acquiring a three-dimensional point cloud corresponding to a target circuit board;
projecting the three-dimensional point cloud to a camera imaging plane to obtain a two-dimensional depth map corresponding to the three-dimensional point cloud; the original pixel value of the pixel point in the two-dimensional depth map is the depth value of the corresponding pixel point in the three-dimensional point cloud;
respectively carrying out convolution filtering operation on the two-dimensional depth map according to a plurality of preset direction filtering kernels to obtain a plurality of filtering pixel values corresponding to each pixel point in the two-dimensional depth map; the filtering boundary of each direction filtering kernel respectively represents the image edges of different forms; each direction filter kernel comprises a plurality of filter elements, and the plurality of filter pixel values corresponding to each pixel point are average values obtained by weighting the pixel values of the pixel points in the coverage area of the plurality of direction filter kernels and the values of the corresponding filter elements;
replacing the original pixel value of each pixel point with the filtered pixel value having the smallest difference from that original pixel value, to obtain an optimized two-dimensional depth map corresponding to the target circuit board;
and converting the pixel value of the pixel point in the optimized two-dimensional depth map into the depth value of the corresponding pixel point in the three-dimensional point cloud to obtain the optimized three-dimensional point cloud corresponding to the target circuit board.
9. A point cloud optimization apparatus, comprising:
the first point cloud acquisition unit is used for acquiring a three-dimensional point cloud corresponding to the target object;
the first projection unit is used for projecting the three-dimensional point cloud to a camera imaging plane to obtain a two-dimensional depth map corresponding to the three-dimensional point cloud; the original pixel value of the pixel point in the two-dimensional depth map is the depth value of the corresponding pixel point in the three-dimensional point cloud;
the first filtering unit is used for respectively carrying out convolution filtering operation on the two-dimensional depth map according to a plurality of preset direction filtering kernels to obtain a plurality of filtering pixel values corresponding to each pixel point in the two-dimensional depth map; the filtering boundary of each direction filtering kernel respectively represents the image edges of different forms; each direction filter kernel comprises a plurality of filter elements, and the plurality of filter pixel values corresponding to each pixel point are average values obtained by weighting the pixel values of the pixel points in the coverage area of the plurality of direction filter kernels and the values of the corresponding filter elements;
and the first optimizing unit is used for replacing the original pixel value of each pixel point with the filtered pixel value having the smallest difference from that original pixel value, to obtain an optimized two-dimensional depth map.
10. A point cloud optimization apparatus for a circuit board, comprising:
the second point cloud acquisition unit is used for acquiring the three-dimensional point cloud corresponding to the target circuit board;
the second projection unit is used for projecting the three-dimensional point cloud to a camera imaging plane to obtain a two-dimensional depth map corresponding to the three-dimensional point cloud; the original pixel value of the pixel point in the two-dimensional depth map is the depth value of the corresponding pixel point in the three-dimensional point cloud;
the second filtering unit is used for respectively carrying out convolution filtering operation on the two-dimensional depth map according to a plurality of preset direction filtering kernels to obtain a plurality of filtering pixel values corresponding to each pixel point in the two-dimensional depth map; the filtering boundary of each direction filtering kernel respectively represents the image edges of different forms; each direction filter kernel comprises a plurality of filter elements, and the plurality of filter pixel values corresponding to each pixel point are average values obtained by weighting the pixel values of the pixel points in the coverage area of the plurality of direction filter kernels and the values of the corresponding filter elements;
the second optimizing unit is used for replacing the original pixel value of each pixel point with the filtered pixel value having the smallest difference from that original pixel value, to obtain an optimized two-dimensional depth map corresponding to the target circuit board;
and the third point cloud acquisition unit is used for converting the pixel value of the pixel point in the optimized two-dimensional depth image into the depth value of the corresponding pixel point in the three-dimensional point cloud to obtain the optimized three-dimensional point cloud corresponding to the target circuit board.
11. A point cloud optimization apparatus, comprising: a processor, a memory and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any one of claims 1 to 8 when the computer program is executed.
12. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the method according to any one of claims 1 to 8.
CN202011279945.3A 2020-11-16 2020-11-16 Point cloud optimization method, device and equipment Active CN112488910B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011279945.3A CN112488910B (en) 2020-11-16 2020-11-16 Point cloud optimization method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011279945.3A CN112488910B (en) 2020-11-16 2020-11-16 Point cloud optimization method, device and equipment

Publications (2)

Publication Number Publication Date
CN112488910A CN112488910A (en) 2021-03-12
CN112488910B true CN112488910B (en) 2024-02-13

Family

ID=74931111

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011279945.3A Active CN112488910B (en) 2020-11-16 2020-11-16 Point cloud optimization method, device and equipment

Country Status (1)

Country Link
CN (1) CN112488910B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113298761B (en) * 2021-05-07 2023-07-04 奥比中光科技集团股份有限公司 Image filtering method, device, terminal and computer readable storage medium
CN114066779B (en) * 2022-01-13 2022-05-06 杭州蓝芯科技有限公司 Depth map filtering method and device, electronic equipment and storage medium
CN114723796A (en) * 2022-04-24 2022-07-08 北京百度网讯科技有限公司 Three-dimensional point cloud generation method and device and electronic equipment
CN116527663B (en) * 2023-04-10 2024-04-26 北京城市网邻信息技术有限公司 Information processing method, information processing device, electronic equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110264425A (en) * 2019-06-21 2019-09-20 杭州一隅千象科技有限公司 Based on the separate unit TOF camera human body noise-reduction method and system for being angled downward direction
CN111583663A (en) * 2020-04-26 2020-08-25 宁波吉利汽车研究开发有限公司 Monocular perception correction method and device based on sparse point cloud and storage medium
CN111612806A (en) * 2020-01-10 2020-09-01 江西理工大学 Building facade window extraction method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Guide rail surface damage recognition based on point cloud depth-mapped color; Wang Zhenchun et al.; Chinese Journal of Lasers (10); pp. 1-9 *

Also Published As

Publication number Publication date
CN112488910A (en) 2021-03-12

Similar Documents

Publication Publication Date Title
CN112488910B (en) Point cloud optimization method, device and equipment
CN110349195B (en) Depth image-based target object 3D measurement parameter acquisition method and system and storage medium
CN110264573B (en) Three-dimensional reconstruction method and device based on structured light, terminal equipment and storage medium
WO2016068869A1 (en) Three dimensional object recognition
CN111080662A (en) Lane line extraction method and device and computer equipment
CN110619660A (en) Object positioning method and device, computer readable storage medium and robot
CN113962306A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN110942506A (en) Object surface texture reconstruction method, terminal device and system
CN115641337A (en) Linear defect detection method, device, medium, equipment and system
CN114581331A (en) Point cloud noise reduction method and device suitable for multiple scenes
AU2021229124B2 (en) Denoising for interactive monte-carlo rendering using pairwise affinity of deep features
CN116824070B (en) Real-time three-dimensional reconstruction method and system based on depth image
CN117115358A (en) Automatic digital person modeling method and device
CN114626118A (en) Building indoor model generation method and device
CN106910196B (en) Image detection method and device
CN114170367B (en) Method, apparatus, storage medium, and device for infinite-line-of-sight pyramidal heatmap rendering
CN116310060A (en) Method, device, equipment and storage medium for rendering data
CN115184362B (en) Rapid defect detection method based on structured light projection
WO2023060927A1 (en) 3d grating detection method and apparatus, computer device, and readable storage medium
KR101927861B1 (en) Method and apparatus for removing noise based on mathematical morphology from geometric data of 3d space
CN115861403A (en) Non-contact object volume measurement method and device, electronic equipment and medium
CN114841943A (en) Part detection method, device, equipment and storage medium
CN114494404A (en) Object volume measurement method, system, device and medium
CN112802175B (en) Large-scale scene shielding and eliminating method, device, equipment and storage medium
CN112539712A (en) Three-dimensional imaging method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant