CN109242809B - Point cloud filtering system and filtering method based on RGB-D information


Info

Publication number
CN109242809B
Authority
CN
China
Prior art keywords
image
rgb
point cloud
module
pixel
Prior art date
Legal status
Active
Application number
CN201811284681.3A
Other languages
Chinese (zh)
Other versions
CN109242809A (en)
Inventor
李晓风
许金林
谭海波
赵赫
Current Assignee
Anhui Zhongkezhilian Information Technology Co., Ltd.
Original Assignee
Anhui Zhongkezhilian Information Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Anhui Zhongkezhilian Information Technology Co., Ltd.
Priority to CN201811284681.3A
Publication of CN109242809A
Application granted
Publication of CN109242809B
Legal status: Active

Classifications

    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration by the use of local operators
    • G06T5/70
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85 Stereo camera calibration
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details
    • G06T2207/20172 Image enhancement details
    • G06T2207/20192 Edge enhancement; Edge preservation

Abstract

The invention relates to the technical field of point cloud filtering, and in particular to a point cloud filtering system and filtering method based on RGB-D information. The system comprises a calibration module, a mapping module, a conversion module, a first processing module, a patching module, a second processing module and a point cloud acquisition module. The method effectively obtains the filtered point cloud and has high efficiency.

Description

Point cloud filtering system and filtering method based on RGB-D information
Technical Field
The invention relates to the technical field of point cloud filtering, in particular to a point cloud filtering system and a filtering method based on RGB-D information.
Background
With the development of depth camera technology, cameras capable of capturing color images and depth images simultaneously have been widely used in acquisition systems for three-dimensional point cloud data. However, owing to the complexity of the depth imaging principle and interference from the external environment, the three-dimensional point cloud data obtained by a depth camera contain a large amount of noise, so filtering is a crucial link in any three-dimensional point cloud acquisition system.
Current filtering methods each achieve some effect against particular noise types. For non-isolated outliers, however, existing methods cannot obtain a good filtering result. Moreover, because they are applied directly to three-dimensional data, their efficiency is low, and with large volumes of point cloud data they struggle to reach the real-time performance required in the engineering field.
Disclosure of Invention
The invention provides a point cloud filtering system and filtering method based on RGB-D information, which overcome one or more of the above defects of the prior art.
According to the invention, a point cloud filtering system based on RGB-D information comprises:
the calibration module is used for calibrating the RGB camera and the depth camera of the depth camera device, obtaining an internal reference matrix of the RGB camera and a pose transformation matrix between the depth camera coordinate system and the RGB camera coordinate system;
the mapping module is used for mapping the RGB image onto the depth image or the depth image onto the RGB image, obtaining the correspondence between pixel points in the RGB image and points in the depth image according to the internal reference matrix and the pose transformation matrix, and generating an XYZ point cloud image;
the conversion module is used for converting the RGB image into an HSV image and extracting the V channel data to form a V image;
the first processing module is used for processing the V image to generate a binary image;
the patching module is used for patching the binary image to obtain a smoothed binary image;
the second processing module is used for processing the smoothed binary image to generate a filtered binary image;
and the point cloud acquisition module is used for extracting, at each position with pixel value 1 in the filtered binary image, the corresponding XYZ value from the XYZ point cloud image to obtain the filtered point cloud.
In this embodiment, the first processing module includes a segmentation module, where the segmentation module is configured to segment the V image to obtain a segmented binary image.
In this embodiment, the patching module includes a hole repairing module and a boundary smoothing module, where the hole repairing module is used to repair a hole of the segmented binary image, and the boundary smoothing module is used to smooth a boundary of the segmented binary image.
Through the cooperation of the calibration module, mapping module, conversion module, first processing module, patching module, second processing module and point cloud acquisition module, the system effectively obtains the filtered point cloud while avoiding a large amount of statistical calculation, so its efficiency is high. Filtering of the three-dimensional point cloud is transferred to the two-dimensional color image, achieving a dimension-reduction effect. The RGB image is converted into an HSV image and the segmentation threshold of the V channel is selected adaptively by the maximum between-class variance (Otsu) method, so no filtering parameter needs to be set and the subjective human factors of existing filtering methods are effectively avoided.
The invention also provides a point cloud filtering method based on RGB-D information, which comprises the following steps:
1. the calibration module calibrates the RGB camera and the depth camera of the depth camera device, obtaining an internal reference matrix of the RGB camera and a pose transformation matrix between the depth camera coordinate system and the RGB camera coordinate system;
2. the mapping module maps the RGB image onto the depth image or the depth image onto the RGB image, obtains the correspondence between pixel points in the RGB image and points in the depth image according to the internal reference matrix and the pose transformation matrix, and generates an XYZ point cloud image;
3. the conversion module converts the RGB image into an HSV image and extracts the V channel data to form a V image;
4. the first processing module processes the V image to generate a binary image;
5. the patching module patches the binary image to obtain a smoothed binary image;
6. the second processing module processes the smoothed binary image to generate a filtered binary image;
7. the point cloud acquisition module extracts, at each position with pixel value 1 in the filtered binary image, the corresponding XYZ value from the XYZ point cloud image to obtain the filtered point cloud.
Preferably, a planar checkerboard calibration plate is made, and the calibration module calibrates the depth camera using a planar calibration method.
Preferably, the mapping formula in the second step is:

z_c [u, v, 1]^T = I [R t] [x, y, z, 1]^T

wherein (u, v) is the coordinate of the corresponding point on the RGB image, (x, y, z) is the point in the depth camera coordinate system, z_c is the depth of the point in the color camera frame, I is the internal reference matrix of the color camera, and T = [R t] is the pose transformation matrix between the two coordinate systems.
Preferably, the V-channel data extraction formula in the third step is:

V = max(R, G, B),

S = (V − min(R, G, B)) / V when V ≠ 0, and S = 0 when V = 0,

H = 60·(G − B) / (V − min(R, G, B)) when V = R, H = 120 + 60·(B − R) / (V − min(R, G, B)) when V = G, and H = 240 + 60·(R − G) / (V − min(R, G, B)) when V = B;

the V image composition formula is:

p_i = n_i / N,

Σ_{i=0}^{L−1} p_i = 1, p_i ≥ 0,
wherein the gray levels of the V image are the L levels 0 to L−1, n_i is the number of times a pixel with gray level i appears in the V image, N is the total number of pixels of the V image, and p_i is the occurrence probability of a pixel with gray level i in the V image.
If the initial segmentation threshold is k, the V image can be divided by this threshold into two classes, C_0 = {0, …, k−1} and C_1 = {k, …, L−1}. Let ω_0 be the proportion of the pixels of the V image falling in C_0 and ω_1 the proportion falling in C_1; the formulas are:

ω_0 = Σ_{i=0}^{k−1} p_i = ω(k),

ω_1 = Σ_{i=k}^{L−1} p_i = 1 − ω(k).

The total average gray level μ of the V image is:

μ = Σ_{i=0}^{L−1} i·p_i.

The average gray level μ_0 of the pixels in C_0 is:

μ_0 = (Σ_{i=0}^{k−1} i·p_i) / ω_0 = μ(k) / ω(k),

and the average gray level μ_1 of the pixels in C_1 is:

μ_1 = (Σ_{i=k}^{L−1} i·p_i) / ω_1 = (μ − μ(k)) / (1 − ω(k)),

wherein

μ(k) = Σ_{i=0}^{k−1} i·p_i,

ω(k) = Σ_{i=0}^{k−1} p_i.

From these it follows that μ = ω_0·μ_0 + ω_1·μ_1.
The between-class variance σ²(k) of classes C_0 and C_1 is then:

σ²(k) = ω_0·(μ_0 − μ)² + ω_1·(μ_1 − μ)² = ω_0·ω_1·(μ_0 − μ_1)².

Letting k take each value from 0 to L−1 and computing the between-class variance σ²(k) for each, the k that maximizes σ²(k) is the required optimal threshold Vth.
Preferably, in the fourth step, the V image is segmented according to the threshold Vth: when a pixel value in the V image is greater than or equal to Vth, the pixel value at the corresponding position is updated to 1; when it is less than Vth, it is updated to 0. A V binary image is thus generated.
According to the invention, a depth camera is fixed on a mounting structure and a checkerboard calibration plate is made. The RGB camera and the depth camera of the depth camera device are calibrated using the Zhang Zhengyou calibration method to obtain the internal reference matrix of the RGB camera and the pose transformation matrix between the depth camera and the RGB color camera. Using this pose transformation matrix and the RGB camera's internal reference matrix, the RGB image is mapped onto the depth image to obtain a one-to-one correspondence between the RGB image and the XYZ depth image. The RGB image is then converted into an HSV image, and the V channel data of the HSV image are extracted to form a V image. The V image is segmented using the maximum between-class variance thresholding method, the segmented V binary image undergoes hole repair and boundary smoothing, and the smoothed V binary image is filtered again according to a set connected-domain pixel-count threshold to obtain the latest V binary image. Finally, the (x, y, z) values at the corresponding positions in the XYZ depth image are extracted at each position where the pixel value of the latest V binary image is 1, yielding the filtered three-dimensional point cloud.
Compared with the traditional point cloud filtering method, the method indirectly applies the two-dimensional RGB image filtering method to the three-dimensional point cloud filtering, avoids a great amount of statistical calculation in the three-dimensional space, and improves the filtering efficiency; the RGB image is converted into the HSV image, the numerical value of the V channel is extracted to construct the V image, the influence of factors such as illumination intensity on the filtering effect is avoided, and the robustness is improved.
Drawings
Fig. 1 is a block diagram of the point cloud filtering system based on RGB-D information in embodiment 1;
Fig. 2 is a flow chart of the point cloud filtering method based on RGB-D information in embodiment 1;
Fig. 3 is the point cloud with RGB before filtering in embodiment 1;
Fig. 4 is the RGB-mapped depth image in embodiment 1;
Fig. 5 is the binary image after color filtering in embodiment 1;
Fig. 6 is the RGB image after filtering in embodiment 1;
Fig. 7 is the filtered point cloud with RGB in embodiment 1;
Fig. 8 is the point cloud after filtering in embodiment 1.
Detailed Description
For a further understanding of the present invention, it is described in detail below with reference to the drawings and embodiments. It is to be understood that the embodiments are illustrative of the invention and are not intended to be limiting.
Example 1
As shown in fig. 1, the present embodiment provides a point cloud filtering system based on RGB-D information, which includes:
the calibration module is used for calibrating the RGB camera and the depth camera of the depth camera device, obtaining an internal reference matrix of the RGB camera and a pose transformation matrix between the depth camera coordinate system and the RGB camera coordinate system;
the mapping module is used for mapping the RGB image onto the depth image or the depth image onto the RGB image, obtaining the correspondence between pixel points in the RGB image and points in the depth image according to the internal reference matrix and the pose transformation matrix, and generating an XYZ point cloud image;
the conversion module is used for converting the RGB image into an HSV image and extracting the V channel data to form a V image;
the first processing module is used for processing the V image to generate a binary image;
the patching module is used for patching the binary image to obtain a smoothed binary image;
the second processing module is used for processing the smoothed binary image to generate a filtered binary image;
and the point cloud acquisition module is used for extracting, at each position with pixel value 1 in the filtered binary image, the corresponding XYZ value from the XYZ point cloud image to obtain the filtered point cloud.
In this embodiment, the first processing module includes a segmentation module, where the segmentation module is configured to segment the V image to obtain a segmented binary image.
In this embodiment, the patching module includes a hole repairing module and a boundary smoothing module, where the hole repairing module is used to repair a hole of the segmented binary image, and the boundary smoothing module is used to smooth a boundary of the segmented binary image.
As shown in fig. 2, the present embodiment further provides a point cloud filtering method based on RGB-D information, which includes the following steps:
1. the calibration module calibrates the RGB camera and the depth camera of the depth camera device, obtaining an internal reference matrix of the RGB camera and a pose transformation matrix between the depth camera coordinate system and the RGB camera coordinate system;
2. the mapping module maps the RGB image onto the depth image or the depth image onto the RGB image, obtains the correspondence between pixel points in the RGB image and points in the depth image according to the internal reference matrix and the pose transformation matrix, and generates an XYZ point cloud image;
3. the conversion module converts the RGB image into an HSV image and extracts the V channel data to form a V image;
4. the first processing module processes the V image to generate a binary image;
5. the patching module patches the binary image to obtain a smoothed binary image. Because the generated V binary image contains partial holes, hole repair must be performed on it; after hole repair, a V-repaired binary image is obtained. A morphological opening operation is then applied to the V-repaired binary image to smooth its boundary, yielding a V-smoothed binary image;
6. the second processing module processes the smoothed binary image to generate a filtered binary image. A pixel-count threshold Sth is set, the number of pixels Si in each connected domain of the V-smoothed binary image is counted, and when Si is smaller than Sth all pixel values in that connected domain are updated to 0, giving the filtered binary image Vnew;
7. the point cloud acquisition module extracts, at each position with pixel value 1 in the filtered binary image, the corresponding XYZ value from the XYZ point cloud image to obtain the filtered point cloud. Since none of the above operations destroys the spatial topology of the V image, the size of the V image corresponds to that of the RGB image; as established in step 2, the RGB image and the XYZ depth image are in one-to-one correspondence, so the V image, and hence the binary image Vnew generated in step 6, is also in one-to-one correspondence with the XYZ depth image. The binary image is therefore multiplied with the XYZ point cloud image: at every position where the pixel value of Vnew is 1, the (x, y, z) value at the same position of the XYZ depth image is extracted, and the result is the point cloud of the filtered object. Steps five to seven are sketched in the code below.
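A minimal sketch of steps five to seven follows, assuming OpenCV and NumPy. Here `V_binary` (the step-four binary image) and `cloud` (the XYZ point cloud image from step two) are produced by the sketches accompanying the later per-step paragraphs; `kernel` and `Sth` are placeholder choices, and the use of a morphological closing for hole repair is this sketch's assumption — the patent does not fix a particular repair operator.

```python
import cv2
import numpy as np

kernel = np.ones((5, 5), np.uint8)        # structuring element size is an assumption

# Step 5: repair holes (closing), then smooth the boundary (opening).
V_repaired = cv2.morphologyEx(V_binary, cv2.MORPH_CLOSE, kernel)
V_smoothed = cv2.morphologyEx(V_repaired, cv2.MORPH_OPEN, kernel)

# Step 6: zero out every connected domain with fewer than Sth pixels.
Sth = 200                                 # pixel-count threshold (assumption)
n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(V_smoothed)
V_new = np.zeros_like(V_smoothed)
for i in range(1, n_labels):              # label 0 is the background
    if stats[i, cv2.CC_STAT_AREA] >= Sth:
        V_new[labels == i] = 1

# Step 7: keep the (x, y, z) values wherever the filtered binary image is 1.
points = cloud[V_new == 1]                       # M x 3 candidate points
points = points[~np.isnan(points).any(axis=1)]   # drop unmapped positions
```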
In this embodiment, a planar checkerboard calibration plate is made and the calibration module calibrates the depth camera with a planar calibration method. First, the planar checkerboard calibration plate is manufactured and the Zhang Zhengyou planar calibration method is used to calibrate the depth camera, obtaining an accurate internal reference matrix I of the RGB camera and the [R t] pose transformation matrix between the depth camera coordinate system and the RGB camera coordinate system. This calibration process needs to be performed only once for a given depth camera device; owing to the stability of the camera hardware, these parameters do not change for a considerable period of time.
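As one concrete illustration, this calibration step can be realized with OpenCV's planar calibration routines. This is a minimal sketch under assumptions, not the patent's prescription: the board geometry, square size, image lists and image size are hypothetical parameters of the sketch.

```python
import cv2
import numpy as np

def calibrate_rgbd(rgb_views, ir_views, image_size, board=(9, 6), square=0.025):
    """Planar (Zhang Zhengyou) calibration of an RGB-D device from paired
    RGB / IR views of one checkerboard; board geometry is an assumption."""
    # 3D corner positions of the planar board (Z = 0), in metres.
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square

    obj_pts, rgb_pts, ir_pts = [], [], []
    for rgb, ir in zip(rgb_views, ir_views):
        gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
        ok1, c1 = cv2.findChessboardCorners(gray, board)
        ok2, c2 = cv2.findChessboardCorners(ir, board)
        if ok1 and ok2:
            obj_pts.append(objp); rgb_pts.append(c1); ir_pts.append(c2)

    # Internal reference matrix of each camera from the planar views.
    _, I_rgb, d_rgb, _, _ = cv2.calibrateCamera(obj_pts, rgb_pts, image_size, None, None)
    _, I_d, d_d, _, _ = cv2.calibrateCamera(obj_pts, ir_pts, image_size, None, None)

    # [R t] taking points from the depth-camera frame to the RGB-camera frame.
    _, _, _, _, _, R, t, _, _ = cv2.stereoCalibrate(
        obj_pts, ir_pts, rgb_pts, I_d, d_d, I_rgb, d_rgb, image_size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    return I_rgb, I_d, R, t
```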
In this embodiment, the mapping formula in the second step is:

z_c [u, v, 1]^T = I [R t] [x, y, z, 1]^T

wherein (u, v) is the coordinate of the corresponding point on the RGB image, (x, y, z) is the point in the depth camera coordinate system, z_c is the depth of the point in the color camera frame, I is the internal reference matrix of the color camera, and T = [R t] is the pose transformation matrix between the two coordinate systems. Since camera calibration yields the pose relation between the depth camera and RGB camera coordinate systems, it provides the bridge between the depth image information and the color image information. With this formula the RGB image can be mapped onto the depth image, or the depth image onto the RGB image, so the correspondence between pixel points in the RGB image and points in the depth image is obtained. Given any point in the RGB image coordinate system, the corresponding coordinate point of the depth image can then be found and the three-dimensional coordinate information of that point obtained.
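The mapping can be sketched with NumPy as follows. The function and all variable names (`depth` in metres, depth intrinsics `I_d`, RGB intrinsics `I_rgb`, and the `[R t]` transform from calibration) are this sketch's own, and equal RGB and depth image sizes are assumed for brevity.

```python
import numpy as np

def map_depth_to_rgb(depth, I_d, I_rgb, R, t):
    """Build the XYZ point cloud image on the RGB pixel grid, per
    z_c [u, v, 1]^T = I [R t] [x, y, z, 1]^T."""
    h, w = depth.shape                      # RGB image assumed same size (simplification)
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    keep = z > 0                            # drop pixels with no depth reading
    pix = np.stack([us.ravel()[keep], vs.ravel()[keep], np.ones(keep.sum())])
    # Back-project each depth pixel: X_d = z * inv(I_d) [u, v, 1]^T.
    X_d = np.linalg.inv(I_d) @ pix * z[keep]
    # Transform into the RGB camera frame and project with its intrinsics.
    X_rgb = R @ X_d + t.reshape(3, 1)
    uvw = I_rgb @ X_rgb
    u = np.round(uvw[0] / uvw[2]).astype(int)
    v = np.round(uvw[1] / uvw[2]).astype(int)
    cloud = np.full((h, w, 3), np.nan)      # XYZ point cloud image; NaN = no point
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    cloud[v[ok], u[ok]] = X_rgb[:, ok].T
    return cloud
```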
In this embodiment, the V-channel data extraction formula in the third step is:

V = max(R, G, B),

S = (V − min(R, G, B)) / V when V ≠ 0, and S = 0 when V = 0,

H = 60·(G − B) / (V − min(R, G, B)) when V = R, H = 120 + 60·(B − R) / (V − min(R, G, B)) when V = G, and H = 240 + 60·(R − G) / (V − min(R, G, B)) when V = B.

The RGB image is converted into an HSV image by these formulas, and the data of the V channel can then be extracted.
The V image composition formula is:

p_i = n_i / N,

Σ_{i=0}^{L−1} p_i = 1, p_i ≥ 0.
A V image is formed from the obtained V channel data, wherein the gray levels of the V image are the L levels 0 to L−1, n_i is the number of times a pixel with gray level i appears in the V image, N is the total number of pixels of the V image, and p_i is the occurrence probability of a pixel with gray level i in the V image, in the specific form given by the V image composition formula.
If the initial segmentation threshold is k, the V image can be divided by this threshold into two classes, C_0 = {0, …, k−1} and C_1 = {k, …, L−1}. Let ω_0 be the proportion of the pixels of the V image falling in C_0 and ω_1 the proportion falling in C_1; the formulas are:

ω_0 = Σ_{i=0}^{k−1} p_i = ω(k),

ω_1 = Σ_{i=k}^{L−1} p_i = 1 − ω(k).

The total average gray level μ of the V image is:

μ = Σ_{i=0}^{L−1} i·p_i.

The average gray level μ_0 of the pixels in C_0 is:

μ_0 = (Σ_{i=0}^{k−1} i·p_i) / ω_0 = μ(k) / ω(k),

and the average gray level μ_1 of the pixels in C_1 is:

μ_1 = (Σ_{i=k}^{L−1} i·p_i) / ω_1 = (μ − μ(k)) / (1 − ω(k)),

wherein

μ(k) = Σ_{i=0}^{k−1} i·p_i,

ω(k) = Σ_{i=0}^{k−1} p_i.

From these it follows that μ = ω_0·μ_0 + ω_1·μ_1.
The between-class variance σ²(k) of classes C_0 and C_1 is then:

σ²(k) = ω_0·(μ_0 − μ)² + ω_1·(μ_1 − μ)² = ω_0·ω_1·(μ_0 − μ_1)².

Letting k take each value from 0 to L−1 and computing the between-class variance σ²(k) for each, the k that maximizes σ²(k) is the required optimal threshold Vth.
In this embodiment, in the fourth step, the V image is segmented according to the threshold Vth: when a pixel value in the V image is greater than or equal to Vth, the pixel value at the corresponding position is updated to 1; when it is less than Vth, it is updated to 0. A V binary image is thus generated.
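Steps three and four can be sketched as below. The explicit search mirrors the σ²(k) derivation above; note that with the cumulative sums class C_0 runs over {0, …, k}, so the text's Vth corresponds to k* + 1 and binarization uses V > k*. OpenCV's cv2.threshold with cv2.THRESH_OTSU would return the same threshold in one call; `rgb_image` is a hypothetical H×W×3 uint8 array, not a name from the patent.

```python
import numpy as np

def otsu_threshold(V, L=256):
    """Return k* maximizing the between-class variance sigma^2(k)."""
    n = np.bincount(V.ravel(), minlength=L).astype(float)   # n_i
    p = n / n.sum()                                         # p_i = n_i / N
    omega0 = np.cumsum(p)                                   # omega_0 with C_0 = {0..k}
    mu_k = np.cumsum(np.arange(L) * p)                      # mu(k), cumulative moment
    mu = mu_k[-1]                                           # total average gray level
    # sigma^2(k) = (mu*omega0 - mu(k))^2 / (omega0*omega1), guarded at empty classes.
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma2 = np.nan_to_num((mu * omega0 - mu_k) ** 2 / (omega0 * (1.0 - omega0)))
    return int(np.argmax(sigma2))

V = rgb_image.max(axis=2)                 # step 3: V = max(R, G, B)
k_star = otsu_threshold(V)
V_binary = (V > k_star).astype(np.uint8)  # step 4: 1 where V >= Vth (= k* + 1)
```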
Figs. 3-8 show the effect of each step. In this embodiment, a depth camera is fixed on a mounting structure and a checkerboard calibration plate is made. The RGB camera and the depth camera of the depth camera device are calibrated using the Zhang Zhengyou calibration method to obtain the internal reference matrix of the RGB camera and the pose transformation matrix between the depth camera and the RGB color camera. Using this pose transformation matrix and the RGB camera's internal reference matrix, the RGB image is mapped onto the depth image to obtain a one-to-one correspondence between the RGB image and the XYZ depth image. The RGB image is then converted into an HSV image, the V channel data of the HSV image are extracted to form a V image, and the V image is segmented using the maximum between-class variance thresholding method. The segmented V binary image undergoes hole repair and boundary smoothing, and the smoothed V binary image is filtered again according to the set connected-domain pixel-count threshold to obtain the latest V binary image. Finally, the (x, y, z) values at the corresponding positions in the XYZ depth image are extracted at each position where the pixel value of the latest V binary image is 1, yielding the filtered three-dimensional point cloud.
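For orientation, chaining the earlier sketches gives the following hypothetical end-to-end run; every name comes from those sketches rather than from the patent, and the image size and input arrays are placeholders.

```python
# Calibrate once per device, then filter each frame.
I_rgb, I_d, R, t = calibrate_rgbd(rgb_views, ir_views, (640, 480))
cloud = map_depth_to_rgb(depth, I_d, I_rgb, R, t)    # steps 1-2: XYZ point cloud image
V = rgb_image.max(axis=2)                            # step 3: V image
V_binary = (V > otsu_threshold(V)).astype(np.uint8)  # step 4: adaptive binarization
# Steps 5-7 (hole repair, smoothing, area filtering, extraction) as in the earlier sketch.
```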
Compared with the traditional point cloud filtering method, the two-dimensional RGB image filtering method is indirectly applied to the three-dimensional point cloud filtering, so that a large number of statistical calculations are avoided in a three-dimensional space, and the filtering efficiency is improved; the RGB image is converted into the HSV image, the numerical value of the V channel is extracted to construct the V image, the influence of factors such as illumination intensity on the filtering effect is avoided, and the robustness is improved.
The invention and its embodiments have been described above by way of illustration, not limitation; what is shown in the accompanying drawings is only one embodiment of the invention, and the actual structure is not limited thereto. Therefore, structural modes and embodiments similar to the technical scheme, devised without creative effort by one of ordinary skill in the art informed by this disclosure and without departing from the gist of the invention, shall all fall within the scope of protection of the invention.

Claims (8)

1. A point cloud filtering system based on RGB-D information, characterized by comprising:
the calibration module, used for calibrating the RGB camera and the depth camera of the depth camera device, obtaining an internal reference matrix of the RGB camera and a pose transformation matrix between the depth camera coordinate system and the RGB camera coordinate system;
the mapping module, used for mapping the RGB image onto the depth image or the depth image onto the RGB image, obtaining the correspondence between pixel points in the RGB image and points in the depth image according to the internal reference matrix and the pose transformation matrix, and generating an XYZ point cloud image;
the conversion module, used for converting the RGB image into an HSV image and extracting the V channel data to form a V image;
the first processing module, used for processing the V image to generate a binary image;
the patching module, used for patching the binary image to obtain a smoothed binary image;
the second processing module, used for processing the smoothed binary image to generate a filtered binary image;
and the point cloud acquisition module, used for extracting, at each position with pixel value 1 in the filtered binary image, the corresponding XYZ value from the XYZ point cloud image to obtain the filtered point cloud.
2. A point cloud filtering system based on RGB-D information as claimed in claim 1, wherein: the first processing module comprises a segmentation module, and the segmentation module is used for segmenting the V image to obtain a segmented binary image.
3. A point cloud filtering system based on RGB-D information as claimed in claim 2, wherein: the patching module comprises a hole repair module and a boundary smoothing module, the hole repair module being used for repairing holes of the segmented binary image and the boundary smoothing module for smoothing the boundary of the segmented binary image.
4. A point cloud filtering method based on RGB-D information, characterized by comprising the following steps:
1. the calibration module calibrates the RGB camera and the depth camera of the depth camera device, obtaining an internal reference matrix of the RGB camera and a pose transformation matrix between the depth camera coordinate system and the RGB camera coordinate system;
2. the mapping module maps the RGB image onto the depth image or the depth image onto the RGB image, obtains the correspondence between pixel points in the RGB image and points in the depth image according to the internal reference matrix and the pose transformation matrix, and generates an XYZ point cloud image;
3. the conversion module converts the RGB image into an HSV image and extracts the V channel data to form a V image;
4. the first processing module processes the V image to generate a binary image;
5. the patching module patches the binary image to obtain a smoothed binary image;
6. the second processing module processes the smoothed binary image to generate a filtered binary image;
7. the point cloud acquisition module extracts, at each position with pixel value 1 in the filtered binary image, the corresponding XYZ value from the XYZ point cloud image to obtain the filtered point cloud.
5. The point cloud filtering method based on RGB-D information of claim 4, wherein: a planar checkerboard calibration plate is made, and the calibration module calibrates the depth camera using a planar calibration method.
6. The point cloud filtering method based on RGB-D information of claim 5, wherein: the mapping formula in the second step is:

z_c [u, v, 1]^T = I [R t] [x, y, z, 1]^T

wherein (u, v) is the coordinate of the corresponding point on the RGB image, (x, y, z) is the point in the depth camera coordinate system, z_c is the depth of the point in the color camera frame, I is the internal reference matrix of the color camera, and T = [R t] is the pose transformation matrix between the two coordinate systems.
7. The point cloud filtering method based on RGB-D information of claim 6, wherein: the V channel data extraction formula in the third step is:

V = max(R, G, B),

S = (V − min(R, G, B)) / V when V ≠ 0, and S = 0 when V = 0,

H = 60·(G − B) / (V − min(R, G, B)) when V = R, H = 120 + 60·(B − R) / (V − min(R, G, B)) when V = G, and H = 240 + 60·(R − G) / (V − min(R, G, B)) when V = B;

the V image composition formula is:

p_i = n_i / N,

Σ_{i=0}^{L−1} p_i = 1, p_i ≥ 0,

wherein the gray levels of the V image are the L levels 0 to L−1, n_i is the number of times a pixel with gray level i appears in the V image, N is the total number of pixels of the V image, and p_i is the occurrence probability of a pixel with gray level i in the V image.
If the initial segmentation threshold is k, the V image can be divided by this threshold into two classes, C_0 = {0, …, k−1} and C_1 = {k, …, L−1}. Let ω_0 be the proportion of the pixels of the V image falling in C_0 and ω_1 the proportion falling in C_1; the formulas are:

ω_0 = Σ_{i=0}^{k−1} p_i = ω(k),

ω_1 = Σ_{i=k}^{L−1} p_i = 1 − ω(k).

The total average gray level μ of the V image is:

μ = Σ_{i=0}^{L−1} i·p_i.

The average gray level μ_0 of the pixels in C_0 is:

μ_0 = (Σ_{i=0}^{k−1} i·p_i) / ω_0 = μ(k) / ω(k),

and the average gray level μ_1 of the pixels in C_1 is:

μ_1 = (Σ_{i=k}^{L−1} i·p_i) / ω_1 = (μ − μ(k)) / (1 − ω(k)),

wherein

μ(k) = Σ_{i=0}^{k−1} i·p_i,

ω(k) = Σ_{i=0}^{k−1} p_i.

From these it follows that μ = ω_0·μ_0 + ω_1·μ_1.
The between-class variance σ²(k) of classes C_0 and C_1 is then:

σ²(k) = ω_0·(μ_0 − μ)² + ω_1·(μ_1 − μ)² = ω_0·ω_1·(μ_0 − μ_1)².

Letting k take each value from 0 to L−1 and computing the between-class variance σ²(k) for each, the k that maximizes σ²(k) is the required optimal threshold Vth.
8. The point cloud filtering method based on RGB-D information of claim 7, wherein: in the fourth step, the V image is segmented according to the threshold Vth: when a pixel value in the V image is greater than or equal to Vth, the pixel value at the corresponding position is updated to 1; when it is less than Vth, it is updated to 0, so that a V binary image is generated.
CN201811284681.3A 2018-10-31 2018-10-31 Point cloud filtering system and filtering method based on RGB-D information Active CN109242809B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811284681.3A CN109242809B (en) 2018-10-31 2018-10-31 Point cloud filtering system and filtering method based on RGB-D information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811284681.3A CN109242809B (en) 2018-10-31 2018-10-31 Point cloud filtering system and filtering method based on RGB-D information

Publications (2)

Publication Number Publication Date
CN109242809A CN109242809A (en) 2019-01-18
CN109242809B true CN109242809B (en) 2023-06-13

Family

ID=65079737

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811284681.3A Active CN109242809B (en) 2018-10-31 2018-10-31 Point cloud filtering system and filtering method based on RGB-D information

Country Status (1)

Country Link
CN (1) CN109242809B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110400252B (en) * 2019-06-28 2022-09-06 中科航宇(北京)自动化工程技术有限公司 Material yard contour line digitalization method and system
CN113592884B (en) * 2021-08-19 2022-08-09 遨博(北京)智能科技有限公司 Human body mask generation method


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106570904A (en) * 2016-10-25 2017-04-19 大连理工大学 Multi-target relative posture recognition method based on Xtion camera
CN106780417A (en) * 2016-11-22 2017-05-31 北京交通大学 A kind of Enhancement Method and system of uneven illumination image
CN106780576A (en) * 2016-11-23 2017-05-31 北京航空航天大学 A kind of camera position and orientation estimation method towards RGBD data flows
CN107516077A (en) * 2017-08-17 2017-12-26 武汉大学 Traffic sign information extracting method based on laser point cloud and image data fusion

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Saurabh Gupta et al., "Learning Rich Features from RGB-D Images for Object Detection and Segmentation", European Conference on Computer Vision, 2014-12-31, pp. 345-360 *

Also Published As

Publication number Publication date
CN109242809A (en) 2019-01-18

Similar Documents

Publication Publication Date Title
CN110264416B (en) Sparse point cloud segmentation method and device
CN106780438B (en) Insulator defect detection method and system based on image processing
Abdelhamed et al. A high-quality denoising dataset for smartphone cameras
CN107316326B (en) Edge-based disparity map calculation method and device applied to binocular stereo vision
CN108335331B (en) Binocular vision positioning method and equipment for steel coil
CN106875437B (en) RGBD three-dimensional reconstruction-oriented key frame extraction method
CN109242809B (en) Point cloud filtering system and filtering method based on RGB-D information
CN107680140B (en) Depth image high-resolution reconstruction method based on Kinect camera
CN107369176B (en) System and method for detecting oxidation area of flexible IC substrate
CN111251336A (en) Double-arm cooperative intelligent assembly system based on visual positioning
JP6320115B2 (en) Image processing apparatus, image processing method, and program
CN104504722B (en) Method for correcting image colors through gray points
CN110059701B (en) Unmanned aerial vehicle landmark image processing method based on poor illumination
CN112200848B (en) Depth camera vision enhancement method and system under low-illumination weak-contrast complex environment
US9204130B2 (en) Method and system for creating a three dimensional representation of an object
CN113971669A (en) Three-dimensional detection system applied to pipeline damage identification
JP6232933B2 (en) Radiation distortion correction apparatus, road environment recognition apparatus, radial distortion correction method and program
CN110111341B (en) Image foreground obtaining method, device and equipment
CN108364274B (en) Nondestructive clear reconstruction method of optical image under micro-nano scale
CN102885631B (en) Distortion correction method applied to flat-panel charge coupling device (CCD) detector
CN106780425B (en) Positioning method of vortex detection system of heat transfer tube of VVER steam generator
CN112017108B (en) Satellite image color relative correction method based on adjustment of independent model method
CN115035175A (en) Three-dimensional model construction data processing method and system
CN111325802B (en) Circular mark point identification and matching method in helicopter wind tunnel test
CN106504200A (en) The image irradiation compensation method mapped based on hue shift estimation and pointwise tone and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant