CN113269859B - RGBD vision real-time reconstruction method and system for actuator operation space - Google Patents

RGBD vision real-time reconstruction method and system for actuator operation space

Info

Publication number
CN113269859B
CN113269859B (application CN202110642486.9A)
Authority
CN
China
Prior art keywords
contour
reconstruction
point
depth
points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110642486.9A
Other languages
Chinese (zh)
Other versions
CN113269859A (en)
Inventor
杨明浩
孙杨昌
贾清玉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Automation, Chinese Academy of Sciences
Original Assignee
Institute of Automation, Chinese Academy of Sciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Automation, Chinese Academy of Sciences
Priority to CN202110642486.9A
Publication of CN113269859A
Application granted
Publication of CN113269859B
Legal status: Active (granted)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/04: Texture mapping
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/13: Edge detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10004: Still image; Photographic image
    • G06T 2207/10012: Stereo images

Abstract

The application belongs to the field of real-time reconstruction of visual information, and in particular relates to an RGBD visual real-time reconstruction method and system for an actuator operation space, aiming to solve the problems of low real-time performance, heavy dependence on manual auxiliary calibration and poor adaptability to environmental change in the prior art. The application comprises the following steps: segmenting object contours in the RGB image acquired in the actuator operation space environment; mapping the object contours in the RGB and depth images to the actuator operation space by a projection method based on a deep neural network, and reducing reconstruction errors with a distance-limited outlier elimination strategy; performing Delaunay triangular subdivision on the RGB contour, and attaching the textures of the subdivided image to the three-dimensional object contour by triangular texture mapping according to the mapping relation between the RGB image and the actuator operation space, thereby completing the reconstruction of object information. The application does not need to calculate the internal and external parameters of the camera, has a high reconstruction speed and strong real-time performance, requires no manual auxiliary calibration, and adapts well to environmental changes.

Description

RGBD vision real-time reconstruction method and system for actuator operation space
Technical Field
The application belongs to the field of real-time reconstruction of visual information, and particularly relates to an RGBD visual real-time reconstruction method and system for an actuator operation space.
Background
The autonomous grabbing technology of mechanical arms is one of the research hotspots and difficulties in the field of robotics. At present, mechanical arm grabbing technology is widely applied in fields such as intelligent logistics sorting, intelligent warehousing and smart home. When a mechanical arm performs a grabbing task in real space, it first needs to acquire the accurate three-dimensional position of the object in the operation space of the mechanical arm.
When a conventional mechanical arm performs a grabbing task, determining the specific positions of the mechanical arm and the object usually requires either marking and positioning geometrically with a reference object or simply calculating the position with a neural network. However, geometric marking and positioning often requires heavy manual assistance for calibration and cannot adapt to environmental changes effectively and quickly, while calculation through a neural network must process too many image pixels, so the computation time is too long to meet real-time reconstruction requirements.
Disclosure of Invention
In order to solve the above problems in the prior art, namely low real-time performance, heavy dependence on manual auxiliary calibration and poor adaptability to environmental changes, the application provides an RGBD visual real-time reconstruction method for an actuator operation space, which comprises the following steps:
step S10, aligning an RGB image acquired by a Depth camera with a corresponding Depth image to obtain a Depth value d corresponding to any pixel point (u, v) of the RGB image;
step S20, performing instance segmentation on the RGB image acquired by the depth camera using a Mask R-CNN neural network to obtain instance segmentation results for different objects;
step S30, performing contour extraction on the instance segmentation results of the different objects, and eliminating contour points with abnormal depth information in combination with the depth value d corresponding to any pixel point (u, v) of the RGB image;
step S40, based on the contour with abnormal contour points eliminated, eliminating outliers in the object contour reconstruction by a density-peak-based distance-limited outlier correction method to obtain n corrected contour points;
step S50, extracting the n corrected contour points and mapping each contour point to the actuator operation space through a ResNet18 neural network to obtain an initial three-dimensional reconstruction result;
and step S60, performing Delaunay three-dimensional global subdivision of the initial three-dimensional reconstruction result, and mapping texture information of the space triangular patches obtained by subdivision in the RGB image onto corresponding space triangular patches obtained by subdivision of the initial three-dimensional reconstruction result in a texture mapping mode to obtain a real-time reconstruction result.
In some preferred embodiments, in step S30, contour points with abnormal depth information are eliminated by a method comprising:
judging the depth of the current contour point among the contour points obtained by contour extraction, and, if its depth value is zero or is at least twice the mean depth value of the other contour points, treating the current contour point as an abnormal point and removing it;
traversing each of the contour points obtained by contour extraction to obtain a contour with the abnormal contour points removed.
In some preferred embodiments, step S40 includes:
step S41, denoting the contour after the abnormal contour points are removed as $P_{RGB-D} = \{p_1, p_2, \dots, p_m\}$, where the i-th contour point is denoted $p_i = (u_i, v_i, d_i)$, $(u_i, v_i)$ is the pixel position of the i-th contour point in the corresponding RGB image, $d_i$ is the depth value of the i-th contour point in the corresponding Depth image, and m is the number of contour points in $P_{RGB-D}$;
step S42, mapping $P_{RGB-D}$ to the actuator operation space to obtain a reconstruction point set $P_{Recon}$;
step S43, removing outliers from the reconstruction point set $P_{Recon}$ by the density-peak-based distance-limited outlier correction method to obtain the n corrected contour points.
In some preferred embodiments, in step S42, $P_{RGB-D}$ is mapped to the actuator operation space by:

$$P_{Recon} = M \times P_{RGB-D}$$

wherein M is a transfer matrix.
In some preferred embodiments, step S43 includes:
step S431, denoting the reconstruction point set as $P_{Recon} = \{x_1, x_2, \dots, x_m\}$, where m is the number of contour points in $P_{Recon}$;
step S432, calculating the center point $\bar{x}$ of the reconstruction point set $P_{Recon}$, and constructing, based on the center point $\bar{x}$, the error set $X = \{\tilde{x}_1, \tilde{x}_2, \dots, \tilde{x}_m\}$, wherein $\tilde{x}_i = x_i - \bar{x}$;
step S433, for each $\tilde{x}_i$ in the error set X, separately calculating the local density $\rho_i$ of $\tilde{x}_i$ and, according to the local density $\rho_i$, calculating the error distance $\delta_i$;
step S434, according to the local density $\rho_i$ and error distance $\delta_i$ corresponding to the current $\tilde{x}_i$, respectively judging whether the current $\tilde{x}_i$ is an outlier;
step S435, removing the $x_i$ corresponding to each outlier $\tilde{x}_i$ from the reconstruction point set $P_{Recon}$ to obtain the n corrected contour points.
In some preferred embodiments, the center point $\bar{x}$ is calculated as:

$$\bar{x} = \frac{1}{m}\sum_{i=1}^{m} x_i$$

wherein $x_i$ is the reconstruction point in the reconstruction point set $P_{Recon}$ corresponding to the i-th contour point.
In some preferred embodiments, the local density $\rho_i$ is calculated as:

$$\rho_i = \sum_{j \neq i} \chi(d_{ij} - d_c)$$

wherein $d_{ij}$ is the distance between $x_i$ and $x_j$, $d_c$ is a set cutoff distance, $\chi(d_{ij} - d_c) = 1$ when $d_{ij} - d_c < 0$, and $\chi(d_{ij} - d_c) = 0$ otherwise.
In some preferred embodiments, the error distance $\delta_i$ is calculated as:

$$\delta_i = \min_{j:\,\rho_j > \rho_i} d_{ij}$$

wherein min represents the minimum value, i.e. $\delta_i$ is the distance from $\tilde{x}_i$ to the nearest point of higher local density.
In another aspect of the present application, an RGBD visual real-time reconstruction system for an actuator operation space is provided, the reconstruction system comprising the following modules:
the Depth alignment module is configured to align the RGB image acquired by the Depth camera with the corresponding Depth image to obtain a Depth value d corresponding to any pixel point (u, v) of the RGB image;
the instance segmentation module is configured to perform instance segmentation on the RGB image acquired by the depth camera using a Mask R-CNN neural network to obtain instance segmentation results for different objects;
the abnormality elimination module is configured to perform contour extraction on the instance segmentation results of the different objects and eliminate contour points with abnormal depth information in combination with the depth value d corresponding to any pixel point (u, v) of the RGB image;
the reconstruction and correction module is configured to remove, based on the contour with abnormal contour points eliminated, outliers in the object contour reconstruction by a density-peak-based distance-limited outlier correction method, obtaining n corrected contour points;
the mapping module is configured to extract the n corrected contour points and map each contour point to the actuator operation space through a ResNet18 neural network to obtain an initial three-dimensional reconstruction result;
the texture mapping module is configured to perform Delaunay three-dimensional global subdivision of the initial three-dimensional reconstruction result, map texture information of the space triangular patches obtained by subdivision in the RGB image onto corresponding space triangular patches obtained by subdivision of the initial three-dimensional reconstruction result in a texture mapping mode, and obtain a real-time reconstruction result.
In a third aspect of the present application, an electronic device is provided, including:
at least one processor; and
a memory communicatively coupled to at least one of the processors; wherein,
the memory stores instructions executable by the processor for execution by the processor to implement the RGBD visual real-time reconstruction method for the actuator-oriented operating space described above.
The application has the beneficial effects that:
the RGBD vision real-time reconstruction method for the actuator operation space does not need to calculate the internal and external parameters of a camera, only uses contour segmentation, uses a neural network projection to map to the actuator operation space, finally eliminates abnormal points and outliers, then performs Delaunay triangle subdivision on the RGB contour, and pastes textures on the three-dimensional object contour according to the mapping relation from the RGB image to the actuator operation space by the way of triangle texture mapping on the image after subdivision, thereby realizing the three-dimensional reconstruction process quickly and effectively, having high efficiency, accuracy and precision, simple calculation and small resource consumption, and being applicable to embedded equipment and mobile equipment with limited calculation capacity and occasions with higher real-time requirements.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the accompanying drawings in which:
fig. 1 is a schematic flow chart of the RGBD visual real-time reconstruction method for the operation space of the actuator.
Detailed Description
The application is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be noted that, for convenience of description, only the portions related to the present application are shown in the drawings.
It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be combined with each other. The application will be described in detail below with reference to the drawings in connection with embodiments.
The application provides an RGBD visual real-time reconstruction method for an actuator operation space, which comprises the following three steps: (1) segmenting object contours in the RGB image acquired in the actuator operation space environment; (2) mapping the object contours in the RGB and depth images to the actuator operation space by a projection method based on a deep neural network, while applying a distance-limited outlier elimination strategy to the reconstruction errors caused by noise so as to reduce them; (3) performing Delaunay triangular subdivision on the RGB contour and attaching textures to the three-dimensional object contour by triangular texture mapping according to the mapping relation from the RGB image to the actuator operation space, completing the reconstruction of object information. The first step quickly extracts the image of each object and reconstructs its contour to its three-dimensional position in the mechanical arm space; the second step achieves a more accurate real-time reconstruction by eliminating abnormal values from the initial result of the first step; the third step subdivides the texture picture of the two-dimensional image and attaches it to the reconstructed three-dimensional object surface by texture mapping. This process can quickly and accurately detect each object in the image and project the object position information into the three-dimensional working space of the manipulator.
The application discloses an RGBD vision real-time reconstruction method for an actuator operation space, which comprises the following steps:
step S10, aligning an RGB image acquired by a Depth camera with a corresponding Depth image to obtain a Depth value d corresponding to any pixel point (u, v) of the RGB image;
step S20, performing instance segmentation on the RGB image acquired by the depth camera using a Mask R-CNN neural network to obtain instance segmentation results for different objects;
step S30, performing contour extraction on the instance segmentation results of the different objects, and eliminating contour points with abnormal depth information in combination with the depth value d corresponding to any pixel point (u, v) of the RGB image;
step S40, based on the contour with abnormal contour points eliminated, eliminating outliers in the object contour reconstruction by a density-peak-based distance-limited outlier correction method to obtain n corrected contour points;
step S50, extracting the n corrected contour points and mapping each contour point to the actuator operation space through a ResNet18 neural network to obtain an initial three-dimensional reconstruction result;
and step S60, performing Delaunay three-dimensional global subdivision of the initial three-dimensional reconstruction result, and mapping texture information of the space triangular patches obtained by subdivision in the RGB image onto corresponding space triangular patches obtained by subdivision of the initial three-dimensional reconstruction result in a texture mapping mode to obtain a real-time reconstruction result.
In order to more clearly describe the RGBD visual real-time reconstruction method for the actuator operation space according to the present application, each step in the embodiment of the present application is described in detail below with reference to fig. 1.
The RGBD visual real-time reconstruction method for the actuator operation space of the first embodiment of the present application includes steps S10 to S60, and each step is described in detail as follows:
step S10, aligning the RGB image acquired by the Depth camera with the corresponding Depth image to obtain a Depth value d corresponding to any pixel point (u, v) of the RGB image.
Step S20, performing instance segmentation on the RGB image acquired by the depth camera using a Mask R-CNN neural network to obtain instance segmentation results for different objects.
In one embodiment of the present application, the Mask R-CNN network is used for instance segmentation, so that the contours of different objects can be distinguished in the RGB image; other suitable instance segmentation networks may be chosen in other application scenarios, which the present application does not limit.
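As an illustrative sketch of step S20 together with the contour extraction used in the following step S30, the snippet below runs torchvision's pretrained Mask R-CNN and extracts one outer contour per detected object with OpenCV; the score threshold and the torchvision/OpenCV choice are assumptions, not part of the patent.

```python
# Instance segmentation followed by per-object contour extraction.
import cv2
import numpy as np
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

def object_contours(rgb, score_thresh=0.7):
    """Return one outer contour, an (k, 2) array of (u, v) pixels, per object."""
    x = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        out = model([x])[0]
    contours = []
    for mask, score in zip(out["masks"], out["scores"]):
        if score < score_thresh:
            continue
        binary = (mask[0].numpy() > 0.5).astype(np.uint8)
        cs, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
        if cs:
            contours.append(max(cs, key=cv2.contourArea).reshape(-1, 2))
    return contours
```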
Step S30, performing contour extraction on the instance segmentation results of the different objects, and eliminating contour points with abnormal depth information in combination with the depth value d corresponding to any pixel point (u, v) of the RGB image:
judging the depth of the current contour point among the contour points obtained by contour extraction, and, if its depth value is zero or is at least twice the mean depth value of the other contour points, treating the current contour point as an abnormal point and removing it;
traversing each of the contour points obtained by contour extraction to obtain a contour with the abnormal contour points removed.
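A minimal sketch of this depth-anomaly rule, assuming the contour points are stored as (u, v, d) rows of a NumPy array:

```python
# Drop a contour point if its depth is zero or at least twice the mean
# depth of the other contour points (the rule of step S30).
import numpy as np

def remove_depth_anomalies(points):
    """points: (m, 3) array of (u, v, d) contour points."""
    keep = []
    d = points[:, 2].astype(float)
    for i in range(len(points)):
        others = np.delete(d, i)
        if d[i] == 0 or (others.mean() > 0 and d[i] >= 2.0 * others.mean()):
            continue  # abnormal: zero depth or >= 2x mean of the others
        keep.append(points[i])
    return np.asarray(keep)
```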
Step S40, based on the contour with abnormal contour points eliminated, eliminating outliers in the object contour reconstruction by a density-peak-based distance-limited outlier correction method to obtain n corrected contour points.
Step S41, denoting the contour after the abnormal contour points are removed as $P_{RGB-D} = \{p_1, p_2, \dots, p_m\}$, where the i-th contour point is denoted $p_i = (u_i, v_i, d_i)$, $(u_i, v_i)$ is the pixel position of the i-th contour point in the corresponding RGB image, $d_i$ is the depth value of the i-th contour point in the corresponding Depth image, and m is the number of contour points in $P_{RGB-D}$.
Step S42, mapping $P_{RGB-D}$ to the actuator operation space to obtain the reconstruction point set $P_{Recon}$, as shown in formula (1):

$$P_{Recon} = M \times P_{RGB-D} \tag{1}$$
wherein M is a transfer matrix.
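The patent does not state the dimensions of the transfer matrix M; the sketch below assumes homogeneous (u, v, d, 1) coordinates and a 3 x 4 matrix as one plausible reading of formula (1):

```python
# Apply a transfer matrix M to the (u, v, d) contour points.
# Assumption: M is 3 x 4 and acts on homogeneous pixel-depth coordinates.
import numpy as np

def map_to_workspace(points_uvd, M):
    """points_uvd: (m, 3) array; M: (3, 4) transfer matrix. Returns (m, 3)."""
    homo = np.hstack([points_uvd, np.ones((len(points_uvd), 1))])  # (m, 4)
    return (M @ homo.T).T  # P_Recon = M x P_RGB-D
```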
Step S43, removing outliers from the reconstruction point set $P_{Recon}$ by the density-peak-based distance-limited outlier correction method to obtain the n corrected contour points.
Step S431, denoting the reconstruction point set as $P_{Recon} = \{x_1, x_2, \dots, x_m\}$, where m is the number of contour points in $P_{Recon}$;
step S432, calculating the center point $\bar{x}$ of the reconstruction point set $P_{Recon}$ and constructing, based on the center point $\bar{x}$, the error set $X = \{\tilde{x}_1, \tilde{x}_2, \dots, \tilde{x}_m\}$.

The center point $\bar{x}$ is calculated as shown in formula (2):

$$\bar{x} = \frac{1}{m}\sum_{i=1}^{m} x_i \tag{2}$$

wherein $x_i$ is the reconstruction point in the reconstruction point set $P_{Recon}$ corresponding to the i-th contour point.

$\tilde{x}_i$ is calculated as shown in formula (3):

$$\tilde{x}_i = x_i - \bar{x} \tag{3}$$
Step S433, for each $\tilde{x}_i$ in the error set X, separately calculating the local density $\rho_i$ of $\tilde{x}_i$ and, according to the local density $\rho_i$, calculating the error distance $\delta_i$.

If $\tilde{x}_i$ lies near a center point, more elements fall in its neighborhood and it has a relatively higher density. The local density $\rho_i$ is calculated as shown in formula (4):

$$\rho_i = \sum_{j \neq i} \chi(d_{ij} - d_c) \tag{4}$$

wherein $d_{ij}$ is the distance between $x_i$ and $x_j$, $d_c$ is a set cutoff distance, $\chi(d_{ij} - d_c) = 1$ when $d_{ij} - d_c < 0$, and $\chi(d_{ij} - d_c) = 0$ otherwise.
A point's distance from denser points nearby should be distinguished from its spacing from the center; the error distance $\delta_i$ is calculated as shown in formula (5):

$$\delta_i = \min_{j:\,\rho_j > \rho_i} d_{ij} \tag{5}$$

wherein min represents the minimum value.
In one embodiment of the application, $d_c$ is chosen so that the number of neighbors of a point is at least 1 and at most 2.0% of the total number of points in the data set. $\delta_i$ is the minimum distance from $\tilde{x}_i$ to any point of higher density. A point with the greatest $\delta$ value is the center of a well-formed object cluster, while points with low $\rho$ values are outliers.
In step S434 of the present application, according to the local density $\rho_i$ and error distance $\delta_i$ corresponding to the current $\tilde{x}_i$, it is respectively judged whether the current $\tilde{x}_i$ is an outlier.
According to the above process, $X_{in} = X - X_{out}$ is obtained, wherein $X_{out}$ is the set of outlier terms in X, and the corresponding set of retained contour points $P_{in}$ is then obtained.
Step S435, removing the $x_i$ corresponding to each outlier $\tilde{x}_i$ from the reconstruction point set $P_{Recon}$ to obtain the n corrected contour points.
Abnormal values of the three-dimensional contour are eliminated based on DP-DROPA. Suppose the extracted $P_{Recon}$ of an object contains m reconstruction points $(x_i, y_i, z_i)$, $0 \le i < m$, whose maximum and minimum coordinates bound the robot working space. The contour point set $P_{Recon}$ is then transferred to a normalized point set whose coordinate values lie in the range $(0, 1]$.
The DP-DROPA algorithm is then applied to remove outliers from the normalized point set, yielding the corrected contour set.
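The following sketch assembles steps S431 to S435 together with the normalization just described. The cutoff heuristic and the rho/delta decision thresholds are assumptions, since the text does not fix them numerically:

```python
# Minimal sketch of the density-peak distance-limited outlier removal
# (DP-DROPA). The quantile thresholds below are assumed, not from the text.
import numpy as np

def dp_dropa(P_recon, neighbor_frac=0.02, quantile=0.05):
    """P_recon: (m, 3) reconstruction points. Returns the inlier points."""
    m = len(P_recon)
    # Normalize each coordinate into (0, 1] using the workspace extent.
    lo, hi = P_recon.min(axis=0), P_recon.max(axis=0)
    pts = (P_recon - lo) / np.where(hi > lo, hi - lo, 1.0)

    # Error set: offsets from the center point x_bar (formulas (2)-(3)).
    X = pts - pts.mean(axis=0)

    # Pairwise distances d_ij between error vectors.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)

    # Cutoff d_c: on average about neighbor_frac * m neighbors per point
    # (the usual density-peaks heuristic, at least one neighbor).
    k = max(1, int(round(neighbor_frac * m)))
    d_c = np.sort(d, axis=1)[:, 1:k + 1].mean()

    # Local density rho_i (formula (4)): neighbors strictly within d_c.
    rho = (d < d_c).sum(axis=1) - 1  # exclude the point itself

    # Error distance delta_i (formula (5)): distance to the nearest point
    # of higher density; the densest point takes the maximum d_ij instead.
    delta = np.empty(m)
    for i in range(m):
        higher = rho > rho[i]
        delta[i] = d[i, higher].min() if higher.any() else d[i].max()

    # Assumed decision rule: outliers combine low density with large delta.
    outlier = (rho <= np.quantile(rho, quantile)) & \
              (delta >= np.quantile(delta, 1.0 - quantile))
    return P_recon[~outlier]
```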
Step S50, extracting the n corrected contour points and mapping each contour point to the actuator operation space through a ResNet18 neural network to obtain an initial three-dimensional reconstruction result.
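The patent maps each corrected contour point through a ResNet18 network but does not detail how point inputs are arranged for it; the sketch below uses a small residual MLP regressor purely as a hypothetical stand-in for that projection network:

```python
# Hypothetical stand-in for the learned point-to-workspace projection:
# regress (x, y, z) workspace coordinates from (u, v, d) inputs.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
    def forward(self, x):
        return torch.relu(x + self.net(x))  # skip connection, as in ResNet

class PointProjector(nn.Module):
    def __init__(self, hidden=64, blocks=4):
        super().__init__()
        self.inp = nn.Linear(3, hidden)
        self.body = nn.Sequential(*[ResidualBlock(hidden) for _ in range(blocks)])
        self.out = nn.Linear(hidden, 3)
    def forward(self, uvd):
        return self.out(self.body(torch.relu(self.inp(uvd))))

projector = PointProjector()
xyz = projector(torch.tensor([[320.0, 240.0, 0.85]]))  # one contour point
```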
And step S60, performing Delaunay three-dimensional global subdivision of the initial three-dimensional reconstruction result, and mapping texture information of the space triangular patches obtained by subdivision in the RGB image onto corresponding space triangular patches obtained by subdivision of the initial three-dimensional reconstruction result in a texture mapping mode to obtain a real-time reconstruction result.
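As a sketch of step S60, the snippet below triangulates the 2D contour points with scipy's Delaunay and reuses the same vertex indices for the reconstructed 3D points, so each image triangle's texture can be carried onto the matching spatial patch; the actual rendering is left to the graphics pipeline and only indicated in a comment:

```python
# Delaunay subdivision shared between image space and workspace.
import numpy as np
from scipy.spatial import Delaunay

def triangulate_and_map(uv_points, xyz_points, rgb):
    """uv_points: (n, 2) pixels; xyz_points: (n, 3) workspace points."""
    tri = Delaunay(uv_points)          # triangulation in image space
    patches = []
    for simplex in tri.simplices:      # same vertex indices in 2D and 3D
        uv_tri = uv_points[simplex]    # texture triangle in the RGB image
        xyz_tri = xyz_points[simplex]  # corresponding spatial triangle
        patches.append((xyz_tri, uv_tri))
    # A renderer then samples rgb inside each uv_tri and maps it onto
    # the matching xyz_tri patch (triangle texture mapping).
    return patches
```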
Although the steps are described in the above-described sequential order in the above-described embodiments, it will be appreciated by those skilled in the art that in order to achieve the effects of the present embodiments, the steps need not be performed in such order, and may be performed simultaneously (in parallel) or in reverse order, and such simple variations are within the scope of the present application.
The RGBD visual real-time reconstruction system for an actuator operation space of the second embodiment of the present application comprises the following modules:
the Depth alignment module is configured to align the RGB image acquired by the Depth camera with the corresponding Depth image to obtain a Depth value d corresponding to any pixel point (u, v) of the RGB image;
the instance segmentation module is configured to perform instance segmentation on the RGB image acquired by the depth camera using a Mask R-CNN neural network to obtain instance segmentation results for different objects;
the abnormality elimination module is configured to perform contour extraction on the instance segmentation results of the different objects and eliminate contour points with abnormal depth information in combination with the depth value d corresponding to any pixel point (u, v) of the RGB image;
the reconstruction and correction module is configured to remove, based on the contour with abnormal contour points eliminated, outliers in the object contour reconstruction by a density-peak-based distance-limited outlier correction method, obtaining n corrected contour points;
the mapping module is configured to extract the n corrected contour points and map each contour point to the actuator operation space through a ResNet18 neural network to obtain an initial three-dimensional reconstruction result;
the texture mapping module is configured to perform Delaunay three-dimensional global subdivision of the initial three-dimensional reconstruction result, map texture information of the space triangular patches obtained by subdivision in the RGB image onto corresponding space triangular patches obtained by subdivision of the initial three-dimensional reconstruction result in a texture mapping mode, and obtain a real-time reconstruction result.
It will be clear to those skilled in the art that, for convenience and brevity of description, the specific working process of the system described above and the related description may refer to the corresponding process in the foregoing method embodiment, which is not repeated here.
It should be noted that, in the RGBD visual real-time reconstruction system for an actuator operation space provided in the foregoing embodiment, only the division of the foregoing functional modules is illustrated, in practical application, the foregoing functional allocation may be performed by different functional modules according to needs, that is, the modules or steps in the foregoing embodiment of the present application are further decomposed or combined, for example, the modules in the foregoing embodiment may be combined into one module, or may be further split into multiple sub-modules, so as to complete all or part of the functions described above. The names of the modules and steps related to the embodiments of the present application are merely for distinguishing the respective modules or steps, and are not to be construed as unduly limiting the present application.
An electronic device of a third embodiment of the present application includes:
at least one processor; and
a memory communicatively coupled to at least one of the processors; wherein,
the memory stores instructions executable by the processor for execution by the processor to implement the RGBD visual real-time reconstruction method for the actuator-oriented operating space described above.
A computer readable storage medium of a fourth embodiment of the present application stores computer instructions for execution by the computer to implement the RGBD visual real-time reconstruction method for an actuator operation space described above.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the storage device and the processing device described above and the related description may refer to the corresponding process in the foregoing method embodiment, which is not repeated herein.
Those of skill in the art will appreciate that the various illustrative modules and method steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of both, and that the programs corresponding to the software modules and method steps may be stored in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. To clearly illustrate this interchangeability of electronic hardware and software, various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as electronic hardware or software depends on the particular application and the design constraints imposed on the solution. Those skilled in the art may implement the described functionality in different ways for each particular application, but such implementations should not be considered as going beyond the scope of the present application.
The terms "first," "second," and the like, are used for distinguishing between similar objects and not for describing a particular sequential or chronological order.
The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus/apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus/apparatus.
Thus far, the technical solution of the present application has been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of protection of the present application is not limited to these specific embodiments. Equivalent modifications and substitutions for related technical features may be made by those skilled in the art without departing from the principles of the present application, and such modifications and substitutions will be within the scope of the present application.

Claims (7)

1. An RGBD visual real-time reconstruction method for an actuator operation space, characterized by comprising the following steps:
step S10, aligning an RGB image acquired by a Depth camera with a corresponding Depth image to obtain a Depth value d corresponding to any pixel point (u, v) of the RGB image;
step S20, performing instance segmentation on the RGB image acquired by the depth camera using a Mask R-CNN neural network to obtain instance segmentation results for different objects;
step S30, performing contour extraction on the instance segmentation results of the different objects, and eliminating contour points with abnormal depth information in combination with the depth value d corresponding to any pixel point (u, v) of the RGB image;
step S40, based on the contour with abnormal contour points eliminated, eliminating outliers in the object contour reconstruction by a density-peak-based distance-limited outlier correction method to obtain n corrected contour points:
step S41, denoting the contour after the abnormal contour points are removed as $P_{RGB-D} = \{p_1, p_2, \dots, p_m\}$, where the i-th contour point is denoted $p_i = (u_i, v_i, d_i)$, $(u_i, v_i)$ is the pixel position of the i-th contour point in the corresponding RGB image, $d_i$ is the depth value of the i-th contour point in the corresponding Depth image, and m is the number of contour points in $P_{RGB-D}$;
step S42, mapping $P_{RGB-D}$ to the actuator operation space to obtain the reconstruction point set $P_{Recon}$:

$$P_{Recon} = M \times P_{RGB-D}$$
Wherein M is a transfer matrix;
step S43, denoting the reconstruction point set as $P_{Recon} = \{x_1, x_2, \dots, x_m\}$, where m is the number of contour points in $P_{Recon}$; calculating the center point $\bar{x}$ of the reconstruction point set $P_{Recon}$ and constructing, based on the center point $\bar{x}$, the error set $X = \{\tilde{x}_1, \tilde{x}_2, \dots, \tilde{x}_m\}$; for each $\tilde{x}_i$ in the error set X, separately calculating the local density $\rho_i$ of $\tilde{x}_i$ and, according to the local density $\rho_i$, calculating the error distance $\delta_i$; according to the local density $\rho_i$ and error distance $\delta_i$ corresponding to the current $\tilde{x}_i$, respectively judging whether the current $\tilde{x}_i$ is an outlier; and removing the $x_i$ corresponding to each outlier $\tilde{x}_i$ from the reconstruction point set $P_{Recon}$ to obtain the n corrected contour points; wherein $\tilde{x}_i = x_i - \bar{x}$, and $x_i$ is the reconstruction point in $P_{Recon}$ corresponding to the i-th contour point;
step S50, extracting the n corrected contour points and mapping each contour point to the actuator operation space through a ResNet18 neural network to obtain an initial three-dimensional reconstruction result;
and step S60, performing Delaunay three-dimensional global subdivision of the initial three-dimensional reconstruction result, and mapping texture information of the space triangular patches obtained by subdivision in the RGB image onto corresponding space triangular patches obtained by subdivision of the initial three-dimensional reconstruction result in a texture mapping mode to obtain a real-time reconstruction result.
2. The RGBD visual real-time reconstruction method for an actuator operation space according to claim 1, wherein in step S30 contour points with abnormal depth information are eliminated by a method comprising:
judging the depth of the current contour point among the contour points obtained by contour extraction, and, if its depth value is zero or is at least twice the mean depth value of the other contour points, treating the current contour point as an abnormal point and removing it;
traversing each contour point in the contour points obtained by contour extraction to obtain a contour with abnormal contour points removed.
3. The RGBD visual real-time reconstruction method for an actuator operation space according to claim 1, wherein the center point $\bar{x}$ is calculated as:

$$\bar{x} = \frac{1}{m}\sum_{i=1}^{m} x_i$$

4. The RGBD visual real-time reconstruction method for an actuator operation space according to claim 1, wherein the local density $\rho_i$ is calculated as:

$$\rho_i = \sum_{j \neq i} \chi(d_{ij} - d_c)$$

wherein $d_{ij}$ is the distance between $x_i$ and $x_j$, $d_c$ is a set cutoff distance, $\chi(d_{ij} - d_c) = 1$ when $d_{ij} - d_c < 0$, and $\chi(d_{ij} - d_c) = 0$ otherwise.
5. The RGBD visual real-time reconstruction method for an actuator operation space according to claim 4, wherein the error distance $\delta_i$ is calculated as:

$$\delta_i = \min_{j:\,\rho_j > \rho_i} d_{ij}$$

wherein min represents the minimum value.
6. An RGBD visual real-time reconstruction system for an actuator operation space, characterized by comprising the following modules:
the Depth alignment module is configured to align the RGB image acquired by the Depth camera with the corresponding Depth image to obtain a Depth value d corresponding to any pixel point (u, v) of the RGB image;
the instance segmentation module is configured to perform instance segmentation on the RGB image acquired by the depth camera using a Mask R-CNN neural network to obtain instance segmentation results for different objects;
the abnormality elimination module is configured to perform contour extraction on the instance segmentation results of the different objects and eliminate contour points with abnormal depth information in combination with the depth value d corresponding to any pixel point (u, v) of the RGB image;
the reconstruction and correction module is configured to remove, based on the contour with abnormal contour points eliminated, outliers in the object contour reconstruction by a density-peak-based distance-limited outlier correction method, obtaining n corrected contour points;
the mapping module is configured to extract the n corrected contour points and map each contour point to the actuator operation space through a ResNet18 neural network to obtain an initial three-dimensional reconstruction result;
the texture mapping module is configured to perform Delaunay three-dimensional global subdivision of the initial three-dimensional reconstruction result, map texture information of the space triangular patches obtained by subdivision in the RGB image onto corresponding space triangular patches obtained by subdivision of the initial three-dimensional reconstruction result in a texture mapping mode, and obtain a real-time reconstruction result;
the abnormal eliminating module eliminates outline points with abnormal depth information, and comprises the following steps:
marking the contour with the abnormal contour points removed as $P_{RGB-D} = \{p_1, p_2, \dots, p_m\}$, where the i-th contour point is denoted $p_i = (u_i, v_i, d_i)$, $(u_i, v_i)$ is the pixel position of the i-th contour point in the corresponding RGB image, $d_i$ is the depth value of the i-th contour point in the corresponding Depth image, and m is the number of contour points in $P_{RGB-D}$;
mapping $P_{RGB-D}$ to the actuator operation space to obtain the reconstruction point set $P_{Recon}$:

$$P_{Recon} = M \times P_{RGB-D}$$
Wherein M is a transfer matrix;
denoting the reconstruction point set as $P_{Recon} = \{x_1, x_2, \dots, x_m\}$, where m is the number of contour points in $P_{Recon}$; calculating the center point $\bar{x}$ of the reconstruction point set $P_{Recon}$ and constructing, based on the center point $\bar{x}$, the error set $X = \{\tilde{x}_1, \tilde{x}_2, \dots, \tilde{x}_m\}$; for each $\tilde{x}_i$ in the error set X, separately calculating the local density $\rho_i$ of $\tilde{x}_i$ and, according to the local density $\rho_i$, calculating the error distance $\delta_i$; according to the local density $\rho_i$ and error distance $\delta_i$ corresponding to the current $\tilde{x}_i$, respectively judging whether the current $\tilde{x}_i$ is an outlier; and removing the $x_i$ corresponding to each outlier $\tilde{x}_i$ from the reconstruction point set $P_{Recon}$ to obtain the n corrected contour points; wherein $\tilde{x}_i = x_i - \bar{x}$, and $x_i$ is the reconstruction point in $P_{Recon}$ corresponding to the i-th contour point.
7. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to at least one of the processors; wherein,
the memory stores instructions executable by the processor for execution by the processor to implement the RGBD visual real-time reconstruction method of an actuator-oriented operating space of any of claims 1-5.
CN202110642486.9A 2021-06-09 2021-06-09 RGBD vision real-time reconstruction method and system for actuator operation space Active CN113269859B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110642486.9A 2021-06-09 2021-06-09 RGBD vision real-time reconstruction method and system for actuator operation space (granted as CN113269859B)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110642486.9A 2021-06-09 2021-06-09 RGBD vision real-time reconstruction method and system for actuator operation space (granted as CN113269859B)

Publications (2)

Publication Number Publication Date
CN113269859A CN113269859A (en) 2021-08-17
CN113269859B (en) 2023-11-24

Family

ID=77234742

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110642486.9A Active CN113269859B (en) 2021-06-09 2021-06-09 RGBD vision real-time reconstruction method and system for actuator operation space

Country Status (1)

Country Link
CN (1) CN113269859B (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160086350A1 * 2014-09-22 2016-03-24 Foundation for Research and Technology - Hellas (FORTH) (acting through its Institute of Computer Science) Apparatuses, methods and systems for recovering a 3-dimensional skeletal model of the human body

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107170037A (en) * 2016-03-07 2017-09-15 Shenzhen Eagle Eye Online Electronic Technology Co., Ltd. A kind of real-time three-dimensional point cloud method for reconstructing and system based on multiple-camera
CN106709947A (en) * 2016-12-20 2017-05-24 Xi'an Jiaotong University RGBD camera-based three-dimensional human body rapid modeling system
CN110148217A (en) * 2019-05-24 2019-08-20 Beijing HJIMI Technology Co., Ltd. A kind of real-time three-dimensional method for reconstructing, device and equipment
CN110310362A (en) * 2019-06-24 2019-10-08 Institute of Automation, Chinese Academy of Sciences High dynamic scene three-dimensional reconstruction method, system based on depth map and IMU
CN112150609A (en) * 2020-09-10 2020-12-29 Liu Fan VR system based on indoor real-time dense three-dimensional reconstruction technology
CN112132972A (en) * 2020-09-29 2020-12-25 Lingmeixin (Beijing) Technology Co., Ltd. Three-dimensional reconstruction method and system for fusing laser and image data

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Fast point cloud registration method for RGBD depth data; Su Benyue et al.; Journal of Image and Graphics; Vol. 22, No. 5; pp. 643-655 *

Also Published As

Publication number Publication date
CN113269859A (en) 2021-08-17

Similar Documents

Publication Publication Date Title
CN109459119B (en) Weight measurement method, device and computer readable storage medium
CN108369650B (en) Method for identifying possible characteristic points of calibration pattern
CN108225319B (en) Monocular vision rapid relative pose estimation system and method based on target characteristics
CN107507277B (en) Three-dimensional point cloud reconstruction method and device, server and readable storage medium
CN110458772B (en) Point cloud filtering method and device based on image processing and storage medium
CN109784250B (en) Positioning method and device of automatic guide trolley
JP6483168B2 (en) System and method for efficiently scoring a probe in an image with a vision system
CN108362205B (en) Space distance measuring method based on fringe projection
CN112686950B (en) Pose estimation method, pose estimation device, terminal equipment and computer readable storage medium
CN111340834B (en) Lining plate assembly system and method based on laser radar and binocular camera data fusion
CN111681186A (en) Image processing method and device, electronic equipment and readable storage medium
CN113269859B (en) RGBD vision real-time reconstruction method and system for actuator operation space
CN114926514A (en) Registration method and device of event image and RGB image
Zhang et al. L 2 V 2 T 2 Calib: Automatic and Unified Extrinsic Calibration Toolbox for Different 3D LiDAR, Visual Camera and Thermal Camera
CN110047032B (en) Local self-adaptive mismatching point removing method based on radial basis function fitting
CN111633358B (en) Laser-based weld parameter measuring method and device
CN111914857B (en) Layout method, device and system for plate excess material, electronic equipment and storage medium
CN114387353A (en) Camera calibration method, calibration device and computer readable storage medium
CN113065483A (en) Positioning method, positioning device, electronic equipment, medium and robot
CN111178366B (en) Mobile robot positioning method and mobile robot
CN114648544A (en) Sub-pixel ellipse extraction method
CN109919998B (en) Satellite attitude determination method and device and terminal equipment
CN112446926A (en) Method and device for calibrating relative position of laser radar and multi-eye fisheye camera
CN111553969A (en) Texture mapping method, medium, terminal and device based on gradient domain
Yu et al. A self-correction based algorithm for single-shot camera calibration

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant