CN117911631B - Three-dimensional reconstruction method based on heterogeneous image matching - Google Patents
Three-dimensional reconstruction method based on heterogeneous image matching
- Publication number
- CN117911631B CN117911631B CN202410308781.4A CN202410308781A CN117911631B CN 117911631 B CN117911631 B CN 117911631B CN 202410308781 A CN202410308781 A CN 202410308781A CN 117911631 B CN117911631 B CN 117911631B
- Authority
- CN
- China
- Prior art keywords
- pixels
- images
- pixel
- circle
- circle layer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 title claims abstract description 57
- 238000012545 processing Methods 0.000 claims abstract description 25
- 238000007781 pre-processing Methods 0.000 claims abstract description 10
- 238000003379 elimination reaction Methods 0.000 claims description 21
- 238000012937 correction Methods 0.000 claims description 12
- 238000012216 screening Methods 0.000 claims description 6
- 238000012986 modification Methods 0.000 claims description 5
- 230000004048 modification Effects 0.000 claims description 5
- 230000008030 elimination Effects 0.000 claims description 3
- 230000001066 destructive effect Effects 0.000 abstract description 7
- 230000000007 visual effect Effects 0.000 abstract description 7
- 230000009286 beneficial effect Effects 0.000 description 3
- 230000000694 effects Effects 0.000 description 2
- 230000010485 coping Effects 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 238000005286 illumination Methods 0.000 description 1
- 230000009466 transformation Effects 0.000 description 1
Landscapes
- Image Processing (AREA)
Abstract
The invention relates to the field of image processing, and provides a three-dimensional reconstruction method based on heterogeneous image matching, which comprises the steps of: obtaining a plurality of heterogeneous images; performing image preprocessing on the heterogeneous images to obtain a plurality of first images; extracting and marking feature points on each first image; performing elimination processing on the plurality of first images based on the feature points to obtain a plurality of second images; and completing three-dimensional reconstruction based on the plurality of second images. The method improves the precision of the model output by three-dimensional reconstruction and ensures that the visual information of the heterogeneous images is accurately reflected in the three-dimensional model. By performing elimination processing on pixels, it reduces the degradation of reconstruction quality caused by inconsistency among the heterogeneous images; by optimizing the heterogeneous images before they are used as model input, it improves the precision of the output model and can substantially increase the reconstruction speed of large-scale or complex three-dimensional models.
Description
Technical Field
The invention relates to the field of image processing, in particular to a three-dimensional reconstruction method based on heterogeneous image matching.
Background
In the fields of computer vision and image processing, three-dimensional reconstruction techniques aim to recover the three-dimensional structure of an object from two-dimensional images, and have been widely applied in virtual reality, game development, cultural relic protection, and many other fields. However, conventional three-dimensional reconstruction methods generally rely on multi-view images from the same source, which is often a limitation in practice, especially where the image sources are complex or acquisition conditions are constrained. Furthermore, when faced with heterogeneous images obtained from different devices or different perspectives, geometric distortion between images becomes a significant challenge. Such distortions typically result from differences in shooting angle, lens characteristics, or scene depth, and make direct matching between images difficult, thereby affecting the accuracy and realism of the reconstructed three-dimensional model. Therefore, a method that effectively handles geometric distortion and performs three-dimensional reconstruction from heterogeneous images can greatly improve the flexibility and applicability of three-dimensional reconstruction, ensure the accuracy and visual quality of the reconstructed model, and meet broader application requirements of three-dimensional reconstruction technology.
Disclosure of Invention
The invention aims to provide a three-dimensional reconstruction method based on heterogeneous image matching, which solves one or more technical problems existing in the prior art and at least provides a beneficial alternative.
The invention provides a three-dimensional reconstruction method based on heterogeneous image matching, which comprises the steps of: obtaining a plurality of heterogeneous images; performing image preprocessing on the heterogeneous images to obtain a plurality of first images; extracting and marking feature points on each first image; performing elimination processing on the plurality of first images based on the feature points to obtain a plurality of second images; and completing three-dimensional reconstruction based on the plurality of second images. The method improves the precision of the model output by three-dimensional reconstruction and ensures that the visual information of the heterogeneous images is accurately reflected in the three-dimensional model. By performing elimination processing on pixels, it reduces the degradation of reconstruction quality caused by inconsistency among the heterogeneous images; by optimizing the heterogeneous images before they are used as model input, it improves the precision of the output model and can substantially increase the reconstruction speed of large-scale or complex three-dimensional models.
To achieve the above object, according to an aspect of the present invention, there is provided a three-dimensional reconstruction method based on heterogeneous image matching, the method comprising the steps of:
S100, acquiring a plurality of heterogeneous images, and performing image preprocessing on the heterogeneous images to obtain a plurality of first images;
S200, extracting and marking characteristic points of each first image respectively;
S300, performing elimination processing on the plurality of first images respectively based on the feature points to obtain a plurality of second images;
And S400, completing three-dimensional reconstruction based on the plurality of second images.
Further, in S100, the plurality of heterogeneous images refer to a plurality of images from different types of photographic equipment or obtained under different photographing conditions.
Further, in S100, the method for performing image preprocessing on the plurality of heterogeneous images to obtain a plurality of first images specifically comprises: performing denoising, color correction, and geometric distortion correction on the plurality of heterogeneous images in sequence, and saving each preprocessed heterogeneous image as a first image (a preprocessed heterogeneous image is one on which denoising, color correction, and geometric distortion correction have been completed).
Optionally, in S200, feature points are extracted and marked on each first image, where the feature point extraction method is the SIFT algorithm.
Further, in S300, the method for performing elimination processing on the plurality of first images based on the feature points to obtain a plurality of second images specifically comprises: recording the number of first images as M, representing the j-th of the M first images by img(j), where j is a serial number; performing elimination processing on img(j) from j=1 to j=M, thereby completing the elimination processing of the M first images; and saving the M first images after elimination processing as M second images.
Further, the method for performing elimination processing on img(j) specifically comprises the following steps:
S301, recording that img(j) contains N(j) pixels, wherein each pixel corresponds to a pixel value; p(i, j) represents the i-th pixel among the N(j) pixels, and pv(i, j) represents the pixel value corresponding to p(i, j); i is a serial number, i=1, 2, …, N(j);
The img(j) contains R(j) feature points;
Acquiring a first pixel, representing the first pixel by P(A, j), wherein A is a serial number, A ∈ [1, N(j)], and turning to S302;
S302, selecting the pixel whose pixel value is closest to pv(A, j) in the 1-circle layer of P(A, j), representing that pixel by P(B, j), and turning to S303; wherein pv(A, j) represents the pixel value corresponding to P(A, j), B is a serial number, B ∈ [1, N(j)];
S303, if the 1-circle layer of P(B, j) does not contain any feature point, marking P(B, j) as a sub-elimination pixel, simultaneously updating P(A, j) to P(B, j) (namely taking the current P(B, j) as the new P(A, j)), and turning to S302;
If the 1-circle layer of P(B, j) contains any feature point, selecting the pixel with the smallest pixel value in the 1-circle layer of P(B, j), representing that pixel by P(M0, j), marking P(M0, j) as a sub-elimination pixel, marking all feature points in the 1-circle layer of P(B, j) as elimination points, and turning to S304; pv(M0, j) represents the pixel value corresponding to P(M0, j);
S304, if all of the R(j) feature points have been marked as elimination points, turning to S305; otherwise, updating P(A, j) to P(M0, j) (namely taking the current P(M0, j) as the new P(A, j)) and turning to S302;
S305, performing pixel replacement on the sub-elimination pixels.
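The traversal in S301 to S305 can be sketched in plain C++ as follows. This is a minimal sketch under stated assumptions: the "1-circle layer" is read as the 8-neighborhood excluding already-marked pixels, the pixel replacement of S305 is left out, and all names (eliminationWalk, layer, marked) are illustrative rather than taken from the patent.

```cpp
#include <cmath>
#include <cstdlib>
#include <set>
#include <utility>
#include <vector>

using Px = std::pair<int, int>; // (row, col)

// Walks one image per S301-S304 and returns the sub-elimination pixels.
// grid: row-major pixel values; features: marked feature points; start: the first pixel P(A, j).
std::set<Px> eliminationWalk(const std::vector<std::vector<int>>& grid,
                             const std::set<Px>& features, Px start) {
    std::set<Px> marked;     // sub-elimination pixels
    std::set<Px> cancelled;  // feature points already marked as elimination points
    const int rows = static_cast<int>(grid.size());
    const int cols = static_cast<int>(grid[0].size());
    Px a = start;            // current P(A, j)
    // "1-circle layer": 8-neighborhood inside the image, minus already-marked pixels.
    auto layer = [&](Px c) {
        std::vector<Px> out;
        for (int dr = -1; dr <= 1; ++dr)
            for (int dc = -1; dc <= 1; ++dc) {
                if (dr == 0 && dc == 0) continue;
                Px p{c.first + dr, c.second + dc};
                if (p.first < 0 || p.first >= rows ||
                    p.second < 0 || p.second >= cols) continue;
                if (marked.count(p)) continue;
                out.push_back(p);
            }
        return out;
    };
    while (cancelled.size() < features.size()) {
        auto nb = layer(a);                        // S302
        if (nb.empty()) break;                     // guard: walk boxed in (not covered by the patent text)
        Px b = nb[0];                              // neighbor with value closest to pv(A, j)
        for (Px p : nb)
            if (std::abs(grid[p.first][p.second] - grid[a.first][a.second]) <
                std::abs(grid[b.first][b.second] - grid[a.first][a.second]))
                b = p;
        auto nb2 = layer(b);                       // S303 / S304
        bool hasFeature = false;
        for (Px p : nb2)
            if (features.count(p)) hasFeature = true;
        if (!hasFeature) {
            marked.insert(b);                      // mark P(B, j) as a sub-elimination pixel
            a = b;
            continue;
        }
        Px m0 = nb2[0];                            // smallest pixel value in b's layer -> P(M0, j)
        for (Px p : nb2)
            if (grid[p.first][p.second] < grid[m0.first][m0.second]) m0 = p;
        marked.insert(m0);
        for (Px p : nb2)
            if (features.count(p)) cancelled.insert(p);
        a = m0;                                    // new P(A, j)
    }
    return marked;                                 // S305 (pixel replacement) handled separately
}
```

The loop terminates because every iteration marks one previously unmarked pixel, so the marked set grows monotonically until every feature point has been cancelled or no unmarked neighbor remains.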
The beneficial effects of this step are: in multi-image three-dimensional reconstruction, the visual information that different images capture of the same scene or object always differs under the influence of the viewing angles and illumination conditions at shooting time. If three-dimensional reconstruction is performed directly on such differing images, the inconsistency of brightness and color among them degrades feature point matching and the precision of the generated point cloud data, and in turn the quality of the three-dimensional reconstruction model.
Further, the method for performing pixel replacement on the sub-elimination pixels specifically comprises: recording the number of all sub-elimination pixels as T, representing the k-th of the T sub-elimination pixels by dp(k), where k is a serial number, k=1, 2, …, T;
Defining the replacement algorithm as: in img(j), screening out the feature point closest to dp(k) and recording it as chp(C, j); recording the Chebyshev distance (chessboard distance; because image pixels are arranged vertically or horizontally, the chessboard distance is taken as the distance reference between pixels) between dp(k) and chp(C, j) as P(k); and modifying the pixel value of dp(k) to CirA(P(k)); C is a serial number, C ∈ [1, R(j)];
wherein CirA(P(k)) is equal to the average of the pixel values of all pixels within the P(k)-circle layer of dp(k);
In the replacement algorithm, k is traversed from k=1 to k=T, thereby completing the pixel value modification of dp(1), dp(2), …, dp(T).
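The two quantities the replacement algorithm relies on can be sketched as follows: the Chebyshev (chessboard) distance P(k), and CirA(P(k)) as the mean pixel value over the neighborhood of dp(k) within that distance. The function names (chebyshev, ringMean) and the row-major image layout are illustrative assumptions, not taken from the patent.

```cpp
#include <algorithm>
#include <cstdlib>
#include <vector>

// Chebyshev (chessboard) distance between two pixel coordinates.
int chebyshev(int r1, int c1, int r2, int c2) {
    return std::max(std::abs(r1 - r2), std::abs(c1 - c2));
}

// Mean pixel value at Chebyshev distance <= radius from (r, c),
// excluding the center pixel itself; img is row-major with the given width.
double ringMean(const std::vector<int>& img, int width, int r, int c, int radius) {
    long sum = 0;
    int count = 0;
    const int height = static_cast<int>(img.size()) / width;
    for (int dr = -radius; dr <= radius; ++dr)
        for (int dc = -radius; dc <= radius; ++dc) {
            if (dr == 0 && dc == 0) continue;      // exclude dp(k) itself
            int rr = r + dr, cc = c + dc;
            if (rr < 0 || rr >= height || cc < 0 || cc >= width) continue;
            sum += img[rr * width + cc];
            ++count;
        }
    return count ? static_cast<double>(sum) / count : 0.0;
}
```

With these helpers, the replacement loop amounts to: for each dp(k), find the nearest feature point with chebyshev, then overwrite dp(k)'s value with ringMean at that radius.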
Further, the method for acquiring the first pixel specifically comprises: screening out all pixels satisfying a first condition from the N(j) pixels in img(j), and arbitrarily selecting one of the pixels satisfying the first condition as the first pixel;
If no pixel in img(j) satisfies the first condition, taking the pixel with the largest pixel value in img(j) as the first pixel;
Wherein a pixel satisfying the first condition means: with p(X, j) representing any one of the N(j) pixels in img(j), if the Chebyshev distance between p(X, j) and any one of the feature points is larger than R(j), then p(X, j) is called a pixel satisfying the first condition.
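The first condition can be sketched as a predicate over one pixel. Two assumptions are made that the translated text leaves ambiguous: "any one of the feature points" is read here as "every feature point", and the threshold R(j) is the feature point count, as the text states. The name satisfiesFirstCondition is illustrative.

```cpp
#include <algorithm>
#include <cstdlib>
#include <utility>
#include <vector>

using Pt = std::pair<int, int>; // (row, col)

// True when the pixel's Chebyshev distance to every feature point exceeds
// R(j), taken here as the number of feature points in img(j).
bool satisfiesFirstCondition(Pt p, const std::vector<Pt>& features) {
    const int R = static_cast<int>(features.size()); // R(j)
    for (const Pt& f : features) {
        int d = std::max(std::abs(p.first - f.first),
                         std::abs(p.second - f.second));
        if (d <= R) return false; // too close to some feature point
    }
    return true;
}
```

The first pixel is then an arbitrary element of the qualifying set, falling back to the maximum-valued pixel when the set is empty.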
Further, the 1-circle layer of P(A, j) is defined as follows: drawing a circle U(A, j) with P(A, j) as the center and the side length of 1 pixel as the radius; taking all pixels located inside the circle U(A, j) and all pixels intersecting the circumference of the circle U(A, j) as the circle layer pixels of P(A, j); and removing the sub-elimination pixels and P(A, j) itself from the circle layer pixels of P(A, j), the remaining circle layer pixels forming the 1-circle layer of P(A, j).
Further, the 1-circle layer of P(B, j) is defined as follows: drawing a circle U(B, j) with P(B, j) as the center and the side length of 1 pixel as the radius; taking all pixels located inside the circle U(B, j) and all pixels intersecting the circumference of the circle U(B, j) as the circle layer pixels of P(B, j); and removing the sub-elimination pixels and P(B, j) itself from the circle layer pixels of P(B, j), the remaining circle layer pixels forming the 1-circle layer of P(B, j).
Further, the P(k)-circle layer of dp(k) is defined as follows: drawing a circle U(k) with dp(k) as the center and the side length of P(k) pixels as the radius; taking all pixels located inside the circle U(k) and all pixels intersecting the circumference of the circle U(k) as the circle layer pixels of dp(k); and removing the sub-elimination pixels and dp(k) itself from the circle layer pixels of dp(k), the remaining circle layer pixels forming the P(k)-circle layer of dp(k).
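The three definitions above share one construction, which can be sketched as a single helper: a square neighborhood of a given radius (the pixels inside or touching the circle), minus the center and minus an exclusion set standing for the already-marked sub-elimination pixels. All names here are illustrative, not from the patent.

```cpp
#include <set>
#include <utility>
#include <vector>

using Pixel = std::pair<int, int>; // (row, col)

// Circle layer of `center` at the given radius inside a rows x cols image:
// every in-bounds pixel within Chebyshev distance <= radius, excluding the
// center itself and any pixel in `eliminated` (the sub-elimination pixels).
std::vector<Pixel> circleLayer(Pixel center, int radius, int rows, int cols,
                               const std::set<Pixel>& eliminated) {
    std::vector<Pixel> layer;
    for (int dr = -radius; dr <= radius; ++dr)
        for (int dc = -radius; dc <= radius; ++dc) {
            if (dr == 0 && dc == 0) continue;           // exclude the center
            Pixel p{center.first + dr, center.second + dc};
            if (p.first < 0 || p.first >= rows ||
                p.second < 0 || p.second >= cols) continue; // stay inside img(j)
            if (eliminated.count(p)) continue;           // drop sub-elimination pixels
            layer.push_back(p);
        }
    return layer;
}
```

With radius 1 this yields the 1-circle layers of P(A, j) and P(B, j); with radius P(k) it yields the P(k)-circle layer of dp(k).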
Optionally, in S400, the method for completing three-dimensional reconstruction based on the plurality of second images is: outputting a three-dimensional model from the plurality of second images through the PMVS algorithm.
The beneficial effects of the invention are as follows: the method improves the precision of the model output by three-dimensional reconstruction and ensures that the visual information of the heterogeneous images is accurately reflected in the three-dimensional model. By performing elimination processing on pixels, it reduces the degradation of reconstruction quality caused by inconsistency among the heterogeneous images; by optimizing the heterogeneous images before they are used as model input, it improves the precision of the output model and can substantially increase the reconstruction speed of large-scale or complex three-dimensional models.
Drawings
The above and other features of the present invention will become more apparent from the following detailed description of embodiments given in conjunction with the accompanying drawings, in which like reference characters designate like or similar elements. The drawings described below are merely some examples of the present invention, and other drawings can be obtained from them by those of ordinary skill in the art without inventive effort. In the drawings:
fig. 1 is a flow chart of a three-dimensional reconstruction method based on heterologous image matching.
Detailed Description
The conception, specific structure, and technical effects of the present application will be clearly and completely described below with reference to the embodiments and the drawings, so that its objects, aspects, and effects can be fully understood. It should be noted that, where no conflict arises, the embodiments of the present application and the features in those embodiments may be combined with each other.
In the description of the present invention, "several" means one or more, and "a plurality of" means two or more; "greater than", "less than", "exceeding", and the like are understood to exclude the stated number, while "above", "below", "within", and the like are understood to include it. The terms "first" and "second" are used only to distinguish technical features and should not be construed as indicating or implying relative importance, the number of the indicated technical features, or their precedence.
Referring to fig. 1, which shows a flow chart of a three-dimensional reconstruction method based on heterogeneous image matching according to the present invention, a three-dimensional reconstruction method based on heterogeneous image matching according to an embodiment of the present invention is described below with reference to fig. 1.
The invention provides a three-dimensional reconstruction method based on heterogeneous image matching, which comprises the following steps:
S100, acquiring a plurality of heterogeneous images, and performing image preprocessing on the heterogeneous images to obtain a plurality of first images;
S200, extracting and marking characteristic points of each first image respectively;
S300, performing elimination processing on the plurality of first images respectively based on the feature points to obtain a plurality of second images;
And S400, completing three-dimensional reconstruction based on the plurality of second images.
Further, in S100, the plurality of heterogeneous images refer to a plurality of images from different types of photographic equipment or obtained under different photographing conditions.
Further, in S100, the method for performing image preprocessing on the plurality of heterogeneous images to obtain a plurality of first images specifically comprises: performing denoising, color correction, and geometric distortion correction on the plurality of heterogeneous images in sequence, and saving each preprocessed heterogeneous image as a first image (a preprocessed heterogeneous image is one on which denoising, color correction, and geometric distortion correction have been completed).
Specifically, the geometric distortion correction is perspective transformation correction.
Specifically, the algorithm for extracting feature points (or feature pixels) of each first image with the SIFT algorithm is as follows:
// C++ (OpenCV; with OpenCV 3.x contrib, SIFT lives in the xfeatures2d module)
Ptr<Feature2D> sift = xfeatures2d::SIFT::create();
std::vector<KeyPoint> keypoints;
sift->detect(HDImage_gray, keypoints); // HDImage_gray represents the grayscaled heterogeneous image
Further, in S300, the method for performing elimination processing on the plurality of first images based on the feature points to obtain a plurality of second images specifically comprises: recording the number of first images as M, representing the j-th of the M first images by img(j), where j is a serial number; performing elimination processing on img(j) from j=1 to j=M, thereby completing the elimination processing of the M first images; and saving the M first images after elimination processing as M second images.
Further, the method for performing elimination processing on img(j) specifically comprises the following steps:
S301, recording that img(j) contains N(j) pixels, wherein each pixel corresponds to a pixel value; p(i, j) represents the i-th pixel among the N(j) pixels, and pv(i, j) represents the pixel value corresponding to p(i, j); i is a serial number, i=1, 2, …, N(j);
The img(j) contains R(j) feature points;
Acquiring a first pixel, representing the first pixel by P(A, j), wherein A is a serial number, A ∈ [1, N(j)], and turning to S302; (because the first pixel is one of the N(j) pixels, it has a unique serial number identifying which of the N(j) pixels it is; A in P(A, j) is that unique serial number);
S302, selecting the pixel whose pixel value is closest to pv(A, j) in the 1-circle layer of P(A, j), representing that pixel by P(B, j), and turning to S303; wherein pv(A, j) represents the pixel value corresponding to P(A, j), B is a serial number, B ∈ [1, N(j)];
S303, if the 1-circle layer of P(B, j) does not contain any feature point, marking P(B, j) as a sub-elimination pixel, simultaneously updating P(A, j) to P(B, j) (namely taking the current P(B, j) as the new P(A, j)), and turning to S302;
If the 1-circle layer of P(B, j) contains any feature point, selecting the pixel with the smallest pixel value in the 1-circle layer of P(B, j), representing that pixel by P(M0, j), marking P(M0, j) as a sub-elimination pixel, marking all feature points in the 1-circle layer of P(B, j) as elimination points, and turning to S304; pv(M0, j) represents the pixel value corresponding to P(M0, j);
S304, if all of the R(j) feature points have been marked as elimination points, turning to S305; otherwise, updating P(A, j) to P(M0, j) (namely taking the current P(M0, j) as the new P(A, j)) and turning to S302;
S305, performing pixel replacement on the sub-elimination pixels.
Further, the method for performing pixel replacement on the sub-elimination pixels specifically comprises: recording the number of all sub-elimination pixels as T, representing the k-th of the T sub-elimination pixels by dp(k), where k is a serial number, k=1, 2, …, T;
Defining the replacement algorithm as: in img(j), screening out the feature point closest to dp(k) and recording it as chp(C, j); recording the Chebyshev distance (chessboard distance; because image pixels are arranged vertically or horizontally, the chessboard distance is taken as the distance reference between pixels) between dp(k) and chp(C, j) as P(k); and modifying the pixel value of dp(k) to CirA(P(k)); C is a serial number, C ∈ [1, R(j)];
wherein CirA(P(k)) is equal to the average of the pixel values of all pixels within the P(k)-circle layer of dp(k);
In the replacement algorithm, k is traversed from k=1 to k=T, thereby completing the pixel value modification of dp(1), dp(2), …, dp(T).
Further, the method for acquiring the first pixel specifically comprises: screening out all pixels satisfying a first condition from the N(j) pixels in img(j), and arbitrarily selecting one of the pixels satisfying the first condition as the first pixel;
If no pixel in img(j) satisfies the first condition, taking the pixel with the largest pixel value in img(j) as the first pixel;
Wherein a pixel satisfying the first condition means: with p(X, j) representing any one of the N(j) pixels in img(j), if the Chebyshev distance between p(X, j) and any one of the feature points is larger than R(j), then p(X, j) is called a pixel satisfying the first condition.
Further, the 1-circle layer of P(A, j) is defined as follows: drawing a circle U(A, j) with P(A, j) as the center and the side length of 1 pixel as the radius; taking all pixels located inside the circle U(A, j) and all pixels intersecting the circumference of the circle U(A, j) as the circle layer pixels of P(A, j); and removing the sub-elimination pixels and P(A, j) itself from the circle layer pixels of P(A, j), the remaining circle layer pixels forming the 1-circle layer of P(A, j).
The invention provides a three-dimensional reconstruction method based on heterogeneous image matching, which comprises the steps of: obtaining a plurality of heterogeneous images; performing image preprocessing on the heterogeneous images to obtain a plurality of first images; extracting and marking feature points on each first image; performing elimination processing on the plurality of first images based on the feature points to obtain a plurality of second images; and completing three-dimensional reconstruction based on the plurality of second images. The method improves the precision of the model output by three-dimensional reconstruction and ensures that the visual information of the heterogeneous images is accurately reflected in the three-dimensional model. By performing elimination processing on pixels, it reduces the degradation of reconstruction quality caused by inconsistency among the heterogeneous images; by optimizing the heterogeneous images before they are used as model input, it improves the precision of the output model and can substantially increase the reconstruction speed of large-scale or complex three-dimensional models. Although the present invention has been described in considerable detail and with particularity with respect to several described embodiments, it is not intended to be limited to any such detail or embodiment or any particular embodiment; the description is to be construed as covering the intended scope of the invention, including equivalent modifications that the inventors may not have presently contemplated.
Claims (3)
1. A three-dimensional reconstruction method based on heterogeneous image matching, the method comprising the steps of:
S100, acquiring a plurality of heterogeneous images, and performing image preprocessing on the heterogeneous images to obtain a plurality of first images;
S200, extracting and marking characteristic points of each first image respectively;
S300, performing elimination processing on the plurality of first images respectively based on the feature points to obtain a plurality of second images;
S400, completing three-dimensional reconstruction based on the plurality of second images;
Wherein, in S300, the method for performing elimination processing on the plurality of first images based on the feature points to obtain the plurality of second images specifically comprises: recording the number of first images as M, representing the j-th of the M first images by img(j), where j is a serial number; performing elimination processing on img(j) from j=1 to j=M, thereby completing the elimination processing of the M first images; and saving the M first images after elimination processing as M second images;
The method for performing elimination processing on img(j) comprises the following steps:
S301, recording that img(j) contains N(j) pixels, wherein each pixel corresponds to a pixel value; p(i, j) represents the i-th pixel among the N(j) pixels, and pv(i, j) represents the pixel value corresponding to p(i, j); the img(j) contains R(j) feature points;
Acquiring a first pixel, representing the first pixel by P(A, j), and turning to S302;
S302, selecting the pixel whose pixel value is closest to pv(A, j) in the 1-circle layer of P(A, j), representing that pixel by P(B, j), and turning to S303; wherein pv(A, j) represents the pixel value corresponding to P(A, j);
S303, if the 1-circle layer of P(B, j) does not contain any feature point, marking P(B, j) as a sub-elimination pixel, simultaneously updating P(A, j) to P(B, j), and turning to S302;
If the 1-circle layer of P(B, j) contains any feature point, selecting the pixel with the smallest pixel value in the 1-circle layer of P(B, j), representing that pixel by P(M0, j), marking P(M0, j) as a sub-elimination pixel, marking all feature points in the 1-circle layer of P(B, j) as elimination points, and turning to S304; pv(M0, j) represents the pixel value corresponding to P(M0, j);
S304, if all of the R(j) feature points have been marked as elimination points, turning to S305; otherwise, updating P(A, j) to P(M0, j) and turning to S302;
S305, performing pixel replacement on the sub-elimination pixels;
The method for performing pixel replacement on the sub-elimination pixels specifically comprises: recording the number of all sub-elimination pixels as T, and representing the k-th of the T sub-elimination pixels by dp(k);
Defining the replacement algorithm as: in img(j), screening out the feature point closest to dp(k) and recording it as chp(C, j); recording the Chebyshev distance between dp(k) and chp(C, j) as P(k); and modifying the pixel value of dp(k) to CirA(P(k));
wherein CirA(P(k)) is equal to the average of the pixel values of all pixels within the P(k)-circle layer of dp(k);
In the replacement algorithm, k is traversed from k=1 to k=T, thereby completing the pixel value modification of dp(1), dp(2), …, dp(T);
The method for acquiring the first pixel specifically comprises: screening out all pixels satisfying a first condition from the N(j) pixels in img(j), and arbitrarily selecting one of the pixels satisfying the first condition as the first pixel;
If no pixel in img(j) satisfies the first condition, taking the pixel with the largest pixel value in img(j) as the first pixel;
Wherein a pixel satisfying the first condition means: with p(X, j) representing any one of the N(j) pixels in img(j), if the Chebyshev distance between p(X, j) and any one of the feature points is larger than R(j), then p(X, j) is called a pixel satisfying the first condition;
The 1-circle layer of P(A, j) is defined as follows: drawing a circle U(A, j) with P(A, j) as the center and the side length of 1 pixel as the radius; taking all pixels located inside the circle U(A, j) and all pixels intersecting the circumference of the circle U(A, j) as the circle layer pixels of P(A, j); and removing the sub-elimination pixels and P(A, j) itself from the circle layer pixels of P(A, j), the remaining circle layer pixels forming the 1-circle layer of P(A, j);
The 1-circle layer of P(B, j) is defined as follows: drawing a circle U(B, j) with P(B, j) as the center and the side length of 1 pixel as the radius; taking all pixels located inside the circle U(B, j) and all pixels intersecting the circumference of the circle U(B, j) as the circle layer pixels of P(B, j); and removing the sub-elimination pixels and P(B, j) itself from the circle layer pixels of P(B, j), the remaining circle layer pixels forming the 1-circle layer of P(B, j);
The P(k)-circle layer of dp(k) is defined as follows: drawing a circle U(k) with dp(k) as the center and the side length of P(k) pixels as the radius; taking all pixels located inside the circle U(k) and all pixels intersecting the circumference of the circle U(k) as the circle layer pixels of dp(k); and removing the sub-elimination pixels and dp(k) itself from the circle layer pixels of dp(k), the remaining circle layer pixels forming the P(k)-circle layer of dp(k).
2. The three-dimensional reconstruction method based on heterogeneous image matching according to claim 1, wherein in S100, the plurality of heterogeneous images refer to a plurality of images from different types of photographic equipment or obtained under different photographing conditions.
3. The three-dimensional reconstruction method based on heterogeneous image matching according to claim 1, wherein in step S100, the image preprocessing performed on the plurality of heterogeneous images to obtain the plurality of first images specifically comprises: performing denoising, color correction and geometric distortion correction on the plurality of heterogeneous images in sequence, and saving each preprocessed heterogeneous image as a first image.
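The sequential pipeline of claim 3 (denoise, then color-correct, then correct geometric distortion) can be illustrated with a deliberately simplified, dependency-free sketch: a 3x3 median filter stands in for the denoiser, gray-world scaling for color correction, and distortion correction is an identity placeholder, since real undistortion needs per-camera calibration parameters the claim does not specify. All function names are illustrative, not from the patent:

```python
import statistics

def denoise(img):
    # Toy denoiser: 3x3 median filter per channel.
    # `img` is a 2-D list of (r, g, b) tuples.
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            window = [img[yy][xx]
                      for yy in range(max(0, y - 1), min(h, y + 2))
                      for xx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = tuple(statistics.median(p[c] for p in window)
                              for c in range(3))
    return out

def color_correct(img):
    # Gray-world color correction: rescale each channel so its mean
    # matches the image's overall mean intensity.
    pixels = [p for row in img for p in row]
    means = [sum(p[c] for p in pixels) / len(pixels) for c in range(3)]
    target = sum(means) / 3
    scale = [target / m if m else 1.0 for m in means]
    return [[tuple(min(255, round(p[c] * scale[c])) for c in range(3))
             for p in row] for row in img]

def undistort(img):
    # Identity placeholder: real geometric distortion correction needs
    # per-camera calibration (intrinsics and distortion coefficients).
    return img

def preprocess(images):
    # Claim 3's pipeline, applied in order to every heterogeneous image;
    # each result plays the role of a "first image".
    return [undistort(color_correct(denoise(im))) for im in images]
```

On a uniform (100, 50, 150) test image, for instance, gray-world correction maps every pixel to the overall mean (100, 100, 100).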
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410308781.4A CN117911631B (en) | 2024-03-19 | 2024-03-19 | Three-dimensional reconstruction method based on heterogeneous image matching |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117911631A (en) | 2024-04-19
CN117911631B true CN117911631B (en) | 2024-05-28 |
Family
ID=90697488
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410308781.4A Active CN117911631B (en) | 2024-03-19 | 2024-03-19 | Three-dimensional reconstruction method based on heterogeneous image matching |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117911631B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115330929A (en) * | 2022-03-30 | 2022-11-11 | Tudou Data Technology Group Co., Ltd. | Multi-view three-dimensional reconstruction method and device
WO2023078335A1 (en) * | 2021-11-04 | 2023-05-11 | Sony Group Corporation | Three-dimensional reconstruction method and system, and storage medium
CN116310105A (en) * | 2023-03-09 | 2023-06-23 | Guangzhou Wojia Technology Co., Ltd. | Object three-dimensional reconstruction method, device, equipment and storage medium based on multiple views
CN116704111A (en) * | 2022-12-08 | 2023-09-05 | Honor Device Co., Ltd. | Image processing method and apparatus
CN116912405A (en) * | 2023-07-07 | 2023-10-20 | Shanghai University | Three-dimensional reconstruction method and system based on improved MVSNet
CN116958434A (en) * | 2023-07-25 | 2023-10-27 | Chongqing University | Multi-view three-dimensional reconstruction method, measurement method and system
CN117036737A (en) * | 2023-08-17 | 2023-11-10 | Bohai University | Feature extraction and matching method based on information entropy, GMS and LC saliency detection
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022052032A1 (en) * | 2020-09-11 | 2022-03-17 | BOE Technology Group Co., Ltd. | Image segmentation method and device, and three-dimensional reconstruction method and device for image
Non-Patent Citations (1)
Title |
---|
Optimization simulation of feature point matching in three-dimensional reconstruction of multi-vision animation images; Li Jinming; Computer Simulation; 2017-09-15 (No. 09); full text *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109165680B (en) | Single-target object dictionary model improvement method in indoor scene based on visual SLAM | |
CN111105452B (en) | Binocular vision-based high-low resolution fusion stereo matching method | |
CN112929626B (en) | Three-dimensional information extraction method based on smartphone image | |
CN110517348B (en) | Target object three-dimensional point cloud reconstruction method based on image foreground segmentation | |
US20220327710A1 (en) | Image Segmentation Method and Apparatus and Image Three-Dimensional Reconstruction Method and Apparatus | |
CN113723317A (en) | Reconstruction method and device of 3D face, electronic equipment and storage medium | |
CN114549669B (en) | Color three-dimensional point cloud acquisition method based on image fusion technology | |
CN113706607B (en) | Subpixel positioning method, computer equipment and device based on circular array diagram | |
CN111739071A (en) | Rapid iterative registration method, medium, terminal and device based on initial value | |
CN114463521A (en) | Building target point cloud rapid generation method for air-ground image data fusion | |
CN112802186B (en) | Dynamic scene real-time three-dimensional reconstruction method based on binarization characteristic coding matching | |
CN117911631B (en) | Three-dimensional reconstruction method based on heterogeneous image matching | |
CN113935917A (en) | Optical remote sensing image thin cloud removing method based on cloud picture operation and multi-scale generation countermeasure network | |
CN117501313A (en) | Hair rendering system based on deep neural network | |
CN112465984A (en) | Monocular camera sequence image three-dimensional reconstruction method based on double-layer filtering | |
CN113902847B (en) | Monocular depth image pose optimization method based on three-dimensional feature constraint | |
CN109377524B (en) | Method and system for recovering depth of single image | |
CN116704123A (en) | Three-dimensional reconstruction method combined with image main body extraction technology | |
CN116012449A (en) | Image rendering method and device based on depth information | |
CN115880424A (en) | Three-dimensional reconstruction method and device, electronic equipment and machine-readable storage medium | |
CN115239559A (en) | Depth map super-resolution method and system for fusion view synthesis | |
CN115293995A (en) | Point cloud point-by-point denoising method based on Transformer | |
CN114049423A (en) | Automatic realistic three-dimensional model texture mapping method | |
CN112949504A (en) | Stereo matching method, device, equipment and storage medium | |
CN112700504A (en) | Parallax measurement method of multi-view telecentric camera |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||