CN111354075A - Foreground reduction interference extraction method in three-dimensional reconstruction


Info

Publication number
CN111354075A
CN111354075A (application CN202010122413.2A)
Authority
CN
China
Prior art keywords
depth map
image
point
depth
foreground
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010122413.2A
Other languages
Chinese (zh)
Inventor
纪刚
杜靖
安帅
杨丰拓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Lianhe Chuangzhi Technology Co ltd
Original Assignee
Qingdao Lianhe Chuangzhi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Lianhe Chuangzhi Technology Co ltd
Priority to CN202010122413.2A
Publication of CN111354075A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T7/155 Segmentation; Edge detection involving morphological operators
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/579 Depth or shape recovery from multiple images from motion
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20021 Dividing image into blocks, subimages or windows
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Abstract

The invention belongs to the technical field of foreground image processing and relates to a foreground reduction interference extraction method in three-dimensional reconstruction. The process comprises the following steps: step one, calculating the depth map of each image; step two, segmenting the depth map into blocks according to the depth values; step three, deleting image blocks with small areas and filling fine gaps in the depth map; step four, counting the depth values and deleting the pixels whose depth value exceeds twice the average depth value; step five, calculating the three-dimensional point cloud with a depth map fusion algorithm. The method effectively reduces the interference of background objects on the foreground during three-dimensional reconstruction, captures and deletes the small spurious image blocks mixed into the foreground and repairs the resulting gaps, reduces erroneous points in the three-dimensional point cloud, and narrows the modeling range, which makes the foreground object in the image easier to model, observe and analyze; the method has wide application scenarios.

Description

Foreground reduction interference extraction method in three-dimensional reconstruction
Technical field:
The invention belongs to the technical field of foreground image processing and relates to a foreground extraction method for use when a computer performs motion reconstruction on captured images, and in particular to a foreground reduction interference extraction method in three-dimensional reconstruction.
Background art:
At present, motion reconstruction is required when processing surveillance images. The goal of structure from motion (SfM) is to automatically recover the camera motion and the scene structure from two or more images of a scene; it is a self-calibrating technique that can automatically accomplish camera tracking and motion matching. A common motion-reconstruction solution is to compute the camera pose of each image, then compute the depth map of each image, and finally recover the three-dimensional scene point cloud from the depth maps.
However, when an image is captured, the background is captured along with the object to be modeled. The background not only increases the amount of computation; it is also generally far from the object to be modeled, which leads to problems such as an excessively large modeling range and an irregular model shape. Meanwhile, the background information in the image is not rich enough, so the modeling quality of the background part is poor. In general, after scene reconstruction is completed, the model must be cropped and the background part deleted.
In the prior art, Chinese patent publication No. CN110717417A discloses a depth-map human foreground extraction method and a computer-readable storage medium. The method includes: acquiring a face detection frame or tracking frame in the depth map of the current frame; taking the central point of the face detection frame or tracking frame as a seed point and adding it to a seed-point queue; judging whether the seed-point queue is empty; if it is empty, obtaining the human foreground from the pixels in the human-foreground point set; if not, taking the current head of the seed-point queue and adding it to the human-foreground point set; acquiring a neighborhood of preset size around the head point from the depth map of the current frame; determining the threshold corresponding to each pixel in the neighborhood; if the absolute difference between the value of a pixel in the neighborhood and the value of the head point is smaller than the threshold corresponding to that pixel, adding the pixel to the seed-point queue; and returning to the step of judging whether the seed-point queue is empty. This method can effectively extract the human foreground and remove the ground part. Chinese patent publication No. CN108447068A discloses an automatic trimap generation method comprising the following steps: S1, acquiring an input image and scanning it continuously with a window; S2, extracting the image edges within each window with an edge-detection algorithm to obtain an edge image of the input image; S3, obtaining an incompletely labeled input image for the image-segmentation algorithm from the edge image; S4, obtaining a segmentation-result image with the image-segmentation algorithm and the incompletely labeled input image; and S5, obtaining a trimap from the segmentation-result image.
At present, when modeling the acquired images, the background increases the amount of modeling computation and also causes problems such as a large modeling range, a small proportion of the image occupied by the monitored foreground, and an irregular model.
Summary of the invention:
The invention aims to overcome the shortcomings of existing foreground image processing. Aiming at the defects that existing modeling methods retain a large amount of background information during motion reconstruction, that background interference is significant, and that existing foreground extraction cannot combine image optimization with gap filling, the invention provides a foreground reduction interference extraction method in three-dimensional reconstruction.
To achieve this purpose, the foreground reduction interference extraction method in three-dimensional reconstruction according to the invention comprises the following process steps:
step one, calculating an image depth map;
step two, segmenting the depth map into blocks according to the depth values;
step three, deleting image blocks with small areas and filling fine gaps in the depth map;
step four, counting the depth values and deleting the pixels whose depth value is more than twice the average depth value;
step five, calculating the three-dimensional point cloud with a depth map fusion algorithm.
The specific process of the first step of the invention is as follows: perform feature extraction, feature matching and bundle adjustment on the captured images with a motion-reconstruction algorithm, compute the position of each image in space, and obtain the depth map of each image after dense matching.
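The sparse-matching stage of this step can be illustrated with a short sketch. This is a minimal Python example assuming OpenCV (cv2) and grayscale input images; the function name match_image_pair is purely illustrative, and the subsequent pose computation, bundle adjustment and dense matching are left to whatever SfM pipeline is used, since the patent does not name a specific one.

    import cv2
    import numpy as np

    def match_image_pair(img_a, img_b):
        """Sparse feature extraction and matching for one image pair."""
        sift = cv2.SIFT_create()
        kp_a, des_a = sift.detectAndCompute(img_a, None)
        kp_b, des_b = sift.detectAndCompute(img_b, None)

        # Lowe's ratio test rejects ambiguous correspondences.
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        knn = matcher.knnMatch(des_a, des_b, k=2)
        good = [p[0] for p in knn
                if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]

        # Pixel coordinates of the matched keypoints in both images.
        pts_a = np.float32([kp_a[m.queryIdx].pt for m in good])
        pts_b = np.float32([kp_b[m.trainIdx].pt for m in good])
        return pts_a, pts_b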
The specific process of the second step of the invention is as follows: process the depth map with a flood-fill algorithm. First take the point at the upper-left corner of the image as the seed point; let the depth value of the seed point be V and the depth values of the points adjacent to the seed point be V_i (i = 1, 2, ...). When V × 0.9 < V_i < V × 1.1, the adjacent point belongs to the same image block as the seed point; the adjacent point becomes a new seed point and its own adjacent points are judged in turn. If V × 0.9 < V_i < V × 1.1 does not hold, the adjacent point and the seed point belong to different image blocks. When no point adjacent to any seed point still satisfies the condition, all points satisfying V × 0.9 < V_i < V × 1.1 form one image block and the current region is fully segmented; then the upper-left point among the not-yet-segmented points is selected as a new seed point for a new region segmentation, until all regions of the image have been segmented.
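A minimal sketch of this flood-fill segmentation, in Python with NumPy, assuming the depth map is a 2-D float array: the tolerance implements the V × 0.9 < V_i < V × 1.1 rule between each point being expanded and its four neighbours, and the row-major scan makes the upper-left unsegmented point the next seed, as the patent specifies.

    from collections import deque
    import numpy as np

    def segment_depth_map(depth, tol=0.1):
        """Flood-fill a depth map into blocks of locally similar depth."""
        h, w = depth.shape
        labels = np.zeros((h, w), dtype=np.int32)  # 0 = not yet segmented
        next_label = 0
        for y0 in range(h):              # row-major scan: the upper-left
            for x0 in range(w):          # unsegmented point seeds next block
                if labels[y0, x0]:
                    continue
                next_label += 1
                labels[y0, x0] = next_label
                queue = deque([(y0, x0)])
                while queue:
                    y, x = queue.popleft()
                    v = depth[y, x]
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and not labels[ny, nx]
                                and v * (1 - tol) < depth[ny, nx] < v * (1 + tol)):
                            labels[ny, nx] = next_label
                            queue.append((ny, nx))
        return labels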
The specific process of the third step of the invention is as follows: let W and H be the image width and height respectively. The depth map has been divided into a number of image blocks; count the number of pixels of each image block as S_i, the total number of pixels of the image being N_s (N_s = W × H). When S_i < W × H × 0.04, the image block is deleted. Deleting the isolated small image blocks leaves gaps L_i (i = 1, 2, 3, ...) in the depth map. First scan the generated gaps column by column: when the height of a gap is less than 0.1H and its two ends belong to the same image block, that column of the gap is filled, the pixels inside the gap being linearly interpolated from the depth values at its two ends. Then scan the generated gaps row by row: when the width of a gap is less than 0.1W and its two ends belong to the same image block, that row of the gap is filled. This finally yields the filled depth map.
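The deletion and gap filling of this step can be sketched as follows, under stated assumptions: labels is the block image produced by the segmentation sketch above, deleted pixels are marked with depth 0, and the bounds follow the patent's S_i < W × H × 0.04 area threshold and the 0.1H / 0.1W gap limits.

    import numpy as np

    def remove_small_blocks_and_fill(depth, labels, area_frac=0.04, gap_frac=0.1):
        """Delete blocks below the area threshold, then fill thin gaps."""
        h, w = depth.shape
        out = depth.copy()
        keep = labels.copy()

        # Delete image blocks with S_i < W*H*area_frac; 0 marks deleted pixels.
        ids, counts = np.unique(labels, return_counts=True)
        for i, c in zip(ids, counts):
            if c < area_frac * h * w:
                out[labels == i] = 0.0
                keep[labels == i] = 0

        def fill_line(vals, labs, max_gap):
            # vals/labs are 1-D views into out/keep, so filling is in place.
            n, i = len(vals), 0
            while i < n:
                if labs[i] == 0:
                    j = i
                    while j < n and labs[j] == 0:
                        j += 1
                    # Fill only short gaps whose two ends share one block.
                    if 0 < i and j < n and labs[i - 1] == labs[j] and (j - i) < max_gap:
                        vals[i:j] = np.linspace(vals[i - 1], vals[j], j - i + 2)[1:-1]
                        labs[i:j] = labs[j]
                    i = j
                else:
                    i += 1

        for x in range(w):   # column scan: gap height bound 0.1*H
            fill_line(out[:, x], keep[:, x], gap_frac * h)
        for y in range(h):   # row scan: gap width bound 0.1*W
            fill_line(out[y, :], keep[y, :], gap_frac * w)
        return out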
The specific process of the fourth step of the invention is as follows: compute the average depth value of the pixels remaining in the filled depth map, and delete the pixels whose depth value exceeds twice the average depth value, obtaining the de-interfered depth map.
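This step reduces to a couple of NumPy operations; the sketch below assumes, as in the earlier sketches, that previously deleted pixels carry depth 0 and are excluded from the average.

    import numpy as np

    def remove_far_pixels(depth):
        """Drop pixels deeper than twice the mean of the valid depths."""
        valid = depth > 0                    # 0 marks already-deleted pixels
        out = depth.copy()
        mean_depth = depth[valid].mean()
        out[depth > 2.0 * mean_depth] = 0.0
        return out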
The specific process of the fifth step of the invention is as follows: with most of the background information and the pixels with erroneous depth values now deleted from the de-interfered depth map, process the de-interfered depth map with a depth map fusion algorithm to generate the three-dimensional point cloud of the foreground object.
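The patent does not name a particular fusion algorithm, so the sketch below shows only its simplest ingredient: back-projecting each cleaned depth map to a world-space point cloud, assuming a pinhole intrinsic matrix K and a world-to-camera pose (R, t) for each view. A real depth-map fusion step would additionally check each point for consistency across neighbouring views before merging.

    import numpy as np

    def backproject_depth_map(depth, K, R, t):
        """Unproject one cleaned depth map to world-space 3-D points."""
        h, w = depth.shape
        ys, xs = np.nonzero(depth > 0)
        z = depth[ys, xs]
        # Homogeneous pixel coordinates, one column per valid pixel.
        pix = np.stack([xs, ys, np.ones_like(xs)]).astype(np.float64)
        cam = np.linalg.inv(K) @ pix * z        # camera-space points
        world = R.T @ (cam - t.reshape(3, 1))   # invert world-to-camera pose
        return world.T                          # N x 3 point cloud

    # Merging all views (a crude fusion without consistency checks):
    # cloud = np.vstack([backproject_depth_map(d, K, R, t)
    #                    for d, (R, t) in zip(depth_maps, poses)])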
Compared with the prior art, the foreground reduction interference extraction method in three-dimensional reconstruction designed here has complete and reasonable steps. It effectively reduces the interference of background objects on the foreground during three-dimensional reconstruction, captures and deletes the small spurious image blocks mixed into the foreground and repairs the resulting gaps, reduces erroneous points in the three-dimensional point cloud, and narrows the modeling range. Its overall performance in foreground image acquisition and processing is good, it makes the foreground object in the image easier to model, observe and analyze, and it has wide application scenarios.
Description of the drawings:
fig. 1 is a schematic block diagram of a specific process flow of a foreground reduction interference extraction method in three-dimensional reconstruction according to the present invention.
Fig. 2 is a schematic diagram of images of the modeled object taken from different angles according to the present invention.
Fig. 3 is a schematic block diagram illustrating the principle of segmentation of a depth map using a flood fill algorithm according to the present invention.
Fig. 4 is a schematic block diagram of a small-area image block for deleting a depth map after segmentation according to the present invention.
Fig. 5 is a schematic block diagram of filling fine gaps in a depth map according to the present invention.
Fig. 6 is a schematic diagram of an original image processed by applying the foreground-reducing interference extraction method in three-dimensional reconstruction according to the present invention.
Fig. 7 is a schematic diagram of a calculated depth map of an original image according to the present invention.
Fig. 8 is a schematic diagram of a calculated post-interference-removal depth map according to the present invention.
Fig. 9 is a schematic diagram of a three-dimensional point cloud model obtained by calculation according to the present invention.
Fig. 10 is a schematic diagram of a generic model generated by using an existing original depth map according to the present invention.
Detailed description of the embodiments:
the invention is further illustrated by the following examples in conjunction with the accompanying drawings.
Example 1:
the method for extracting the foreground degradation interference in the three-dimensional reconstruction, which is related by the embodiment, comprises the following specific process steps:
step one, calculating an image depth map: as shown in Fig. 2, S represents the modeled object and P_i (i = 1, 2, 3) represent the images shot from different angles. Perform feature extraction, feature matching and bundle adjustment on the captured images with a motion-reconstruction algorithm, compute the position of each image in space, and obtain the depth map of each image after dense matching;
step two, segmenting the depth map into blocks according to the depth values: because object surfaces are continuous and the foreground object occupies a large proportion of the image, the depth values of background objects are normally large, so large background blocks can be deleted by deleting points with large depth values. When discrete small image blocks appear in the depth map, the depth values inside them are generally considered to be miscalculated or not to belong to the foreground object, so the image is segmented first and the small blocks are deleted afterwards. Process the depth map with a flood-fill algorithm, as shown in Fig. 3: first take the point at the upper-left corner of the image as the seed point; let the depth value of the seed point be V and the depth values of the points adjacent to it be V_i (i = 1, 2, ...). When V × 0.9 < V_i < V × 1.1, the adjacent point belongs to the same image block as the seed point; it becomes a new seed point and its own adjacent points are judged in turn. If V × 0.9 < V_i < V × 1.1 does not hold, the adjacent point and the seed point belong to different image blocks. When no point adjacent to any seed point still satisfies the condition, all points satisfying V × 0.9 < V_i < V × 1.1 form one image block and the current region is fully segmented; then the upper-left point among the not-yet-segmented points is selected as a new seed point for a new region segmentation, until all regions of the image have been segmented;
step three, deleting image blocks with small areas and filling fine gaps in the depth map: small-area image blocks and fine gaps in the depth map are erroneous and need further processing; let W and H be the image width and height respectively. The depth map has been divided into a number of image blocks, as shown in Fig. 3; count the number of pixels of each image block as S_i, the total number of pixels of the image being N_s (N_s = W × H). When S_i < W × H × 0.04, the image block is deleted, as shown in Fig. 4. Deleting the isolated small image blocks leaves gaps L_i (i = 1, 2, 3) in the depth map. Since the surface of a single object is continuous, when a gap appears inside an image block the pixel values inside the gap are considered lost and must therefore be filled. First scan the generated gaps column by column: when the height of a gap is less than 0.1H and its two ends belong to the same image block, such as L1 in Fig. 5, that column of the gap is filled, the pixels inside the gap being linearly interpolated from the depth values at its two ends. Then scan the generated gaps row by row: when the width of a gap is less than 0.1W and its two ends belong to the same image block, such as L2 in Fig. 5, that row of the gap is filled, finally yielding the filled depth map;
step four, counting the depth values and deleting the pixels whose depth value is more than twice the average depth value: because background objects are far from the foreground object and their depth values are large, the background can be further deleted by deleting the pixels with large depth values. Count the average depth value of the pixels remaining in the filled depth map and delete the pixels whose depth value exceeds twice the average, obtaining the de-interfered depth map;
step five, calculating the three-dimensional point cloud with a depth map fusion algorithm: the preceding steps have deleted most of the background information and the pixels with erroneous depth values from the de-interfered depth map; finally, process the de-interfered depth map with a depth map fusion algorithm to generate the three-dimensional point cloud of the foreground object.
In the existing method, depth map fusion is performed directly after the depth maps are computed. Because the original depth maps contain a large number of erroneous points, the generated point cloud also contains many erroneous points; meanwhile the original images contain a large amount of background information, so the generated point cloud has a large modeling range, a small foreground proportion and an irregular model. With the foreground reduction interference extraction method in three-dimensional reconstruction, the modeling procedure is optimized: the interference of erroneous points is effectively eliminated, the modeling range after three-dimensional reconstruction is small, the foreground proportion is large, and the model is regular and convenient to view.
Example 2:
In this embodiment, the foreground reduction interference extraction method in three-dimensional reconstruction described in embodiment 1 is applied to process specific images, taking large-scene modeling as an example. The specific processing steps are as follows:
(1) calculating the image depth maps: acquire the original images, two of which are shown in Figs. 6(a) and 6(b); compute the depth maps of the original images, obtaining Fig. 7(a) from Fig. 6(a) and Fig. 7(b) from Fig. 6(b);
(2) further process the depth maps of all original images according to steps two to four of embodiment 1 to obtain the de-interfered depth maps: Fig. 7(a) is processed into Fig. 8(a) and Fig. 7(b) into Fig. 8(b);
(3) according to step five of embodiment 1, fuse the depth maps of all original images to obtain the three-dimensional point cloud model shown in Fig. 9, where the region inside the white dashed circle is the foreground object. Comparing the obtained three-dimensional point cloud model with the common model generated from the existing original depth maps (shown in Fig. 10, where the region inside the white dashed circle is again the foreground object) shows that the three-dimensional point cloud model of the invention has a smaller modeling range and a more prominent foreground object.

Claims (6)

1. A foreground reduction interference extraction method in three-dimensional reconstruction, characterized in that the specific process steps are as follows:
step one, calculating an image depth map;
step two, segmenting the depth map into blocks according to the depth values;
step three, deleting image blocks with small areas and filling fine gaps in the depth map;
step four, counting the depth values and deleting the pixels whose depth value is more than twice the average depth value;
step five, calculating the three-dimensional point cloud with a depth map fusion algorithm.
2. The foreground reduction interference extraction method in three-dimensional reconstruction according to claim 1, wherein the specific process of step one is as follows: perform feature extraction, feature matching and bundle adjustment on the captured images with a motion-reconstruction algorithm, compute the position of each image in space, and obtain the depth map of each image after dense matching.
3. The foreground reduction interference extraction method in three-dimensional reconstruction according to claim 2, wherein the specific process of step two is as follows: process the depth map with a flood-fill algorithm. First take the point at the upper-left corner of the image as the seed point; let the depth value of the seed point be V and the depth values of the points adjacent to the seed point be V_i (i = 1, 2, ...). When V × 0.9 < V_i < V × 1.1, the adjacent point belongs to the same image block as the seed point; the adjacent point becomes a new seed point and its own adjacent points are judged in turn. If V × 0.9 < V_i < V × 1.1 does not hold, the adjacent point and the seed point belong to different image blocks. When no point adjacent to any seed point still satisfies the condition, all points satisfying V × 0.9 < V_i < V × 1.1 form one image block and the current region is fully segmented; then the upper-left point among the not-yet-segmented points is selected as a new seed point for a new region segmentation, until all regions of the image have been segmented.
4. The foreground reduction interference extraction method in three-dimensional reconstruction according to claim 3, wherein the specific process of step three is as follows: let W and H be the image width and height respectively. The depth map has been divided into a number of image blocks; count the number of pixels of each image block as S_i, the total number of pixels of the image being N_s (N_s = W × H). When S_i < W × H × 0.04, the image block is deleted. Deleting the isolated small image blocks leaves gaps L_i (i = 1, 2, 3, ...) in the depth map. First scan the generated gaps column by column: when the height of a gap is less than 0.1H and its two ends belong to the same image block, that column of the gap is filled, the pixels inside the gap being linearly interpolated from the depth values at its two ends. Then scan the generated gaps row by row: when the width of a gap is less than 0.1W and its two ends belong to the same image block, that row of the gap is filled. This finally yields the filled depth map.
5. The foreground reduction interference extraction method in three-dimensional reconstruction according to claim 4, wherein the specific process of step four is as follows: compute the average depth value of the pixels remaining in the filled depth map, and delete the pixels whose depth value exceeds twice the average depth value, obtaining the de-interfered depth map.
6. The foreground reduction interference extraction method in three-dimensional reconstruction according to claim 5, wherein the specific process of step five is as follows: with most of the background information and the pixels with erroneous depth values deleted from the de-interfered depth map, process the de-interfered depth map with a depth map fusion algorithm to generate the three-dimensional point cloud of the foreground object.
CN202010122413.2A 2020-02-27 2020-02-27 Foreground reduction interference extraction method in three-dimensional reconstruction Pending CN111354075A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010122413.2A CN111354075A (en) 2020-02-27 2020-02-27 Foreground reduction interference extraction method in three-dimensional reconstruction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010122413.2A CN111354075A (en) 2020-02-27 2020-02-27 Foreground reduction interference extraction method in three-dimensional reconstruction

Publications (1)

Publication Number Publication Date
CN111354075A (en) 2020-06-30

Family

ID=71194100

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010122413.2A Pending CN111354075A (en) 2020-02-27 2020-02-27 Foreground reduction interference extraction method in three-dimensional reconstruction

Country Status (1)

Country Link
CN (1) CN111354075A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104268138A (en) * 2014-05-15 2015-01-07 西安工业大学 Method for capturing human motion by aid of fused depth images and three-dimensional models
CN104077808A (en) * 2014-07-20 2014-10-01 詹曙 Real-time three-dimensional face modeling method used for computer graph and image processing and based on depth information
US20200364849A1 (en) * 2018-01-03 2020-11-19 Southeast University Method and device for automatically drawing structural cracks and precisely measuring widths thereof
CN108681389A (en) * 2018-05-11 2018-10-19 亮风台(上海)信息科技有限公司 A kind of method and apparatus read by arrangement for reading
CN109447945A (en) * 2018-09-21 2019-03-08 河南农业大学 Wheat Basic Seedling rapid counting method based on machine vision and graphics process
CN110580709A (en) * 2019-07-29 2019-12-17 浙江工业大学 Target detection method based on ViBe and three-frame differential fusion
CN110689008A (en) * 2019-09-17 2020-01-14 大连理工大学 Monocular image-oriented three-dimensional object detection method based on three-dimensional reconstruction

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Li Lingling; Wang Zhengyong; Qing Linbo; He Haibo: "Fast volume measurement based on Kinect 2.0 depth images" (基于Kinect 2.0深度图像的快速体积测量)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111932537A (en) * 2020-10-09 2020-11-13 腾讯科技(深圳)有限公司 Object deformation detection method and device, computer equipment and storage medium
CN115761152A (en) * 2023-01-06 2023-03-07 深圳星坊科技有限公司 Image processing and three-dimensional reconstruction method and device under common light source and computer equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination