CN114972129B - Image restoration method based on depth information - Google Patents

Image restoration method based on depth information

Info

Publication number
CN114972129B
CN114972129B (Application No. CN202210913433.0A)
Authority
CN
China
Prior art keywords
repair
depth
repaired
block
damaged
Prior art date
Legal status
Active
Application number
CN202210913433.0A
Other languages
Chinese (zh)
Other versions
CN114972129A (en)
Inventor
朱树元
刘柯宇
Current Assignee
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN202210913433.0A priority Critical patent/CN114972129B/en
Publication of CN114972129A publication Critical patent/CN114972129A/en
Application granted granted Critical
Publication of CN114972129B publication Critical patent/CN114972129B/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/77: Retouching; Inpainting; Scratch removal
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/11: Region-based segmentation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/136: Segmentation; Edge detection involving thresholding
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10028: Range image; Depth image; 3D point clouds
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20172: Image enhancement details
    • G06T 2207/20192: Edge enhancement; Edge preservation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention belongs to the field of image restoration and provides an image restoration method based on depth information for high-quality repair of damaged images. The method distinguishes the type of repair by constructing a plane segmentation model based on depth information. In single-plane repair, the average depth information of the sample block and of the block to be repaired is added as a constraint term to the criterion for selecting the optimal matching block, which improves the accuracy of that selection. In multi-plane repair, the damaged area is divided into several depth planes according to the depth information; each depth plane is first roughly repaired layer by layer and then finely repaired as a single plane, which ensures the continuity of the boundaries and the repair quality of the depth planes. In summary, the method effectively reduces the mutual influence between the repaired planes, significantly improves the repair quality, and reduces the repair time to a certain extent, thereby improving efficiency.

Description

Image restoration method based on depth information
Technical Field
The invention belongs to the field of image restoration, and particularly provides an image restoration method based on depth information.
Background
Image restoration technology is mainly used to recover damaged areas in images. As it is applied to an increasing number of image processing tasks, such as image editing, restoration, coding, synthesis and super-resolution, research on image restoration has attracted growing attention, with the goal of ever better restoration quality.
Depending on their depth information, the elements or objects in a damaged image area may lie in different depth planes. End-to-end deep learning methods generally achieve good results, but their algorithmic complexity is high and they place heavy demands on the computing processor. Conventional repair methods based on hand-crafted feature matching treat the damaged area as a single plane, so the boundaries between depth planes are often blurred and discontinuous, which severely degrades the repair result.
Disclosure of Invention
In view of the problems described in the background, the invention aims to provide a depth-information-assisted image restoration method that can effectively improve the quality of image restoration.
To achieve this purpose, the technical scheme adopted by the invention is as follows:
an image restoration method based on depth information is characterized by comprising the following steps:
step 1, adopting a depth image of a damaged image to be repaired as prior information;
step 2, constructing a plane segmentation model;
step 2.1, finding out a damaged position in the damaged image;
step 2.2, determining the depth plane number N of the damaged area in the damaged image to be repaired according to the depth information;
step 3, constructing a region repairing model;
according to the depth plane number N of the damaged area in the damaged image to be repaired, when N =1, adopting single-plane repair to complete image repair; and when N is greater than 1, adopting multi-plane restoration to finish image restoration.
Further, in step 2.2, the specific process is as follows:
step 2.2.1, setting an inter-class distance threshold r and a proportional threshold sigma of the minimum class number in the total class number;
step 2.2.2, extracting depth information of the damaged area, assigning the depth information to be a depth matrix D, unfolding the depth matrix D into column vectors, and sequencing elements in the column vectors from small to large to obtain a depth column vector D;
step 2.2.3, storing the 1st element of the depth column vector d into the 1st class set c_1, and traversing the remaining elements of the depth column vector d in sequence:
for the i-th element (i = 2, 3, …, P, where P is the total number of elements in the depth column vector d), calculating the difference between its value and the mean of the elements in the current class set c_j; if the difference is smaller than the inter-class distance threshold r, assigning the element to the current class set c_j; otherwise, creating the (j+1)-th class set c_(j+1) and storing this element in c_(j+1);
step 2.2.4, counting the number of elements in each class set and judging:
for the j-th class set c_j, if the number of elements in the class set is smaller than σ × P, the class set is judged invalid; otherwise, it is judged to be a valid class set;
the number of valid class sets is counted as the number N of depth planes of the damaged area in the damaged image to be repaired.
Further, in step 3, the single-plane repair specifically includes:
step 3.1.1, dividing the damaged image to be repaired into an undamaged area Φ and a damaged area Ω, and extracting the damaged boundary δΩ by a binary morphology method;
step 3.1.2, setting a repair priority for each reference point on the damaged boundary δΩ:
P(p) = C(p) · D(p) + α · E(p)
wherein p is a reference point on the damaged boundary, P(p) is the repair priority of the position corresponding to the reference point p, and α is a preset repair factor;
C(p) is the confidence term of the block to be repaired:
C(p) = ( Σ_{q ∈ Ψ_p ∩ Φ} C(q) ) / |Ψ_p|
wherein Φ represents the undamaged area in the damaged image to be repaired, Ψ_p represents the block to be repaired centered on the reference point p, |Ψ_p| is the number of pixels in Ψ_p, and C(q) is the confidence of pixel q;
D(p) is the data term of the block to be repaired:
D(p) = | ∇I_p^⊥ · n_p | / β
wherein n_p represents the normal vector corresponding to the reference point p, ∇I_p^⊥ represents the isophote (isoline) vector corresponding to the reference point p, and β is a normalization factor;
E(p) is the edge term of the block to be repaired:
E(p) = | G_h(Ψ_p) | + | G_v(Ψ_p) |
wherein G_h(Ψ_p) is the convolution result of the Sobel horizontal operator on the block to be repaired, and G_v(Ψ_p) is the convolution result of the Sobel vertical operator on the block to be repaired;
step 3.1.3, setting a depth threshold De, and repairing the block to be repaired corresponding to the reference point with the highest repair priority:
traversing the sample blocks in the damaged image to be repaired that have the same size as the block to be repaired and contain no damaged point; for each sample block, calculating the average depth information of the sample block and the average depth information of the block to be repaired, and calculating their difference D_0:
D_0 = | d̄(Ψ_p) - d̄(Ψ_q) |
wherein d̄(Ψ_p) represents the average depth information of the block to be repaired, and d̄(Ψ_q) represents the average depth information of the sample block;
if the difference D_0 is larger than the depth threshold De, the current sample block is skipped; otherwise, the sum-of-squared-differences error d_SSD between the current sample block and the block to be repaired is calculated:
d_SSD(Ψ_p, Ψ_q) = Σ_{i=1}^{n} [ (R_pi - R_qi)² + (G_pi - G_qi)² + (B_pi - B_qi)² ]
wherein n is the number of undamaged pixel points in the block to be repaired, R_pi, G_pi, B_pi respectively represent the pixel values of the i-th undamaged pixel point of the block to be repaired on the R, G and B channels, and R_qi, G_qi, B_qi respectively represent the pixel values of the i-th undamaged pixel point of the current sample block on the R, G and B channels;
the sample block with the smallest d_SSD is selected as the optimal matching block and attached to the position of the block to be repaired;
step 3.1.4, updating the damaged boundary δΩ, recalculating the repair priority of each reference point on the damaged boundary, and repairing the block to be repaired corresponding to the reference point with the highest repair priority; wherein the confidence term C(p) is updated as:
C(q) = C(p̂) for every pixel q ∈ Ψ_p̂ ∩ Ω
wherein p̂ represents the reference point repaired in the previous stage, and Ψ_p̂ represents the block to be repaired in the previous stage;
step 3.1.5, repeating steps 3.1.3 to 3.1.4 until no damaged boundary exists in the damaged area, completing the repair and obtaining the repair result.
Further, in step 3, the multi-plane repair specifically includes:
step 3.2.1 multi-plane segmentation;
according to the number N of depth planes of the damaged area in the damaged image to be repaired, finding N-1 demarcation thresholds with the Otsu method and dividing the damaged area into N depth planes;
step 3.2.2 depth plane 'rough repair';
selecting any depth plane in the damaged area as a current depth plane, and performing 'rough repair' on the current depth plane, specifically:
calculating the boundary line between the current depth plane and the other depth planes by a binary morphology method; taking the two end points of the boundary line as reference points, calculating the mean pixel value in the eight-neighborhood of each reference point as a reference value, and then taking the mean of the two reference values as the repair pixel value to fill all positions of the boundary line, which completes one layer of 'rough repair'; repeating this process until the current depth plane has no boundary line with the other depth planes, completing the 'rough repair' of the current depth plane;
step 3.2.3 depth plane 'fine repair';
performing 'fine repair' on the current depth plane subjected to 'coarse repair' in the damaged area, specifically:
taking the repair area of the 1 st to 5 th layers of coarse repair in the depth plane after the coarse repair as a buffer area, and taking other areas as areas to be subjected to fine repair; taking the area to be subjected to the fine repair as a damaged area, and taking other depth planes as undamaged areas to carry out single-plane repair;
and 3.2.4, finishing the rough repair of the step 3.2.2 and the fine repair of the step 3.2.3 on each depth plane of the damaged area to obtain a repair result.
Compared with the prior art, the invention has the beneficial effects that:
the invention provides an image restoration method based on depth information, which realizes the division of image restoration types by constructing a plane segmentation model based on the depth information, is divided into two conditions of 'single-plane restoration' and 'multi-plane restoration', and is matched with respective restoration modes; in the 'single-plane restoration', the average depth information of the sample block and the block to be restored is added into the judgment criterion of the optimal matching block as a constraint item, so that the accuracy of selecting the optimal matching block is improved, and the search time is reduced; in the multi-plane restoration, a plurality of depth planes are formed by division according to depth information, firstly, the depth planes are roughly restored layer by layer, then, the depth planes after the rough restoration are finely restored, and the continuity of a boundary and the restoration quality of the depth planes are ensured; in conclusion, the depth information is introduced as the prior information, so that the influence among all the restoration planes can be effectively reduced, the quality of image restoration is remarkably improved, the restoration time can be reduced to a certain extent, and the efficiency is further improved.
Drawings
Fig. 1 is a schematic flow chart of an image restoration method based on depth information according to the present invention.
Fig. 2 is the original image of the "person parachuting" in the embodiment.
Fig. 3 is the damaged image of the "person parachuting" in the embodiment.
Fig. 4 is the repair result of comparative example 1 on the damaged image shown in Fig. 3 in the embodiment.
Fig. 5 is the repair result of comparative example 2 on the damaged image shown in Fig. 3 in the embodiment.
Fig. 6 is the repair result of comparative example 3 on the damaged image shown in Fig. 3 in the embodiment.
Fig. 7 is the repair result of the present invention on the damaged image shown in Fig. 3 in the embodiment.
Fig. 8 is the original image of the "cat" in the embodiment.
Fig. 9 is the damaged image of the "cat" in the embodiment.
Fig. 10 is the repair result of single-plane repair on the damaged image shown in Fig. 9 in the embodiment.
Fig. 11 is the repair result of "rough repair" on the damaged image shown in Fig. 9 in the embodiment.
Fig. 12 is the repair result of comparative example 3 on the damaged image shown in Fig. 9 in the embodiment.
Fig. 13 is the repair result of the present invention on the damaged image shown in Fig. 9 in the embodiment.
Fig. 14 is the original image of the "alarm clock" in the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more clearly understood, the present invention is further described in detail with reference to the accompanying drawings and embodiments.
The present embodiment provides an image inpainting method based on depth information, a flow of which is shown in fig. 1, and specifically includes the following steps:
step 1, adopting a depth image of a damaged image to be repaired as prior information;
step 2, constructing a plane segmentation model;
step 2.1, finding the damaged positions in the damaged image, specifically: converting the damaged image to be repaired into a grayscale image, setting a damaged-area judgment threshold θ, and finding the positions whose pixel values in the grayscale image are smaller than θ, i.e. the damaged area (ROI); in this embodiment θ = 4;
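For illustration only, a minimal Python sketch of this damaged-region detection step follows; the function name, the NumPy array layout and the RGB-to-gray conversion weights are assumptions of this sketch rather than part of the disclosed method:

```python
import numpy as np

def find_damaged_region(broken_rgb: np.ndarray, theta: float = 4.0) -> np.ndarray:
    """Return a boolean mask of the damaged area (ROI).

    The damaged image is converted to grayscale and every pixel whose gray
    value is below the threshold theta is treated as damaged (theta = 4 in
    this embodiment).
    """
    # ITU-R BT.601 luma weights; any standard RGB-to-gray conversion would do here.
    gray = (0.299 * broken_rgb[..., 0]
            + 0.587 * broken_rgb[..., 1]
            + 0.114 * broken_rgb[..., 2])
    return gray < theta
```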
Step 2.2, determining the depth plane number N of the damaged area in the damaged image to be repaired;
in this embodiment, the depth image of the original image corresponding to the damaged image to be repaired is used as prior information, that is, the depth information of the damaged area is correspondingly obtained; determining the depth plane number of the damaged area by means of the depth information, specifically:
step 2.2.1 sets an inter-class distance threshold r and a proportional threshold σ of the minimum number of classes to the total number of classes (in the present embodiment, r =10, σ = 0.2);
step 2.2.2, extracting depth information of a damaged area (ROI) and assigning the depth information to be a depth matrix D, then unfolding the depth matrix D into column vectors, and sequencing elements in the column vectors from small to large to obtain a depth column vector D;
step 2.2.3, storing the 1st element of the depth column vector d into the 1st class set c_1, and traversing the remaining elements of the depth column vector d in sequence:
for the i-th element (i = 2, 3, …, P, where P is the total number of elements in the depth column vector d), calculating the difference between its value and the mean of the elements in the current class set c_j (the mean of all elements currently in that class set); if the difference is smaller than the inter-class distance threshold r, assigning the element to the current class set c_j; otherwise, creating the (j+1)-th class set c_(j+1) and storing this element in c_(j+1) as its 1st element;
step 2.2.4, after the traversal is finished, counting the number of elements in each class set and judging:
for the j-th class set c_j, if the number of elements in the class set is smaller than σ × P, the class set is judged invalid; otherwise, it is judged to be a valid class set;
the number of valid class sets is counted as the number N of depth planes of the damaged area in the damaged image to be repaired;
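The depth-plane counting of steps 2.2.1-2.2.4 can be sketched as follows; the function name and the assumption that the depth map is a 2-D array aligned with the damaged image are illustrative choices, not part of the claims:

```python
import numpy as np

def count_depth_planes(depth_map: np.ndarray, roi_mask: np.ndarray,
                       r: float = 10.0, sigma: float = 0.2) -> int:
    """Count the depth planes N of the damaged area (steps 2.2.1-2.2.4)."""
    d = np.sort(depth_map[roi_mask].astype(np.float64).ravel())  # depth column vector d
    P = d.size
    classes = [[d[0]]]                       # the 1st element starts class set c_1
    for value in d[1:]:                      # traverse the remaining elements in order
        if abs(value - np.mean(classes[-1])) < r:
            classes[-1].append(value)        # within r of the current class mean
        else:
            classes.append([value])          # open a new class set
    # a class set is valid only if it holds at least sigma * P elements
    return sum(1 for c in classes if len(c) >= sigma * P)
```

Because d is sorted in ascending order, the "current class set" is always the most recently opened one, which matches the traversal described above.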
step 3, constructing a region repairing model;
according to the number N of depth planes of the damaged area in the damaged image to be repaired, two cases are distinguished: N = 1 or N > 1; when N = 1, the case is called single-plane repair, which is similar to deleting a foreground element against a background, so the restored content should be background; when N > 1, the case is called multi-plane repair, in which the damaged area contains both background and foreground elements and the recovered area must also contain both; specifically:
step 3.1, single-plane repair;
step 3.1.1, dividing the damaged image to be repaired into an undamaged area Φ and a damaged area Ω, and extracting the damaged boundary δΩ by a binary morphology method, specifically: generating a binary image B_s (where 1 represents the damaged area of the damaged image to be repaired and 0 represents the undamaged area), eroding the binary image B_s once with a 3 × 3 structuring element to obtain a binary image B_sn, and subtracting the binary image B_sn from the binary image B_s to obtain the damaged boundary δΩ;
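A minimal sketch of this boundary-extraction step, assuming the binary image B_s is held as a Boolean NumPy mask and using SciPy's binary erosion with a 3 × 3 structuring element:

```python
import numpy as np
from scipy.ndimage import binary_erosion

def extract_damaged_boundary(roi_mask: np.ndarray) -> np.ndarray:
    """Extract the damaged boundary δΩ (step 3.1.1).

    roi_mask is the binary image B_s (True/1 = damaged area Ω,
    False/0 = undamaged area Φ).  B_s is eroded once with a 3x3
    structuring element to get B_sn, and the boundary is B_s - B_sn.
    """
    structure = np.ones((3, 3), dtype=bool)                  # 3x3 structuring element
    eroded = binary_erosion(roi_mask, structure=structure)   # B_sn
    return roi_mask & ~eroded                                # δΩ = B_s - B_sn
```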
step 3.1.2, setting a repair priority for each reference point on the damaged boundary δΩ:
P(p) = C(p) · D(p) + α · E(p)
wherein p is a reference point on the damaged boundary, P(p) is the repair priority of the position corresponding to the reference point p, and α is a preset repair factor (set to 1.33e-4 in this embodiment);
C(p) is the confidence term of the block to be repaired, calculated as:
C(p) = ( Σ_{q ∈ Ψ_p ∩ Φ} C(q) ) / |Ψ_p|
wherein Φ represents the undamaged area in the damaged image to be repaired, Ψ_p represents the M × M (M = 9) block to be repaired centered on the reference point p, |Ψ_p| is the number of pixels in Ψ_p, and C(q) is the confidence of pixel q;
D(p) is the data term of the block to be repaired, calculated as:
D(p) = | ∇I_p^⊥ · n_p | / β
wherein n_p represents the normal vector corresponding to the reference point p, ∇I_p^⊥ represents the isophote (isoline) vector corresponding to the reference point p, and β is a normalization factor (set to 255 in this embodiment);
E(p) is the edge term of the block to be repaired; its value is the sum of the convolutions of the selected image block with the Sobel horizontal operator and the Sobel vertical operator respectively, calculated as:
E(p) = | G_h(Ψ_p) | + | G_v(Ψ_p) |
wherein G_h(Ψ_p) is the convolution result of the Sobel horizontal operator on the block to be repaired, and G_v(Ψ_p) is the convolution result of the Sobel vertical operator on the block to be repaired;
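The three priority terms can be sketched as follows. The combination rule P = C·D + α·E is one reading of the priority formula, which the original publication gives only as an embedded image; the helper names, the patch-slicing convention and the NumPy/SciPy usage are likewise assumptions of this sketch:

```python
import numpy as np
from scipy.ndimage import convolve

SOBEL_H = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
SOBEL_V = SOBEL_H.T

def patch_slice(p, M, shape):
    """Index slices of the M x M block centered at reference point p = (row, col)."""
    r, c = p
    h = M // 2
    return (slice(max(r - h, 0), min(r + h + 1, shape[0])),
            slice(max(c - h, 0), min(c + h + 1, shape[1])))

def priority(p, gray, confidence, roi_mask, normal, isophote,
             M=9, alpha=1.33e-4, beta=255.0):
    """Repair priority P(p) of reference point p (step 3.1.2).

    confidence: per-pixel confidence map C(q); roi_mask: True inside Ω.
    normal, isophote: 2-vectors n_p and the isophote direction, supplied by the caller.
    """
    sl = patch_slice(p, M, gray.shape)
    block_conf = confidence[sl]
    C = block_conf[~roi_mask[sl]].sum() / block_conf.size     # confidence term
    D = abs(float(np.dot(isophote, normal))) / beta           # data term
    gx = convolve(gray[sl], SOBEL_H)                          # Sobel horizontal response
    gy = convolve(gray[sl], SOBEL_V)                          # Sobel vertical response
    E = np.abs(gx).sum() + np.abs(gy).sum()                   # edge term
    return C * D + alpha * E
```

With α on the order of 1e-4, the edge term acts as a small additive bias toward structured regions rather than dominating the confidence and data terms.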
step 3.1.3, setting a depth threshold De (De = 8 in this embodiment), and repairing the block to be repaired corresponding to the reference point with the highest repair priority:
traversing the sample blocks in the damaged image to be repaired that have the same size as the block to be repaired and contain no damaged point; for each sample block, calculating the average depth information of the sample block and the average depth information of the block to be repaired, and calculating their difference D_0:
D_0 = | d̄(Ψ_p) - d̄(Ψ_q) |
wherein d̄(Ψ_p) represents the average depth information of the block to be repaired, and d̄(Ψ_q) represents the average depth information of the sample block;
if the difference D_0 is larger than the depth threshold De, the current sample block is skipped; otherwise, the sum-of-squared-differences error d_SSD between the current sample block and the block to be repaired is calculated:
d_SSD(Ψ_p, Ψ_q) = Σ_{i=1}^{n} [ (R_pi - R_qi)² + (G_pi - G_qi)² + (B_pi - B_qi)² ]
wherein n is the number of undamaged pixel points in the block to be repaired, R_pi, G_pi, B_pi respectively represent the pixel values of the i-th undamaged pixel point of the block to be repaired on the R, G and B channels, and R_qi, G_qi, B_qi respectively represent the pixel values of the i-th undamaged pixel point of the current sample block on the R, G and B channels;
the sample block with the smallest d_SSD is selected as the optimal matching block and attached to the position of the block to be repaired, i.e. the value 1 (damaged area) at the position of the corresponding block to be repaired in the binary image B_s is updated to the value 0 (undamaged area);
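A sketch of the depth-constrained search for the optimal matching block described in step 3.1.3; how the candidate sample blocks are enumerated, and that the average depth of the block to be repaired is taken over its undamaged pixels, are assumptions of this sketch:

```python
import numpy as np

def best_matching_block(image, depth, roi_mask, target_sl, candidate_slices, De=8.0):
    """Pick the best matching sample block for the block to be repaired (step 3.1.3).

    image: H x W x 3 RGB array; depth: H x W depth map; roi_mask: True = damaged.
    target_sl: slices of the block to be repaired; candidate_slices: slices of
    fully undamaged sample blocks of the same size (enumerated by the caller).
    """
    target = image[target_sl].astype(np.float64)
    target_depth = depth[target_sl]
    valid = ~roi_mask[target_sl]                 # undamaged pixels of the target block
    best, best_err = None, np.inf
    for sl in candidate_slices:
        d0 = abs(float(target_depth[valid].mean()) - float(depth[sl].mean()))
        if d0 > De:                              # depth constraint: skip distant planes
            continue
        diff = (target - image[sl].astype(np.float64))[valid]
        err = float((diff ** 2).sum())           # SSD over the undamaged RGB pixels
        if err < best_err:
            best, best_err = sl, err
    return best
```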
step 3.1.4, updating the damaged boundary δΩ, recalculating the repair priority of each reference point on the damaged boundary, and repairing the block to be repaired corresponding to the reference point with the highest repair priority; wherein the confidence term C(p) is updated as:
C(q) = C(p̂) for every pixel q ∈ Ψ_p̂ ∩ Ω
wherein p̂ represents the reference point repaired in the previous stage, and Ψ_p̂ represents the block to be repaired in the previous stage;
in addition, the data term D(p) and the edge term E(p) are calculated in the same way as in step 3.1.2;
step 3.1.5, repeating the steps 3.1.3 to 3.1.4 until no damaged boundary exists in the damaged area, and finishing repair;
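Tying steps 3.1.1-3.1.5 together, the following outline shows one possible control flow of the single-plane repair; it reuses the illustrative helpers sketched above, and estimate_vectors (normal and isophote at a boundary point) and enumerate_sample_blocks (all same-size undamaged candidate blocks) are hypothetical helpers that the caller must supply:

```python
import numpy as np

def single_plane_repair(image, depth, roi_mask, M=9, De=8.0):
    """Structural outline of single-plane repair (steps 3.1.1-3.1.5).

    Reuses extract_damaged_boundary, patch_slice, priority and
    best_matching_block from the sketches above; estimate_vectors and
    enumerate_sample_blocks are hypothetical helpers.
    """
    confidence = (~roi_mask).astype(np.float64)        # C(q): 1 in Φ, 0 in Ω
    while roi_mask.any():                              # step 3.1.5: loop until no hole remains
        gray = image.mean(axis=2)
        boundary = extract_damaged_boundary(roi_mask)              # steps 3.1.1 / 3.1.4
        points = [tuple(p) for p in np.argwhere(boundary)]
        p_hat = max(points,                                        # step 3.1.2: highest priority
                    key=lambda p: priority(p, gray, confidence, roi_mask,
                                           *estimate_vectors(gray, roi_mask, p), M=M))
        sl = patch_slice(p_hat, M, roi_mask.shape)
        hole = roi_mask[sl].copy()                                 # damaged pixels of the block
        c_hat = confidence[sl][~hole].sum() / hole.size            # confidence term of the block
        src = best_matching_block(image, depth, roi_mask, sl,
                                  enumerate_sample_blocks(roi_mask, M), De=De)  # step 3.1.3
        image[sl][hole] = image[src][hole]                         # paste only the damaged pixels
        confidence[sl][hole] = c_hat                               # update C(q) = C(p_hat)
        roi_mask[sl] = False                                       # the block is now undamaged
    return image
```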
step 3.2, multi-plane repair;
step 3.2.1 multi-plane segmentation;
according to the number N of depth planes of the damaged area in the damaged image to be repaired, finding N-1 demarcation thresholds with the Otsu method and dividing the damaged area into N depth planes;
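A sketch of this multi-plane segmentation step; using scikit-image's threshold_multiotsu as the multi-class Otsu thresholding is an implementation assumption:

```python
import numpy as np
from skimage.filters import threshold_multiotsu

def split_depth_planes(depth_map: np.ndarray, roi_mask: np.ndarray, n_planes: int) -> np.ndarray:
    """Split the damaged area into N depth planes (step 3.2.1).

    Finds N-1 thresholds on the depth values of the damaged area and labels
    each damaged pixel with a plane index 0 .. N-1 (-1 outside the damaged area).
    """
    labels = np.full(depth_map.shape, -1, dtype=np.int32)
    roi_depth = depth_map[roi_mask]
    thresholds = threshold_multiotsu(roi_depth, classes=n_planes)  # N-1 demarcation thresholds
    labels[roi_mask] = np.digitize(roi_depth, bins=thresholds)
    return labels
```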
step 3.2.2 depth plane 'rough repair';
selecting any depth plane in the damaged area as a current depth plane, and performing 'rough repair' on the current depth plane, specifically:
calculating the boundary line between the current depth plane and the other depth planes by a binary morphology method (the same as step 3.1.1); taking the two end points of the boundary line as reference points, calculating the mean pixel value in the eight-neighborhood of each reference point as a reference value, and taking the mean of the two reference values as the repair pixel value to fill all positions of the boundary line, i.e. updating the value 1 at the corresponding boundary positions in the binary image to the value 0, which completes one layer of 'rough repair'; repeating this process until the current depth plane has no boundary line with the other depth planes, completing the 'rough repair' of the current depth plane; the above operations are carried out on each of the R, G and B channels;
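A sketch of the layer-by-layer 'rough repair' of one depth plane for one color channel; treating the first and last border pixel in scan order as the two end points of the boundary line, and averaging only already-known neighbors, are simplifying assumptions of this sketch:

```python
import numpy as np
from scipy.ndimage import binary_dilation

EIGHT = np.ones((3, 3), dtype=bool)   # 8-neighborhood structuring element

def neighborhood_mean(channel, known_mask, point):
    """Mean of the known pixel values in the 8-neighborhood of `point`."""
    r, c = point
    sl = (slice(max(r - 1, 0), r + 2), slice(max(c - 1, 0), c + 2))
    values = channel[sl][known_mask[sl]]
    return float(values.mean()) if values.size else 0.0

def rough_repair_plane(channel, plane_unfilled, other_known, layer_map):
    """'Rough repair' of one depth plane of one color channel (step 3.2.2).

    plane_unfilled: still-missing pixels of the current depth plane (updated in place)
    other_known:    pixels outside this unfilled set whose values are available
                    (other planes, the undamaged area, already-filled layers)
    layer_map:      zero-initialised map recording the layer index at which each
                    pixel is filled, used later to carve out the buffer area.
    """
    layer = 0
    while True:
        border = plane_unfilled & binary_dilation(other_known, EIGHT)
        if not border.any():
            break                                 # no boundary line with other planes remains
        layer += 1
        pts = np.argwhere(border)
        ref1 = neighborhood_mean(channel, other_known, tuple(pts[0]))
        ref2 = neighborhood_mean(channel, other_known, tuple(pts[-1]))
        channel[border] = 0.5 * (ref1 + ref2)     # fill every position of this border layer
        layer_map[border] = layer
        other_known |= border                     # filled pixels become known
        plane_unfilled &= ~border
    return channel
```

The recorded layer indices are what allow the fine repair of step 3.2.3 to separate the 1st to 5th coarse-repair layers from the rest of the plane.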
step 3.2.3 depth plane 'fine repair';
performing 'fine repair' on the current depth plane subjected to 'coarse repair' in the damaged area, specifically:
taking the repair area of the 1 st to 5 th layers of rough repair in the depth plane after the rough repair as a buffer area, and taking other areas as areas to be subjected to fine repair; taking the area to be subjected to the fine repair as a damaged area, and taking other depth planes as undamaged areas to carry out single-plane repair;
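A sketch of how the buffer area and the area to be finely repaired can be separated once the coarse-repair layer of each filled pixel has been recorded (the layer_map produced by the rough-repair sketch above); the function name and the mask-based interface are assumptions:

```python
import numpy as np

def fine_repair_masks(layer_map: np.ndarray, plane_mask: np.ndarray, buffer_layers: int = 5):
    """Split a coarsely repaired depth plane for 'fine repair' (step 3.2.3).

    Pixels filled during coarse-repair layers 1..5 form the buffer area and are
    kept; the remaining coarsely repaired pixels of this plane are re-marked as
    damaged and handed to the single-plane repair of step 3.1.
    """
    buffer_area = plane_mask & (layer_map >= 1) & (layer_map <= buffer_layers)
    to_refine = plane_mask & (layer_map > buffer_layers)
    return buffer_area, to_refine
```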
and 3.2.4, finishing the rough repair of the step 3.2.2 and the fine repair of the step 3.2.3 on each depth plane of the damaged area to obtain a repair result.
It should be noted that: threshold for judging damaged areaθThe inter-class distance threshold r, the proportion threshold sigma of the minimum class number to the total class number and the depth threshold De are all empirical thresholds,θthe value range of (a) is 4 to 6, the value range of r is 10 to 12, the value range of sigma is 0.15 to 0.2, and the value range of De is 8 to 10; the division of the area to be "refined" can effectively prevent the discontinuity of the boundary.
To illustrate the effectiveness of the invention, this embodiment uses the "person parachuting" image shown in Fig. 2 for single-plane repair and the "cat" image shown in Fig. 8 for multi-plane repair, and takes Criminisi's method (Region Filling and Object Removal by Exemplar-Based Image Inpainting) as comparative example 1, a modified Criminisi algorithm based on the Sobel operator as comparative example 2, and Photoshop as comparative example 3.
It should be noted that in single-plane repair only one depth plane exists in the damaged area, which is similar to removing a foreground element from a background in real life, for example removing (masking out) a person standing in front of a white wall, whose position then becomes the damaged area. On this basis, Fig. 2 shows the original "person parachuting" image, Fig. 3 the corresponding damaged image, Fig. 4 the repair result of comparative example 1, Fig. 5 the repair result of comparative example 2, Fig. 6 the repair result of comparative example 3, and Fig. 7 the repair result of the present invention; the single-plane repair result of the invention is visually superior to those of the comparative examples.
In multi-plane repair, the damaged area contains both foreground and background, so the goal of the repair is to complete the content and make the result as close to the original image as possible. On this basis, Fig. 8 shows the original "cat" image, Fig. 9 the corresponding damaged image, Fig. 10 the result of using single-plane repair only, Fig. 11 the result of using "rough repair" only, Fig. 12 the repair result of comparative example 3, and Fig. 13 the repair result of the present invention; the multi-plane repair result of the invention is visually significantly better than those of the comparative examples.
In addition, this embodiment also uses the "alarm clock" image shown in Fig. 14 as a test image, and the tested PSNR values are shown in the following table:
Figure 678703DEST_PATH_IMAGE020
as can be seen from the table, the image restoration method based on the depth information has the optimal performance, and the test result shows that the introduction of the depth information can enhance the restoration effect and efficiency of the image, thereby proving the effectiveness and superiority of the method.
In summary, the image restoration method based on depth information provided by the invention performs excellently: compared with other related restoration methods, both its single-plane and multi-plane restoration score highest in subjective evaluation, its overall restoration efficiency is the highest in objective evaluation, and the computed PSNR value is improved by more than 1 dB.
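For reference, the PSNR reported above is computed in the usual way; the peak value of 255 and averaging the mean squared error over all RGB pixels are assumptions of this sketch:

```python
import numpy as np

def psnr(reference: np.ndarray, repaired: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between the original and repaired images."""
    mse = np.mean((reference.astype(np.float64) - repaired.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```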
While the invention has been described with reference to specific embodiments, any feature disclosed in this specification may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise; all of the disclosed features, or all of the method or process steps, may be combined in any combination, except mutually exclusive features and/or steps.

Claims (1)

1. An image restoration method based on depth information is characterized by comprising the following steps:
step 1, adopting a depth image of a damaged image to be repaired as prior information;
step 2, constructing a plane segmentation model;
step 2.1, finding out a damaged position in the damaged image;
step 2.2, determining the depth plane number N of the damaged area in the damaged image to be repaired according to the depth information; the specific process is as follows:
step 2.2.1, setting an inter-class distance threshold r and a proportional threshold sigma of the minimum class number in the total class number;
step 2.2.2, extracting depth information of the damaged area, assigning the depth information to be a depth matrix D, unfolding the depth matrix D into column vectors, and sequencing elements in the column vectors from small to large to obtain a depth column vector D;
step 2.2.3, storing the 1st element of the depth column vector d into the 1st class set c_1, and traversing the remaining elements of the depth column vector d in sequence:
for the i-th element (i = 2, 3, …, P, where P is the total number of elements in the depth column vector d), calculating the difference between its value and the mean of the elements in the current class set c_j; if the difference is smaller than the inter-class distance threshold r, assigning the element to the current class set c_j; otherwise, creating the (j+1)-th class set c_(j+1) and storing this element in c_(j+1);
step 2.2.4, counting the number of elements in each class set and judging:
for the j-th class set c_j, if the number of elements in the class set is smaller than σ × P, the class set is judged invalid; otherwise, it is judged to be a valid class set;
the number of valid class sets is counted as the number N of depth planes of the damaged area in the damaged image to be repaired;
step 3, constructing a region repairing model;
according to the depth plane number N of the damaged area in the damaged image to be repaired, when N =1, adopting single-plane repair to complete image repair; when N is greater than 1, adopting multi-plane restoration to finish image restoration;
the single-plane repair specifically comprises the following steps:
step 3.1.1, dividing the damaged image to be repaired into an undamaged area Φ and a damaged area Ω, and extracting the damaged boundary δΩ by a binary morphology method;
step 3.1.2, setting a repair priority for each reference point on the damaged boundary δΩ:
P(p) = C(p) · D(p) + α · E(p)
wherein p is a reference point on the damaged boundary, P(p) is the repair priority of the position corresponding to the reference point p, and α is a preset repair factor;
C(p) is the confidence term of the block to be repaired:
C(p) = ( Σ_{q ∈ Ψ_p ∩ Φ} C(q) ) / |Ψ_p|
wherein Φ represents the undamaged area in the damaged image to be repaired, Ψ_p represents the block to be repaired centered on the reference point p, |Ψ_p| is the number of pixels in Ψ_p, and C(q) is the confidence of pixel q;
D(p) is the data term of the block to be repaired:
D(p) = | ∇I_p^⊥ · n_p | / β
wherein n_p represents the normal vector corresponding to the reference point p, ∇I_p^⊥ represents the isophote (isoline) vector corresponding to the reference point p, and β is a normalization factor;
E(p) is the edge term of the block to be repaired:
E(p) = | G_h(Ψ_p) | + | G_v(Ψ_p) |
wherein G_h(Ψ_p) is the convolution result of the Sobel horizontal operator on the block to be repaired, and G_v(Ψ_p) is the convolution result of the Sobel vertical operator on the block to be repaired;
step 3.1.3, setting a depth threshold De, and repairing the block to be repaired corresponding to the reference point with the highest repair priority:
traversing the sample blocks in the damaged image to be repaired that have the same size as the block to be repaired and contain no damaged point; for each sample block, calculating the average depth information of the sample block and the average depth information of the block to be repaired, and calculating their difference D_0:
D_0 = | d̄(Ψ_p) - d̄(Ψ_q) |
wherein d̄(Ψ_p) represents the average depth information of the block to be repaired, and d̄(Ψ_q) represents the average depth information of the sample block;
if the difference D_0 is larger than the depth threshold De, the current sample block is skipped; otherwise, the sum-of-squared-differences error d_SSD between the current sample block and the block to be repaired is calculated:
d_SSD(Ψ_p, Ψ_q) = Σ_{i=1}^{n} [ (R_pi - R_qi)² + (G_pi - G_qi)² + (B_pi - B_qi)² ]
wherein n is the number of undamaged pixel points in the block to be repaired, R_pi, G_pi, B_pi respectively represent the pixel values of the i-th undamaged pixel point of the block to be repaired on the R, G and B channels, and R_qi, G_qi, B_qi respectively represent the pixel values of the i-th undamaged pixel point of the current sample block on the R, G and B channels;
the sample block with the smallest d_SSD is selected as the optimal matching block and attached to the position of the block to be repaired;
step 3.1.4, updating the damaged boundary δΩ, recalculating the repair priority of each reference point on the damaged boundary, and repairing the block to be repaired corresponding to the reference point with the highest repair priority; wherein the confidence term C(p) is updated as:
C(q) = C(p̂) for every pixel q ∈ Ψ_p̂ ∩ Ω
wherein p̂ represents the reference point repaired in the previous stage, and Ψ_p̂ represents the block to be repaired in the previous stage;
step 3.1.5, repeating steps 3.1.3 to 3.1.4 until no damaged boundary exists in the damaged area, completing the repair and obtaining a repair result;
the multi-plane repair specifically comprises the following steps:
step 3.2.1 multi-plane segmentation;
according to the number N of depth planes of the damaged area in the damaged image to be repaired, finding N-1 demarcation thresholds with the Otsu method and dividing the damaged area into N depth planes;
step 3.2.2 depth plane "rough repair";
selecting any depth plane in the damaged area as a current depth plane, and performing 'rough repair' on the current depth plane, specifically:
calculating the boundary line between the current depth plane and the other depth planes by a binary morphology method; taking the two end points of the boundary line as reference points, calculating the mean pixel value in the eight-neighborhood of each reference point as a reference value, and then taking the mean of the two reference values as the repair pixel value to fill all positions of the boundary line, which completes one layer of "rough repair"; repeating this process until the current depth plane has no boundary line with the other depth planes, completing the "rough repair" of the current depth plane;
step 3.2.3 depth plane "fine repair";
performing 'fine repair' on the current depth plane subjected to 'coarse repair' in the damaged area, specifically:
taking the repair area of the 1 st to 5 th layers of rough repair in the depth plane after the rough repair as a buffer area, and taking other areas as areas to be subjected to fine repair; taking the area to be subjected to the fine repair as a damaged area, and taking other depth planes as undamaged areas to carry out single-plane repair;
and 3.2.4, finishing the rough repair of the step 3.2.2 and the fine repair of the step 3.2.3 on each depth plane of the damaged area to obtain a repair result.
CN202210913433.0A 2022-08-01 2022-08-01 Image restoration method based on depth information Active CN114972129B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210913433.0A CN114972129B (en) 2022-08-01 2022-08-01 Image restoration method based on depth information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210913433.0A CN114972129B (en) 2022-08-01 2022-08-01 Image restoration method based on depth information

Publications (2)

Publication Number Publication Date
CN114972129A CN114972129A (en) 2022-08-30
CN114972129B true CN114972129B (en) 2022-11-08

Family

ID=82969000

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210913433.0A Active CN114972129B (en) 2022-08-01 2022-08-01 Image restoration method based on depth information

Country Status (1)

Country Link
CN (1) CN114972129B (en)

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104616286B (en) * 2014-12-17 2017-10-31 浙江大学 Quick semi-automatic multi views depth restorative procedure
US11660196B2 (en) * 2017-04-21 2023-05-30 Warsaw Orthopedic, Inc. 3-D printing of bone grafts
CN107578389B (en) * 2017-09-13 2021-01-08 中山大学 Plane-supervised image color depth information collaborative restoration system
EP3709651A1 (en) * 2019-03-14 2020-09-16 InterDigital VC Holdings, Inc. A method and apparatus for encoding an rendering a 3d scene with inpainting patches
CN111523411B (en) * 2020-04-10 2023-02-28 陕西师范大学 Synthetic aperture imaging method based on semantic patching
CN112543317B (en) * 2020-12-03 2022-07-12 东南大学 Method for converting high-resolution monocular 2D video into binocular 3D video
CN113870128A (en) * 2021-09-08 2021-12-31 武汉大学 Digital mural image restoration method based on deep convolution impedance network
CN114359098A (en) * 2021-12-31 2022-04-15 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) Method for quickly repairing damaged monitoring information of notebook production line and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109685732A (en) * 2018-12-18 2019-04-26 重庆邮电大学 A kind of depth image high-precision restorative procedure captured based on boundary
CN109785255A (en) * 2018-12-30 2019-05-21 芜湖哈特机器人产业技术研究院有限公司 A kind of picture of large image scale restorative procedure
CN110147816A (en) * 2019-04-10 2019-08-20 中国科学院深圳先进技术研究院 A kind of acquisition methods of color depth image, equipment, computer storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Depth image restoration method based on superpixel segmentation and image registration; Yang Fei et al.; 《机械设计与制造工程》 (Machine Design and Manufacturing Engineering); 2020-01-15 (No. 01); 25-29 *

Also Published As

Publication number Publication date
CN114972129A (en) 2022-08-30


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant