CN117911287A - Interactive splicing and repairing method for large-amplitude wall painting images - Google Patents


Info

Publication number
CN117911287A
Authority
CN
China
Prior art keywords
wall painting
image
painting image
image blocks
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410318399.1A
Other languages
Chinese (zh)
Other versions
CN117911287B
Inventor
邱实
史晨亮
张朋昌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
XiAn Institute of Optics and Precision Mechanics of CAS
Original Assignee
XiAn Institute of Optics and Precision Mechanics of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by XiAn Institute of Optics and Precision Mechanics of CAS filed Critical XiAn Institute of Optics and Precision Mechanics of CAS
Priority to CN202410318399.1A priority Critical patent/CN117911287B/en
Publication of CN117911287A publication Critical patent/CN117911287A/en
Application granted granted Critical
Publication of CN117911287B publication Critical patent/CN117911287B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/337 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449 Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451 Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454 Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/32 Indexing scheme for image data processing or generation, in general involving image mosaicing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Image Processing (AREA)

Abstract

The invention provides an interactive splicing and repairing method for large-scale wall painting images, which solves two technical problems of existing wall painting image splicing: accurate corresponding points are difficult to find, and highlight areas appear on the photographed wall painting images. The splicing and repairing method of the invention comprises the following steps: inputting the data of the overlapping areas of two wall painting image blocks into a pre-trained convolutional neural network, which outputs an embedded vector for each pixel point of the two overlapping areas; calculating the Euclidean distances between the embedded vectors of all pixel points in the overlapping area of one wall painting image block and the embedded vectors of all pixel points in the overlapping area of the other, thereby obtaining multiple groups of highly similar feature points on the two image blocks; and determining the relative positions of the two image blocks during splicing according to the captured groups of similar feature points, so that image registration and image fusion are completed quickly and accurately, realizing the splicing and repairing of large-scale wall painting images.

Description

Interactive splicing and repairing method for large-amplitude wall painting images
Technical Field
The invention belongs to the technical field of image stitching, and particularly relates to an interactive stitching restoration method for a large-scale wall painting image.
Background
Image stitching is a method of combining multiple overlapping images of the same scene into one larger image, and is widely applied in fields such as cultural heritage protection and computer vision.
The image stitching technique mainly comprises the following steps:
1. Feature extraction: feature points, such as corner points, edges, textures, etc., are detected in all input images for subsequent image registration.
2. Image registration: the geometric correspondence between the images is established so that they can be transformed, compared and analyzed in a common frame of reference. Common image registration methods include correlation-based, frequency-domain-based, feature-based, and the like.
3. Image transformation: based on the results of the image registration, the input images are appropriately transformed, such as rotated, translated, scaled, etc., so that they can be aligned on the same plane.
4. Image fusion: the transformed images are seamlessly stitched, discontinuities and distortion in the overlapping areas are eliminated, and a high-quality panoramic image is generated. Common image fusion methods include gradient-based methods, Poisson-equation-based methods, multi-resolution-analysis-based methods and the like.
5. Image post-processing: some optimization and correction are performed on the stitched image, such as removing black edges, filling holes and enhancing contrast, so as to improve the visual effect of the image.
Mural splicing technology is a method of saving a large mural into electronic equipment so that further scientific and artistic research can be carried out on it in a specific scene. As shown in fig. 1, since a mural picture is large, it generally has to be photographed in sections in order to guarantee picture quality, so that the mural is divided into fragmented images. In order to restore the complete wall painting, these images need to be stitched. However, existing wall painting splicing mainly has the following difficulties:
1. the irregularities of the wall painting curved surface and the edges make the geometric transformation between images complex, and it is difficult to find accurate corresponding points.
2. As shown in fig. 2, due to uneven lighting during shooting, irregularity of the wall to which the painting is attached, or uneven color of the wall, the brightness varies greatly among different parts of the image and a number of highlight areas appear, which hinders appreciation and research of the wall painting.
Disclosure of Invention
The invention aims to solve the technical problems that the accurate corresponding point is difficult to find in the existing wall painting splicing, and a highlight area is caused by uneven lighting, irregular wall attached to the wall painting or uneven wall painting color in the shooting process, and provides an interactive splicing and repairing method for a large-scale wall painting image.
In order to achieve the above object, the technical solution of the present invention is as follows:
the interactive splicing and repairing method for the large-amplitude wall painting image is characterized by comprising the following steps of:
s1, determining the overlapping area of two adjacent mural image blocks;
Selecting any two adjacent wall painting image blocks from a plurality of photographed wall painting image blocks, splicing, and selecting the overlapping areas of the two wall painting image blocks;
S2, capturing similar feature points;
S2.1, inputting data of overlapping areas of two wall painting image blocks into a pre-trained convolutional neural network, and outputting an embedded vector to each pixel point of the overlapping areas of the two wall painting image blocks by the pre-trained convolutional neural network; the pretrained convolutional neural network comprises a convolutional layer conv180, a convolutional layer conv90, a convolutional layer conv64, a full-connection layer fc2048, a full-connection layer fc1024, a full-connection layer fc512 and a classification layer which are sequentially connected according to input and output;
s2.2, calculating Euclidean distances between embedded vectors of all pixel points in a wall painting image block overlapping area and embedded vectors of all pixel points in another wall painting image block overlapping area, and storing the calculated Euclidean distances to obtain a storage list;
s2.3, taking pixel points in the overlapping areas of the two wall painting image blocks corresponding to the minimum Euclidean distance in the current storage list as a group of similar characteristic points; deleting all Euclidean distances corresponding to any similar feature point in the group of similar feature points from the storage list to obtain a new storage list;
S2.4, taking the new storage list as the current storage list, and returning to the step S2.3 to obtain a new group of similar feature points until the number of the selected groups of similar feature points reaches a set threshold;
S3, registering images;
determining the relative positions of two fresco image blocks when the two fresco image blocks are spliced according to the multiple groups of similar characteristic points obtained in the step S2.4 so as to finish image registration;
s4, image fusion;
taking similar characteristic points of two wall painting image blocks as reference points during image registration, and performing image fusion to finish the splicing of the two wall painting image blocks;
S5, judging whether all the mural image blocks are spliced, if yes, executing a step S7, and if not, executing a step S6;
S6, taking the spliced image in the step S4 as one of the wall painting image blocks, selecting one wall painting image block adjacent to the spliced image block, selecting the overlapping area of the two wall painting image blocks, and returning to the step S2 until all the wall painting image blocks are spliced;
s7, performing highlight restoration on the spliced wall painting image to finish interactive splicing restoration of the large-scale wall painting.
Further, in step S2.2, the Euclidean distance between the embedded vectors of all the pixel points in the overlapping region of one wall painting image block and the embedded vectors of all the pixel points in the overlapping region of the other wall painting image block is calculated by

d(V_i, U_j) = ||V_i - U_j||_2 = sqrt( sum_k (V_{i,k} - U_{j,k})^2 )

In the above formula, V_i is the embedded vector corresponding to the i-th pixel point in the overlapping area of one wall painting image block, and U_j is the embedded vector corresponding to the j-th pixel point in the overlapping area of the other wall painting image block; i and j are integers greater than or equal to 1.
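As an illustrative, non-authoritative sketch of this step, the pairwise Euclidean distances between the two sets of embedded vectors can be computed at once with NumPy broadcasting; the array sizes below are hypothetical, and the 512-dimensional embedding width is taken from the fc512 layer described in step S2.1:

```python
import numpy as np

# Hypothetical embedded vectors: m pixels in the overlap of one block,
# n pixels in the overlap of the other, each a 512-dimensional embedding.
rng = np.random.default_rng(0)
V = rng.standard_normal((4, 512))   # one wall painting image block (m = 4)
U = rng.standard_normal((3, 512))   # the other block (n = 3)

# d[i, j] = ||V[i] - U[j]||_2 for every (i, j) pair at once.
d = np.linalg.norm(V[:, None, :] - U[None, :, :], axis=-1)

assert d.shape == (4, 3)
# Spot-check one entry against the scalar definition of the distance.
assert np.isclose(d[1, 2], np.sqrt(((V[1] - U[2]) ** 2).sum()))
```

For full-resolution overlap regions the full m x n distance matrix grows quickly, so a chunked or approximate nearest-neighbour computation would likely be needed in practice.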
Further, the step S3 specifically includes:
And (2) scaling one of the wall painting image blocks by taking the other wall painting image block as a reference, and determining the relative position of the two wall painting image blocks when the two wall painting image blocks are spliced according to the plurality of groups of similar characteristic points obtained in the step (S2.4) so as to finish image registration.
Further, in step S2.4, the threshold is set to 10% of the number of all pixels in the overlapping area of the wall painting image block.
Further, step S7 specifically includes:
S7.1, selecting all highlight areas in the stitched wall painting image, and calculating the high-brightness pixels in all highlight areas; the high-brightness pixels are the 15%-30% of pixels with the highest brightness in the highlight area;
S7.2, selecting a part of background images which can represent the whole background image from the wall painting image, solving the pixel mean value of the selected background image, and taking the pixel mean value as a background vector;
S7.3, fusing the obtained background vector into the high-brightness pixels calculated in step S7.1 to eliminate the highlight; the specific expression is as follows:

X' = αX + (1 - α)B

wherein X' is the fused pixel, X is the original pixel, B is the background vector, and α is the fusion coefficient.
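A minimal numeric sketch of this fusion rule, X' = αX + (1 - α)B, using hypothetical RGB pixel values and the preferred coefficient of 0.5:

```python
import numpy as np

# Hypothetical values: a glare pixel X and the background vector B
# (B is the mean pixel of a manually chosen background patch, step S7.2).
X = np.array([250.0, 240.0, 235.0])   # original high-brightness pixel
B = np.array([120.0, 100.0, 80.0])    # background vector
alpha = 0.5                           # fusion coefficient in [0, 1]

# Blend the background into the highlight pixel to suppress the glare.
X_fused = alpha * X + (1.0 - alpha) * B

assert np.allclose(X_fused, [185.0, 170.0, 157.5])
```

Larger α keeps more of the original pixel; smaller α pulls the pixel further toward the background tone.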
Further, after step S7.1, step a is further included before step S7.2:
Marking out the calculated high-brightness pixels, and manually judging whether a highlight part to be removed is covered or not; if yes, executing the step S7.2, otherwise, returning to the step S7.1 to reselect all highlight areas in the spliced mural image;
In step S7.1, the high-brightness pixels are the 20% of pixels with the highest brightness in the highlight area.
Further, in step S1, when the overlapping areas in the two mural image blocks are selected, the overlapping area in each of the selected mural image blocks is to cover the actual overlapping area.
Further, step S2.4 further includes: and marking the obtained multiple groups of similar characteristic points, manually confirming all the marked similar characteristic points, and screening out accurate similar characteristic points.
Further, step S7.3 further includes: adjusting the fusion coefficient according to the display effect after the highlight eliminationThe display effect after the highlight is eliminated meets the requirements; the fusion coefficient/>The value range of (2) is 0-1.
Further, the fusion coefficient0.5.
Compared with the prior art, the invention has the following beneficial effects:
1. According to the interactive splicing and repairing method for large-scale wall painting images provided by the invention, the wall painting image data are processed by a pre-trained convolutional neural network, which outputs an embedded vector for each pixel point of the overlapping areas of two wall painting image blocks; the Euclidean distances between the embedded vectors of all pixel points in the overlapping area of one image block and those of the other are then calculated, and by screening all the Euclidean distances, multiple groups of the most similar feature points on the two image blocks are obtained; the relative positions of the two image blocks during splicing are determined from the captured groups of similar feature points, so that image registration and image fusion can be completed quickly and accurately, finally realizing the splicing and repairing of large-scale wall paintings.
2. According to the interactive splicing repair method for the large-amplitude wall painting images, when similar characteristic points on two wall painting image blocks are captured through screening Euclidean distances, the method of deleting the corresponding Euclidean distances from the storage list is adopted, and therefore the speed of capturing the similar characteristic points is greatly improved.
3. According to the interactive splicing restoration method for the large-amplitude wall painting image, the background vector is obtained, the obtained background vector is fused into the selected pixels to eliminate highlight, the wall painting image acquired in a blocking mode can be quickly and accurately restored, and the original appearance of the wall painting is restored to the maximum extent for scientific research.
Drawings
FIG. 1 is a schematic view of a plurality of wall painting tiles of a captured large wall painting;
FIG. 2 is a schematic illustration of a highlight region in a captured wall painting image block;
FIG. 3 is a schematic flow chart of an embodiment of an interactive stitching and repairing method for a large-scale mural image according to the present invention;
FIG. 4 is a schematic diagram showing a method for interactive mosaic repair of large-scale mural images according to an embodiment of the present invention in which two overlapping areas of mural image blocks are selected in step S1;
FIG. 5 is a schematic diagram showing capturing similar feature points through a pretrained convolutional neural network in step S2 of an embodiment of an interactive mosaic repair method for large-scale mural images according to the present invention;
FIG. 6 is a schematic diagram of manually screening similar feature points in step S2.4 in an embodiment of an interactive mosaic repair method for large-scale mural images according to the present invention;
FIG. 7 is a schematic diagram showing image fusion in step S4 of an embodiment of an interactive mosaic repair method for large-scale mural images according to the present invention;
FIG. 8 is a schematic diagram showing a highlight region selected in step S7.1 of an interactive mosaic repair method for large-scale mural images according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of marking high brightness pixels in step S7.1 of an interactive stitching repair method for large-scale mural images according to an embodiment of the present invention;
FIG. 10 is a graph showing the contrast of effects before and after highlight restoration at step S7 according to an embodiment of an interactive mosaic restoration method for large-scale mural images of the present invention;
fig. 11 shows a plurality of wall painting image blocks in a large-scale wall painting image captured in the first embodiment of the present invention, wherein (a) is six wall painting image blocks in a first portion (upper left side), (b) is five wall painting image blocks in a second portion (upper right side), and (c) is three wall painting image blocks in a third portion (lower side);
FIG. 12 is a block diagram of a large-scale wall painting image after stitching according to an embodiment of the present invention;
fig. 13 shows a plurality of wall painting image blocks in a large-scale wall painting image captured in the second embodiment of the present invention, wherein (a) is seven wall painting image blocks in a first portion (left side), and (b) is seven wall painting image blocks in a second portion (right side);
Fig. 14 is a large-scale wall painting image after stitching according to the second embodiment of the present invention.
Detailed Description
To further clarify the advantages and features of the present invention, a more particular description of the invention will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings.
Example 1
An interactive splicing and repairing method for a large-scale mural image is shown in fig. 3, and specifically comprises the following steps:
S1, inputting two adjacent mural image blocks, and selecting the overlapping area.
And selecting any two adjacent wall painting image blocks from a plurality of wall painting image blocks shot on site, and inputting the wall painting image blocks into a splicing program for splicing.
Meanwhile, the splicing program displays the input mural image blocks for man-machine interaction. The overlapping area of the two input mural image blocks is selected through the man-machine interaction interface; in this embodiment, the overlapping area is selected by clicking the mouse on the upper-left pixel and then the lower-right pixel. As shown in fig. 4, the overlapping areas manually selected in the two mural image blocks in this embodiment are the shaded area on the right of the left mural image block and the shaded area on the left of the right mural image block. When the overlapping areas in the two wall painting image blocks are selected, the selected overlapping area in each image block should cover the actual overlapping area, i.e. its area needs to be larger than that of the actual overlapping area; this makes the operation convenient and does not affect the splicing effect of the wall painting images.
S2, capturing similar characteristic points.
S2.1, inputting the data of the overlapping areas of the two selected wall painting image blocks into a pre-trained convolutional neural network, and outputting an embedded vector to each pixel point of the overlapping areas of the two wall painting image blocks by the pre-trained convolutional neural network.
The invention chooses to use a pre-trained convolutional neural network to capture the similar feature points of the overlapping areas of the two wall painting image blocks, so as to increase program execution efficiency and improve capture quality. The pre-trained convolutional neural network comprises a convolutional layer conv180, a convolutional layer conv90, a convolutional layer conv64, a full-connection layer fc2048, a full-connection layer fc1024, a full-connection layer fc512 and a classification layer, which are sequentially connected from input to output. The activation functions all adopt the Leaky ReLU, with a negative slope of 0.1.
The classification layer at the end is used for a pre-training classification task on simple pictures, where the simple pictures contain straight lines, oblique lines, T-shaped intersection points, cross intersection points and the like; since the classification layer finally outputs a probability value for each class, the pre-trained convolutional neural network performs classification training by capturing the line features of the simple pictures. In this embodiment, the optimizer used for classification training is stochastic gradient descent (SGD), the momentum is set to 0.9, and the learning rate is 0.05. Meanwhile, the callback ReduceLROnPlateau from the learning-rate adjustment library lr_schedule is used to dynamically adjust the learning rate during training, with an adjustment factor of 0.3 and a relative threshold of 0.001; the adjustment is triggered when the monitored loss has not decreased beyond the threshold for 4 training rounds (4 epochs). Let the real class label in the classification training be y = (y_0, y_1, ..., y_n) and the class probability distribution predicted by the model be p = (p_0, p_1, ..., p_n). The loss function L in classification training adopts the cross-entropy loss, expressed as follows:

L = - sum_{i=0}^{n} y_i log(p_i)
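A small numeric check of the cross-entropy loss L = - sum_i y_i log(p_i), using a hypothetical one-hot label and predicted distribution (the 4-class size is illustrative, not from the patent):

```python
import numpy as np

# One-hot true label y and a softmax-style predicted distribution p
# for a toy 4-class line-pattern task.
y = np.array([0.0, 1.0, 0.0, 0.0])
p = np.array([0.1, 0.7, 0.1, 0.1])

# Cross-entropy loss L = -sum_i y_i * log(p_i).
L = -np.sum(y * np.log(p))

# With a one-hot label this reduces to -log(probability of the true class).
assert np.isclose(L, -np.log(0.7))
```

The loss shrinks toward 0 as the predicted probability of the true class approaches 1, which is what drives the pre-training.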
After the convolutional neural network is pre-trained, the above line features are stored in the network, that is, the network has the capability of extracting similar line features; the feature information of an image is embedded in the vector output by the fc512 layer, so this intermediate-layer vector is called the embedded vector, and it is later used to capture similar feature points.
When the similar characteristic points are captured, the terminal classification layer is abandoned, the images input into the pre-trained convolutional neural network sequentially pass through a series of convolutional layers for characteristic extraction and downsampling, then sequentially pass through a series of full-connection layers for learning global characteristics and classification, and finally the embedded vectors output by the full-connection layers are used as information carriers. Compared with probability information output by the classification layer, the embedded vector output by the full-connection layer has more information quantity, and can provide finer information difference in similar characteristic points. As shown in fig. 5, in this embodiment, the image input to the pre-trained convolutional neural network sequentially passes through the convolutional layers conv180, conv90 and conv64, then sequentially passes through the full-link layers fc2048, fc1024 and fc512, and finally uses the embedded vector output by fc512 as the information carrier.
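The role of the fc512 embedded vector can be illustrated with a toy NumPy sketch. The random weights, the input width of 64 (standing in for the flattened conv64 feature map) and the 4-class head are placeholders, not the trained network; only the full-connection widths (2048, 1024, 512) and the Leaky ReLU slope of 0.1 follow the description:

```python
import numpy as np

def leaky_relu(x, negative_slope=0.1):
    # Activation used throughout the network per the description (slope 0.1).
    return np.where(x >= 0, x, negative_slope * x)

rng = np.random.default_rng(1)

# Toy stand-ins for the full-connection part: fc2048 -> fc1024 -> fc512.
W1, W2, W3 = (rng.standard_normal(s) * 0.05
              for s in [(64, 2048), (2048, 1024), (1024, 512)])
# Classification head; defined only to mirror pre-training, and
# discarded when capturing similar feature points.
W_cls = rng.standard_normal((512, 4)) * 0.05

def embed(features):
    # At capture time the classification layer is dropped and the
    # fc512 output is taken as the pixel's embedded vector.
    h = leaky_relu(features @ W1)
    h = leaky_relu(h @ W2)
    return leaky_relu(h @ W3)

x = rng.standard_normal(64)   # hypothetical conv features for one pixel
v = embed(x)
assert v.shape == (512,)
```

The point of the sketch is structural: the information carrier is the 512-dimensional intermediate output, not the class probabilities.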
S2.2, the Euclidean distance between two embedded vectors is used as the index for determining similar feature points; therefore, the Euclidean distance between the embedded vectors of all pixel points in the overlapping area of one wall painting image block and the embedded vectors of all pixel points in the overlapping area of the other wall painting image block is first calculated:

d(V_i, U_j) = ||V_i - U_j||_2 = sqrt( sum_k (V_{i,k} - U_{j,k})^2 )

In the above formula, V_i is the embedded vector corresponding to the i-th pixel point in the overlapping area of one wall painting image block, and U_j is the embedded vector corresponding to the j-th pixel point in the overlapping area of the other wall painting image block; i and j are integers greater than or equal to 1.
All the calculated Euclidean distances are then stored in order; in this embodiment, they are stored in the storage list in order from small to large.
S2.3, the pixel points in the overlapping areas of the two wall painting image blocks corresponding to the smallest Euclidean distance in the current storage list are taken as a group of similar feature points. All Euclidean distances corresponding to either similar feature point of this group are then deleted from the storage list to obtain a new storage list.
S2.4, taking the new storage list as the current storage list, and returning to the step S2.3 to obtain a new group of similar feature points until the number of the groups of the selected similar feature points reaches a set threshold. In this embodiment, the set threshold is 10% of the number of all pixels in the overlapping area of the wall painting image block, and in other embodiments of the present invention, the set threshold may be fine-tuned up and down according to specific needs.
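Steps S2.2-S2.4 amount to a greedy matching loop. The following NumPy sketch (illustrative, not the patent's program) replaces deletion from the storage list by overwriting the corresponding distances with infinity, which has the same effect:

```python
import numpy as np

def capture_similar_points(V, U, threshold):
    """Greedy pairing: repeatedly take the pair of pixels with the
    smallest embedded-vector distance, then drop every stored distance
    involving either chosen pixel, until `threshold` pairs are found."""
    d = np.linalg.norm(V[:, None, :] - U[None, :, :], axis=-1)
    pairs = []
    while len(pairs) < threshold:
        i, j = np.unravel_index(np.argmin(d), d.shape)
        pairs.append((int(i), int(j)))
        # "Delete" every distance that involves pixel i or pixel j.
        d[i, :] = np.inf
        d[:, j] = np.inf
    return pairs

rng = np.random.default_rng(2)
V = rng.standard_normal((10, 8))
U = V[[3, 0, 7]] + 0.01 * rng.standard_normal((3, 8))  # near-copies of rows 3, 0, 7
pairs = capture_similar_points(V, U, threshold=3)
# Each near-copy is matched back to the row it was derived from.
assert sorted(pairs) == [(0, 1), (3, 0), (7, 2)]
```

Masking with infinity keeps the remaining distances addressable by index, which is convenient when the "list" is really an m x n matrix.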
Because the similar feature points captured by the convolutional neural network are not necessarily accurate, the accuracy of the captured similar feature points is improved through manual confirmation. Specifically, several groups of similar feature points with higher similarity captured by the convolutional neural network are marked, and then two groups or more than two groups of similar feature points are manually clicked and confirmed by using a mouse. As shown in fig. 6, the round points are similar feature points selected by the manual confirmation, and the rest of the unselected points are dissimilar feature points or redundant similar feature points.
S3, image registration.
Since the distance between the photographer and the wall painting may vary when two wall painting image blocks are photographed, one of the image blocks needs to be scaled: when one image block is taken as the reference, the other image block must be scaled. The scaling ratio can be deduced from the distances between corresponding similar feature points on the two image blocks. Define the width direction of the mural image block as the x-axis and the height direction as the y-axis; denote the coordinates of two similar feature points on the reference mural image block as (x1, y1) and (x2, y2), and the coordinates of the two corresponding similar feature points on the mural image block to be scaled as (x1', y1') and (x2', y2'). The scaling multiple k_x of the image block to be scaled relative to the reference image block in the x-axis direction and the scaling multiple k_y in the y-axis direction are:

k_x = (x2 - x1) / (x2' - x1')
k_y = (y2 - y1) / (y2' - y1')

That is, the mural image block to be scaled needs to be scaled by k_x times along the x-axis and k_y times along the y-axis so that it coincides with the size of the reference mural image block.
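A worked example of the two scaling multiples, k_x = (x2 - x1) / (x2' - x1') and k_y = (y2 - y1) / (y2' - y1'), with hypothetical point coordinates (primed names stand for the points on the block to be scaled):

```python
# Hypothetical coordinates of two matched similar feature points:
# (x1, y1), (x2, y2) on the reference block, and the corresponding
# (x1p, y1p), (x2p, y2p) on the block to be scaled.
x1, y1 = 100.0, 40.0
x2, y2 = 400.0, 220.0
x1p, y1p = 80.0, 30.0
x2p, y2p = 320.0, 174.0

# Scaling multiples along each axis (step S3).
k_x = (x2 - x1) / (x2p - x1p)   # 300 / 240
k_y = (y2 - y1) / (y2p - y1p)   # 180 / 144

assert k_x == 1.25
assert k_y == 1.25
```

Here the block to be scaled was shot from farther away (its feature points sit closer together), so both multiples exceed 1 and the block is enlarged to match the reference.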
After the scaling is finished, the relative positions of the two wall painting image blocks can be determined when the two wall painting image blocks are spliced because similar feature points of the two wall painting image blocks are found, so that the image registration is finished.
S4, image fusion.
As shown in fig. 7, similar feature points in two wall painting image blocks are used as reference points in image registration, and image fusion is performed to complete the splicing of the two wall painting image blocks. And in the image fusion process, the brightness values at the two sides of the interface of the two wall painting image blocks are averaged, so that the condition that the brightness at the two sides of the interface of the two wall painting image blocks is uneven is avoided. Preferably, the embodiment further performs color temperature calibration on the wall painting image after the image fusion, so as to improve the effect of the wall painting image.
And S5, judging whether all the mural image blocks are spliced, if so, executing the step S7, and if not, executing the step S6.
S6, taking the image spliced in step S4 as one of the wall painting image blocks, selecting a wall painting image block adjacent to it as the other, selecting the overlapping area of the two blocks, and returning to step S2; the currently spliced image is spliced with its adjacent block in this way until all wall painting image blocks have been spliced, completing the splicing work.
S7, highlight restoration.
S7.1, selecting a highlight region and screening highlight pixels.
Since some highlight regions remain in the stitched image, all highlight regions in the stitched wall painting image are selected in order to eliminate these highlights, as shown in fig. 8. A slightly larger region should be selected during the selection process so that all highlight areas are included. In this embodiment, a highlight region is selected by dragging the mouse from its upper-left pixel to its lower-right pixel.
The high-brightness pixels in all the highlight regions are then calculated; specifically, the high-brightness pixels are the brightest 15%-30% of the pixels in the highlight regions, preferably the brightest 20%, where the brightness of a pixel is computed by adding the values of its three channels (red, green, blue). As shown in fig. 9, these high-brightness pixels are marked and displayed; after they are displayed, an interactive window is provided so that an operator can judge whether the highlight portions to be removed are covered and then manually adjust the fraction of selected pixels, so that all high-brightness pixels are selected and all pixels to be adjusted can be adjusted with the greatest accuracy. Note that fig. 9 shows the high-brightness pixels of the whole image, but only those inside the selected highlight regions are stored.
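The screening step above can be sketched as follows, assuming an RGB region as a numpy array; `fraction` corresponds to the manually adjustable ratio of selected pixels, with the patent's preferred 20% as the default.

```python
import numpy as np

def highlight_mask(region, fraction=0.20):
    """Mark the brightest `fraction` of pixels in a selected highlight
    region.  Brightness is the sum of the three colour channels, and
    20% is the preferred fraction in the text (adjustable 15%-30%)."""
    brightness = region.sum(axis=2)  # R + G + B per pixel
    thresh = np.percentile(brightness, 100 * (1 - fraction))
    return brightness >= thresh
```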
S7.2, obtaining a background vector.
A portion of the background that is representative of the whole background is selected from the wall painting image, the pixel mean of the selected background is computed, and this mean is used as the background vector for neutralizing the highlights so that the wall painting image looks natural.
S7.3, eliminating the highlight.
The background vector obtained in step S7.2 is fused into the high-brightness pixels calculated in step S7.1 to eliminate the highlights, by the following expression:

X' = (1 - β)·X + β·B

wherein X' is the fused pixel, X is the original pixel, B is the background vector, and β is the fusion coefficient, which can be adjusted manually.
After the highlights are preliminarily eliminated, the result is displayed and an interactive window is provided so that the fusion coefficient β can be adjusted manually according to the displayed effect to achieve the most natural visual result. The fusion coefficient β ranges from 0 to 1 and defaults to 0.5. The effects before and after highlight restoration are compared in fig. 10.
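Steps S7.2 and S7.3 can be sketched together as below. The linear-blend form of the fusion is our reading of the patent's expression, whose typeset formula is not present in this text; function names and array shapes are illustrative.

```python
import numpy as np

def background_vector(patch):
    """S7.2: the background vector B is the per-channel pixel mean of a
    selected, representative background patch."""
    return patch.mean(axis=(0, 1))

def remove_highlight(region, mask, bg, beta=0.5):
    """S7.3: blend B into the high-brightness pixels, read here as the
    linear blend X' = (1 - beta)*X + beta*B, beta in [0, 1], default 0.5."""
    out = region.astype(np.float64).copy()
    out[mask] = (1 - beta) * out[mask] + beta * bg
    return out
```

In an interactive session, beta would be re-adjusted and the result re-displayed until the highlight removal looks natural, as the text describes.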
The plurality of wall painting image blocks of the large wall painting image photographed in this embodiment are shown in fig. 11, where (a) shows the six blocks of the first part (upper left), (b) the five blocks of the second part (upper right), and (c) the three blocks of the third part (lower). According to the splicing restoration method of the present invention, the blocks photographed in this embodiment were spliced and restored part by part, yielding the large wall painting image after splicing restoration shown in fig. 12. The experiment was carried out with PyCharm on a Windows operating system.
Example two
As shown in fig. 13, the plurality of wall painting image blocks of the large wall painting image photographed in embodiment two differ from embodiment one in that the blocks of this embodiment are adjacent only in the left-right direction. Specifically, in fig. 13, (a) shows the seven blocks of the first part (left) and (b) the seven blocks of the second part (right). According to the splicing restoration method of the present invention, the blocks photographed in this embodiment were spliced and restored, yielding the large wall painting image after splicing restoration shown in fig. 14.
The above is only a preferred embodiment of the present invention, and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. An interactive splicing and repairing method for a large-scale mural image is characterized by comprising the following steps of:
s1, determining the overlapping area of two adjacent mural image blocks;
Selecting any two adjacent wall painting image blocks from a plurality of photographed wall painting image blocks, splicing, and selecting the overlapping areas of the two wall painting image blocks;
S2, capturing similar feature points;
S2.1, inputting data of overlapping areas of two wall painting image blocks into a pre-trained convolutional neural network, and outputting an embedded vector to each pixel point of the overlapping areas of the two wall painting image blocks by the pre-trained convolutional neural network; the pretrained convolutional neural network comprises a convolutional layer conv180, a convolutional layer conv90, a convolutional layer conv64, a full-connection layer fc2048, a full-connection layer fc1024, a full-connection layer fc512 and a classification layer which are sequentially connected according to input and output;
s2.2, calculating Euclidean distances between embedded vectors of all pixel points in a wall painting image block overlapping area and embedded vectors of all pixel points in another wall painting image block overlapping area, and storing the calculated Euclidean distances to obtain a storage list;
s2.3, taking pixel points in the overlapping areas of the two wall painting image blocks corresponding to the minimum Euclidean distance in the current storage list as a group of similar characteristic points; deleting all Euclidean distances corresponding to any similar feature point in the group of similar feature points from the storage list to obtain a new storage list;
S2.4, taking the new storage list as the current storage list, and returning to the step S2.3 to obtain a new group of similar feature points until the number of the selected groups of similar feature points reaches a set threshold;
S3, registering images;
determining the relative positions of two fresco image blocks when the two fresco image blocks are spliced according to the multiple groups of similar characteristic points obtained in the step S2.4 so as to finish image registration;
s4, image fusion;
taking similar characteristic points of two wall painting image blocks as reference points during image registration, and performing image fusion to finish the splicing of the two wall painting image blocks;
S5, judging whether all the mural image blocks are spliced, if yes, executing a step S7, and if not, executing a step S6;
S6, taking the spliced image in the step S4 as one of the wall painting image blocks, selecting one wall painting image block adjacent to the spliced image block, selecting the overlapping area of the two wall painting image blocks, and returning to the step S2 until all the wall painting image blocks are spliced;
s7, performing highlight restoration on the spliced wall painting image to finish interactive splicing restoration of the large-scale wall painting.
2. The method for interactive mosaic restoration of a large-format wall painting image according to claim 1, wherein:
in step S2.2, the Euclidean distance d(i, j) between the embedded vector of the i-th pixel point in the overlapping area of one wall painting image block and the embedded vector of the j-th pixel point in the overlapping area of the other wall painting image block is calculated as:

d(i, j) = ||Vi - Vj||

wherein Vi is the embedded vector corresponding to the i-th pixel point in the overlapping area of one wall painting image block, Vj is the embedded vector corresponding to the j-th pixel point in the overlapping area of the other wall painting image block, and i and j are integers greater than or equal to 1.
3. The method for interactive mosaic restoration of a large-scale mural image according to claim 2, wherein step S3 specifically comprises:
Scaling one of the wall painting image blocks with the other as reference, and determining the relative position of the two wall painting image blocks during splicing from the plurality of groups of similar feature points obtained in step S2.4, so as to complete the image registration.
4. An interactive mosaic repair method for large-format wall painting images according to claim 3, wherein:
In step S2.4, the threshold is set to 10% of the number of all pixels in the overlapping area of the wall painting image block.
5. The method for interactive mosaic restoration of large-scale mural images according to any one of claims 1 to 4, wherein step S7 specifically comprises:
S7.1, selecting all highlight areas in the spliced wall painting image, and calculating the high-brightness pixels in all highlight areas; the high-brightness pixels are the brightest 15%-30% of the pixels in the highlight areas;
S7.2, selecting a part of background images which can represent the whole background image from the wall painting image, solving the pixel mean value of the selected background image, and taking the pixel mean value as a background vector;
S7.3, fusing the obtained background vector into the high-brightness pixels calculated in step S7.1 to eliminate the highlights, with the specific expression:

X' = (1 - β)·X + β·B

wherein X' is the fused pixel, X is the original pixel, B is the background vector, and β is the fusion coefficient.
6. The method for interactive mosaic restoration of a large-format wall painting image according to claim 5, wherein:
after step S7.1, step a is further included before step S7.2:
Marking out the calculated high-brightness pixels, and manually judging whether a highlight part to be removed is covered or not; if yes, executing the step S7.2, otherwise, returning to the step S7.1 to reselect all highlight areas in the spliced mural image;
In step S7.1, the high-brightness pixel is the 20% pixel with the highest brightness in the high-brightness area.
7. The method for interactive mosaic restoration of a large-format wall painting image according to claim 6, wherein:
in step S1, when the overlapping areas in the two wall painting image blocks are selected, the overlapping area selected in each wall painting image block should cover the actual overlapping area.
8. The method for interactive mosaic restoration of a large-format wall painting image according to claim 7, wherein step S2.4 further comprises:
And marking the obtained multiple groups of similar characteristic points, manually confirming all the marked similar characteristic points, and screening out accurate similar characteristic points.
9. The method for interactive mosaic restoration of a large-format wall painting image according to claim 8, wherein step S7.3 further comprises:
Adjusting the fusion coefficient β according to the display effect after highlight elimination until the display effect after highlight elimination meets the requirements; the fusion coefficient β has a value range of 0 to 1.
10. The method for interactive mosaic restoration of a large-format wall painting image according to claim 9, wherein:
The fusion coefficient β is 0.5.
CN202410318399.1A 2024-03-20 2024-03-20 Interactive splicing and repairing method for large-amplitude wall painting images Active CN117911287B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410318399.1A CN117911287B (en) 2024-03-20 2024-03-20 Interactive splicing and repairing method for large-amplitude wall painting images


Publications (2)

Publication Number Publication Date
CN117911287A true CN117911287A (en) 2024-04-19
CN117911287B CN117911287B (en) 2024-08-02

Family

ID=90692603

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410318399.1A Active CN117911287B (en) 2024-03-20 2024-03-20 Interactive splicing and repairing method for large-amplitude wall painting images

Country Status (1)

Country Link
CN (1) CN117911287B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101350101A (en) * 2008-09-09 2009-01-21 北京航空航天大学 Method for auto-registration of multi-amplitude deepness image
CN101556692A (en) * 2008-04-09 2009-10-14 西安盛泽电子有限公司 Image mosaic method based on neighborhood Zernike pseudo-matrix of characteristic points
CN107220955A (en) * 2017-04-24 2017-09-29 东北大学 A kind of brightness of image equalization methods based on overlapping region characteristic point pair
CN107704856A (en) * 2017-09-28 2018-02-16 杭州电子科技大学 Ice core optical characteristics image acquisition and processing method
CN110660023A (en) * 2019-09-12 2020-01-07 中国测绘科学研究院 Video stitching method based on image semantic segmentation
US20210174471A1 (en) * 2018-08-29 2021-06-10 Shanghai Sensetime Intelligent Technology Co., Ltd. Image Stitching Method, Electronic Apparatus, and Storage Medium
CN113313002A (en) * 2021-05-24 2021-08-27 清华大学 Multi-mode remote sensing image feature extraction method based on neural network
CN116244464A (en) * 2023-03-10 2023-06-09 重庆邮电大学 Hand-drawing image real-time retrieval method based on multi-mode data fusion
CN117670664A (en) * 2022-08-16 2024-03-08 武汉联影智融医疗科技有限公司 Image stitching method and device based on feature points, electronic device and storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
杨文宗; 唐兴佳; 张朋昌; 胡炳樑; 金紫琳: "Research on a virtual color restoration method for tomb mural pigments based on fused spectral analysis", Sciences of Conservation and Archaeology (文物保护与考古科学), 15 August 2023 (2023-08-15), pages 1-13 *
杨金龙: "Research on image registration algorithms based on the SIFT algorithm", Master's thesis (优秀硕士论文), 1 October 2013 (2013-10-01) *

Also Published As

Publication number Publication date
CN117911287B (en) 2024-08-02

Similar Documents

Publication Publication Date Title
CN101394573B (en) Panoramagram generation method and system based on characteristic matching
CN110503688A (en) A kind of position and orientation estimation method for depth camera
CN107680053A (en) A kind of fuzzy core Optimized Iterative initial value method of estimation based on deep learning classification
CN107516319A (en) A kind of high accuracy simple interactive stingy drawing method, storage device and terminal
CN107833186A (en) A kind of simple lens spatial variations image recovery method based on Encoder Decoder deep learning models
CN105868797A (en) Network parameter training method, scene type identification method and devices
CN105023260A (en) Panorama image fusion method and fusion apparatus
CN111368637B (en) Transfer robot target identification method based on multi-mask convolutional neural network
CN110176042A (en) Training method, device and the storage medium of camera self moving parameter estimation model
CN115147488B (en) Workpiece pose estimation method and grabbing system based on dense prediction
CN105488777A (en) System and method for generating panoramic picture in real time based on moving foreground
CN107730469A (en) A kind of three unzoned lens image recovery methods based on convolutional neural networks CNN
CN115115522A (en) Goods shelf commodity image splicing method and system
CN117173096A (en) Door surface defect detection method based on improved YOLOv8 network
CN107767357A (en) A kind of depth image super-resolution method based on multi-direction dictionary
CN115456870A (en) Multi-image splicing method based on external parameter estimation
CN117911287B (en) Interactive splicing and repairing method for large-amplitude wall painting images
CN107330856B (en) Panoramic imaging method based on projective transformation and thin plate spline
CN107203984A (en) Correction system is merged in projection for third party software
CN115578260A (en) Attention method and system for direction decoupling for image super-resolution
CN111260561A (en) Rapid multi-graph splicing method for mask defect detection
CN111047513A (en) Robust image alignment method and device for cylindrical panoramic stitching
CN109961393A (en) Subpixel registration and splicing based on interpolation and iteration optimization algorithms
CN115713678A (en) Arrow picture data augmentation method and system, electronic device and storage medium
CN115965529A (en) Image stitching method based on unsupervised learning and confrontation generation network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant