CN108537782B - Building image matching and fusing method based on contour extraction - Google Patents
- Publication number
- CN108537782B (application CN201810280577.0A)
- Authority
- CN
- China
- Prior art keywords
- photo
- historical
- image
- matching
- straight lines
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS > G06—COMPUTING; CALCULATING OR COUNTING > G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis; G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/10—Segmentation; Edge detection; G06T7/13—Edge detection
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration; G06T7/33—Image registration using feature-based methods
- G06T7/90—Determination of colour characteristics
- G06T2207/00—Indexing scheme for image analysis or image enhancement; G06T2207/10004—Still image; Photographic image
- G06T2207/20221—Image fusion; Image merging
- G06T2207/30181—Earth observation; G06T2207/30184—Infrastructure
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Quality & Reliability (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a building image matching and fusion method based on contour extraction, comprising the following steps: preprocess the historical photo; extract contours from the preview photo and the preprocessed historical photo to obtain a contour map for each; extract straight lines from the two contour maps and, using a line matching algorithm, pair the lines of the historical photo and the preview photo according to their line features to obtain an optimal matching pair set; compute the angle between every two lines in the optimal matching pair set to obtain two angle matrices, and compute the similarity of the angle matrices to obtain the similarity of the historical photo and the preview photo; perform image fusion on the preview photo and the historical photo so that the similar photo and the historical photo are displayed in a single picture. The method matches a historical building against the current preview picture in real time and judges the matching degree of the two building images, making the comparison of building photos more accurate and rapid.
Description
Technical Field
The invention relates to the field of image processing, in particular to a building image matching and fusing method based on contour extraction.
Background
With the rapid development of cities, buildings and their surrounding scenes change greatly over time and space, and protecting historical urban buildings has become a prominent social issue. In particular, it is difficult to compare and connect a building's current state with its historical state, and this is a main obstacle to public participation in historical building protection. With the development of computer vision and mobile computing, the ability of mobile devices to analyze and understand images has been greatly enhanced, so building novel image processing applications on mobile devices to promote user participation is a feasible solution.
However, for a city building there exist both historical photographs from different eras and new photographs taken by users. Although a new photo and a historical photo may be taken of the same building from similar angles, the surrounding background changes greatly across eras, so the historical photo and the new photo differ in many features, including color, texture, foreground and background.
Existing image similarity algorithms such as color histograms and perceptual hashing consider only the global features of an image and do not extract the straight-line features of buildings, so there is room for improvement in computing the image similarity between historical and new photos of buildings.
Disclosure of Invention
In view of the above-mentioned drawbacks or shortcomings, an object of the present invention is to provide a method for matching and fusing building images based on contour extraction, which can fuse a historical building picture with a newly-taken picture of the same building.
In order to achieve the above purpose, the technical scheme of the invention is as follows:
a method for matching and fusing building images based on contour extraction comprises the following steps:
1) acquiring a historical photo of the building, and preprocessing the historical photo;
2) extracting outlines of the preview photo and the preprocessed historical photo to obtain an outline image of the historical photo and the preview photo;
3) using the LSD line extraction algorithm, extracting straight lines from the contour maps of the historical photo and the preview photo respectively, and using a line matching algorithm to pair the lines of the historical and preview photos according to their length, slope and position features, obtaining an optimal matching pair set;
4) computing the angle between every two lines in the optimal matching pair set to obtain two angle matrices, computing the similarity of the angle matrices to obtain the similarity of the historical photo and the preview photo, and using this similarity to assist in shooting or comparing photos of the building, obtaining a similar photo with high similarity;
5) performing image fusion on the similar photo and the historical photo so that both are displayed in one photo at the same time.
Preprocessing the historical photo comprises the following steps: adjusting the size ratio of the historical photo, converting it to a gray-scale image, and finally applying filtering and smoothing to the gray-scale image.
The step 2) specifically comprises the following steps:
2.1, marking the historical photo as F and the preview photo as G;
2.2, respectively carrying out edge detection on the historical photo F and the preview photo G by using an edge detection algorithm to obtain a historical photo contour map F 'and a preview photo contour map G'.
The step 3) specifically comprises the following steps:
3.1, extracting the straight lines in the historical photo contour map F' and the preview photo contour map G' using the LSD line extraction algorithm, and storing them in the historical photo line set L_A and the preview photo line set L_B respectively;
3.2, using a greedy algorithm to match the lines in the historical photo line set L_A with the lines in the preview photo line set L_B according to their geometric features, obtaining an optimal matching pair set S; the geometric features include the slope, length and position of a line.
After step 3.1, the method further includes aggregating the lines in the historical photo line set L_A and the preview photo line set L_B to reduce edge repetition.
The step 3.2 specifically comprises:
A. in turn, scan each line of the historical photo line set L_A against the lines of the preview photo line set L_B, find all feasible solutions, and judge via thresholds whether a line pair matches;
B. for the solutions satisfying the match, calculate the difference between the two lines and find the solution diff with the minimum difference:
in the above formula, l_1 and l_2 respectively denote the lengths of the lines in the historical photo line set L_A and the preview photo line set L_B; k_1 and k_2 respectively denote their slopes;
C. according to the solution diff with the minimum difference, select the line pair with the minimum value for pairing.
The specific steps of judging via thresholds whether a line pair matches are: calculate adaptive thresholds T_a and T_b for the lines in the historical photo line set L_A and the preview photo line set L_B, where T_a and T_b respectively denote a slope threshold and a length threshold; when two lines with similar relative positions simultaneously satisfy the adaptive thresholds T_a and T_b, the two lines have approximately equal slope and length, and are defined to satisfy the matching solution.
The step 4 specifically includes:
4.1, calculating the angle between every two lines in the optimal matching pair set S to obtain two angle matrices A and B, where each entry of A and B represents the angle between two lines, stored as an upper triangular matrix;
4.2, calculating the similarity r of the angle matrices A and B:
where m and n respectively denote the numbers of rows and columns of the angle matrices A and B; Ā denotes the mean of matrix A and B̄ the mean of matrix B; the weighting factor is the ratio of the numbers of lines extracted from the two images.
The step 5) specifically comprises the following steps:
5.1, image preprocessing:
reading a scaling rotation coefficient matrix stored in the contour extraction stage, transforming the historical picture according to the coefficient matrix, and filling a blank left by scaling by using a transparent pixel method during transformation to finally enable the scene before fusion to be consistent with the scene during shooting;
5.2, creating a mask matrix:
creating a mask matrix M, wherein the size of the mask matrix M is the same as that of the similar photo, the pixel value corresponding to each point on the mask matrix M is a numerical value from 0 to 255, the black color is 0, and the white color is 255;
5.3, gray-scale weighting:
taking the mask matrix as a template, taking the transparency of the pixel points in the mask matrix as a weight of each pixel point, and taking the weight as a reference to perform weighted average on the transparencies of the historical photos and the similar photos to obtain a new matrix:
H(i,j) = w_f · F(i,j) + w_g · G(i,j)
w_f + w_g = 1
F denotes the historical photo image, G the similar photo image, and H the synthesized image; i and j index the pixel at row i, column j of the image matrix; w_f and w_g are weighting coefficients, where w_f is obtained from M by bit operations and w_f + w_g = 1;
5.4, generating an image:
and converting the new matrix into an image and displaying the image on a client interface.
Step 5.5 is also included after step 5.4:
performing an overall feature transformation on the generated image, including modification of the image's color and style, to obtain the final fused image.
Compared with the prior art, the invention has the beneficial effects that:
the invention provides a building image matching and fusing method based on contour extraction, which is used for properly improving contour extraction and matching and image fusion algorithms and optimizing the extraction effect of detail edges; the historical building can be matched with the existing preview picture in real time, and the matching degree of the two building images is judged, so that the comparison of the building pictures is more accurate and rapid; and the volume photos are fused through a fusion method, the current building scene is matched with the historical photos in real time, the building change details are compared and displayed, and a fusion image containing a new scene and an old scene is synthesized, so that the use interest of a user is promoted, the viscosity of the user is increased, and the user is attracted to participate in urban building protection.
Drawings
FIG. 1 is a flow chart of the building image matching and fusion method based on contour extraction according to the present invention;
FIG. 2 is a schematic diagram of the 4-direction Scharr operator for contour extraction based building image matching and fusion according to the present invention;
FIG. 3 is a schematic diagram of building image matching and fusion based on contour extraction according to the present invention.
Detailed Description
The present invention will now be described in detail with reference to the drawings, wherein the described embodiments are only some, but not all embodiments of the invention. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, belong to the scope of the present invention.
As shown in FIG. 1, the invention provides a building image matching and fusing method based on contour extraction, which comprises the following steps:
1) acquiring a historical photo of the building, and preprocessing the historical photo;
the preprocessing the historical photos comprises the following steps: and adjusting the size proportion of the historical photos, converting the historical photos into a gray-scale image, and finally, carrying out filtering smoothing treatment on the gray-scale image.
2) Extracting outlines of the preview photo and the preprocessed historical photo to obtain an outline image of the historical photo and the preview photo;
it should be noted that the preview photo includes a photo previewed in the camera as well as a newly taken photo of the building.
The step 2) specifically comprises the following steps:
2.1, marking the historical photo as F and the preview photo as G;
2.2, respectively carrying out edge detection on the historical photo F and the preview photo G by using an edge detection algorithm to obtain a historical photo contour map F 'and a preview photo contour map G'.
The method aims to extract the long straight lines of a building's outline. On the basis of the Canny edge detection algorithm, it replaces the Sobel operator with a Scharr operator in 4 directions and adds a local threshold adjustment function, improving the existing edge detection algorithm so that fine edges can be detected, which benefits the line detection algorithm in the next step.
The traditional Canny edge detection algorithm computes the gradient with a 3 x 3 Sobel operator by default; the computation is simple and fast, but the edge localization precision is low. The Sobel operator is sensitive to gradient changes only in the horizontal and vertical directions and insensitive to changes in other directions. In addition, the convolution kernel weights of the Sobel operator are small, so it is easily affected by noise, reducing edge detection precision. To detect a complete, continuous building outline, the invention replaces the Sobel operator with the Scharr operator, which is as fast as the Sobel operator but more precise. On the basis of the traditional Scharr operator, the 45-degree and 135-degree directions are added and gradients are computed in 4 directions, as shown in FIG. 3, maintaining sensitivity to edges in multiple directions rather than only the horizontal and vertical ones, and improving the accuracy of edge localization.
The improved image gradient calculation formula is defined as:
G = (|G_45| + |G_135|)/2 + |G_x| + |G_y|
The convolution operator is G(x, y) (0 < i < N, 0 < j < N), where N is the order of G(x, y). This way of computing the image gradient takes the diagonal directions of the pixels into account and introduces them into the differential calculation, improving the accuracy of edge localization, which is very helpful for extracting fine edges.
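A minimal NumPy sketch of the 4-direction gradient: the horizontal and vertical kernels are the standard Scharr operators, while the 45/135-degree kernels are assumed rotations (the patent's figure is not reproduced here), and the combination of the four magnitudes is one plausible reading of the printed formula:

```python
import numpy as np

def conv2_same(img, k):
    """3x3 'same' convolution (cross-correlation form) with edge padding."""
    p = np.pad(img.astype(float), 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

# Standard Scharr kernels plus assumed 45/135-degree rotations.
KX = np.array([[-3, 0, 3], [-10, 0, 10], [-3, 0, 3]], float)
KY = KX.T
K45 = np.array([[0, 3, 10], [-3, 0, 3], [-10, -3, 0]], float)
K135 = np.array([[-10, -3, 0], [-3, 0, 3], [0, 3, 10]], float)

def gradient_magnitude(img):
    gx, gy = conv2_same(img, KX), conv2_same(img, KY)
    g45, g135 = conv2_same(img, K45), conv2_same(img, K135)
    # Combine the four directional responses; the exact weighting in the
    # patent's printed formula is garbled, so an averaged diagonal term
    # is assumed here.
    return (np.abs(g45) + np.abs(g135)) / 2 + np.abs(gx) + np.abs(gy)
```

All four kernels sum to zero, so flat regions yield zero response, as any gradient operator should.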
The traditional Canny edge detection algorithm sets its thresholds manually; the threshold cannot adapt, which reduces edge localization accuracy across different types of images. One improvement strategy is to use the Otsu algorithm to choose a global threshold automatically. The Otsu algorithm first computes the histogram of the image, divides it into intervals and counts the pixels in each. It then selects a pixel value t dividing foreground from background, and takes as threshold the t that maximizes the between-class variance of foreground and background. Although the Otsu algorithm can select a threshold automatically for different images, it has certain problems. First, it is computationally intensive, requiring a search over the whole histogram for each image. Second, although the resulting threshold maximally separates foreground from background, it does not change during the edge detection process: when the average gradient in some region of the image is low, the gradient of a true edge point there is also low, and a high threshold misses many detail edges; conversely, when the average gradient in a region is high, a low threshold easily causes false edge detections. The Otsu algorithm is therefore suited to simple images, not to images with complex background noise such as building images.
To solve these problems while balancing complexity and efficiency, the invention designs a strategy for automatically adjusting the threshold. The initial value of the low threshold is T_l. The core idea of the algorithm is to adjust the current threshold using the gradient value T_δ of a neighbourhood δ: when T_δ < T_l, the average gradient magnitude within δ is low, so T_l is decreased, increasing the probability that pixels with low gradient magnitude are detected as edges; when T_δ > T_l, the average gradient magnitude within δ is high, so T_l is increased, decreasing that probability. Since the target is detail edges, only the low threshold is changed; the high threshold is kept unchanged so the judgment of strong edges is unaffected. The strategy is summarized as follows:
where T_l' is the adjusted threshold; N is the width and height of δ, and N^2 the number of pixels within δ; gradient(i, j) is the gradient value at point (i, j), computed with the Scharr operator; p is a weight taking the value -1 or +1: p = -1 when T_δ <= T_l, and p = +1 when T_δ > T_l.
By the aid of the local threshold automatic adjustment strategy, the problems of missing detection and false detection of edge points caused by setting of a global threshold are effectively solved, noise is suppressed, and low-intensity edge details are protected.
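The local threshold adjustment can be sketched as below. The text fixes only the direction of the update (lower T_l when T_δ <= T_l, raise it otherwise); the half-distance step size used here is an assumption:

```python
import numpy as np

def adjust_low_threshold(grad, y, x, t_low, n=5):
    """Adapt the low threshold at pixel (y, x) from the mean gradient of
    an n-by-n neighbourhood δ.  The magnitude of the update step is an
    illustrative choice; the text specifies only its sign via p."""
    half = n // 2
    window = grad[max(0, y - half):y + half + 1,
                  max(0, x - half):x + half + 1]
    t_delta = window.mean()          # average gradient magnitude within δ
    p = -1 if t_delta <= t_low else +1
    # Move t_low toward the local mean: lower it in flat regions,
    # raise it in busy regions.
    return t_low + p * abs(t_delta - t_low) / 2
```

In a full detector this would be applied per pixel during hysteresis thresholding, leaving the high threshold fixed as the text requires.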
A very large gradient magnitude at a pixel does not by itself make that pixel an edge point. Because pixels on an image edge are usually local maxima of gradient magnitude among their neighbours, maximum detection on candidate edge points is a necessary step in deciding whether a point is an edge point. The Canny edge detection algorithm uses a greedy strategy to perform non-maximum suppression in the 8-neighbourhood of each pixel, as follows: with the current pixel as the origin, the first quadrant is divided into three regions [0, 22.5), [22.5, 67.5) and [67.5, 90], each representing a gradient direction. If the gradient direction of the current pixel is less than 22.5 degrees, all neighbouring pixels in the region [0, 22.5) are examined; if the gradient magnitude of the current pixel is the maximum among them, the pixel is kept, otherwise it is rejected. Likewise, if the gradient direction falls in one of the other two regions, the same computation applies.
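The quadrant-based non-maximum suppression can be sketched as follows; the specific neighbours compared in each direction bin are a standard reading of the scheme, not taken verbatim from the text:

```python
import numpy as np

def nms_keep(grad_mag, grad_dir_deg, y, x):
    """Non-maximum suppression in the 8-neighbourhood: keep pixel (y, x)
    only if its magnitude is the maximum along its gradient direction bin.
    Directions are folded into the first quadrant as described above."""
    d = abs(grad_dir_deg) % 180
    d = d if d <= 90 else 180 - d            # fold into [0, 90]
    if d < 22.5:                             # roughly horizontal gradient
        neighbours = [grad_mag[y, x - 1], grad_mag[y, x + 1]]
    elif d < 67.5:                           # diagonal bin
        neighbours = [grad_mag[y - 1, x - 1], grad_mag[y + 1, x + 1],
                      grad_mag[y - 1, x + 1], grad_mag[y + 1, x - 1]]
    else:                                    # roughly vertical gradient
        neighbours = [grad_mag[y - 1, x], grad_mag[y + 1, x]]
    return grad_mag[y, x] >= max(neighbours)
```

A full implementation would call this for every interior pixel and zero out the suppressed ones.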
3) Using the LSD line extraction algorithm, extract straight lines from the contour maps of the historical photo and the preview photo respectively, and use a line matching algorithm to pair the lines of the historical and preview photos according to their length, slope and position features, obtaining an optimal matching pair set;
3.1, extract the straight lines in the historical photo contour map F' and the preview photo contour map G' using the LSD line extraction algorithm, and store them in the historical photo line set L_A and the preview photo line set L_B respectively;
The lines in the historical photo line set L_A and the preview photo line set L_B are then each aggregated to reduce edge repetition.
The invention divides the aggregatable lines in a building image into two types: the merged type and the connected type. Both arise from the edge repetition problem of the edge detection stage, which is unavoidable in edge detection, and thus both types of lines are produced in the line extraction stage. Their common feature is that the slopes of the two lines are approximately equal while the spacing between them is very small. Whether two lines can be clustered is judged by setting a slope threshold and a spacing threshold.
Line aggregation proceeds according to the following three principles:
First, the lines to be aggregated must be short lines; longer lines are filtered out by a length threshold, and the remaining lines are candidates for aggregation.
Second, if two short lines have the same slope and the distance between them is relatively small (relative length here means the length of the line relative to the image), they are considered to belong to the same line and can be aggregated, keeping the longer of the two.
Third, if two lines have the same slope and are connected end to end, or separated by a gap within a certain range, they can be considered to belong to the same long line.
In summary, a line aggregation step is added after line extraction: short lines are first filtered by length, and then connected-type lines are joined so that short lines with the same characteristics are aggregated.
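The connected-type aggregation above can be sketched as follows: short, nearly collinear segments separated by a small endpoint gap are joined. The threshold values and the end-to-start gap test are illustrative assumptions:

```python
import numpy as np

def aggregate_lines(lines, slope_tol=0.05, gap_tol=10.0, short_len=40.0):
    """Merge short, nearly collinear segments (the 'connected' case).
    Each line is a tuple (x1, y1, x2, y2); thresholds are illustrative."""
    def length(l):
        return np.hypot(l[2] - l[0], l[3] - l[1])
    def slope(l):
        dx = l[2] - l[0]
        return (l[3] - l[1]) / dx if dx != 0 else float("inf")
    merged, used = [], [False] * len(lines)
    for i, a in enumerate(lines):
        if used[i]:
            continue
        best = a
        for j in range(i + 1, len(lines)):
            b = lines[j]
            if used[j] or length(b) > short_len:
                continue   # only short lines are aggregation candidates
            close = np.hypot(b[0] - best[2], b[1] - best[3]) <= gap_tol
            if close and abs(slope(best) - slope(b)) <= slope_tol:
                best = (best[0], best[1], b[2], b[3])  # join end to start
                used[j] = True
        merged.append(best)
    return merged
```

The merged-type case (parallel duplicates from edge repetition) would use the same slope/spacing tests but keep the longer segment instead of joining endpoints.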
3.2, use a greedy algorithm to match the lines in the historical photo line set L_A with the lines in the preview photo line set L_B according to their geometric features, obtaining an optimal matching pair set S; the geometric features include the slope, length and position of a line.
The step 3.2 specifically comprises:
A. in turn, scan each line of the historical photo line set L_A against the lines of the preview photo line set L_B, find all feasible solutions, and judge via thresholds whether a line pair matches; when two lines with similar positions simultaneously satisfy both thresholds, the two lines have approximately equal slope and length and are considered candidates for matching each other.
The specific steps of judging via thresholds whether a line pair matches are: calculate adaptive thresholds T_a and T_b for the lines in the historical photo line set L_A and the preview photo line set L_B, where T_a and T_b respectively denote a slope threshold and a length threshold; when two lines with similar relative positions simultaneously satisfy the adaptive thresholds T_a and T_b, the two lines have approximately equal slope and length, and are defined to satisfy the matching solution.
B. for the solutions satisfying the match, calculate the difference between the two lines and find the solution diff with the minimum difference:
in the above formula, l_1 and l_2 respectively denote the lengths of the lines in the historical photo line set L_A and the preview photo line set L_B; k_1 and k_2 respectively denote their slopes.
C. according to the solution diff with the minimum difference, select the line pair with the minimum value for pairing.
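The greedy matching of steps A to C can be sketched as below. Since the exact diff formula is not reproduced in the text, a plain sum of the absolute length and slope differences is assumed, and the threshold values are illustrative:

```python
import numpy as np

def match_lines(la, lb, t_slope=0.2, t_len=15.0):
    """Greedy matching between two photos' line sets: for each line in
    L_A, scan L_B for candidates within the slope/length thresholds and
    keep the candidate with the smallest difference.  Each line is
    (x1, y1, x2, y2); the diff formula is an assumed sum of gaps."""
    def feats(l):
        dx, dy = l[2] - l[0], l[3] - l[1]
        k = dy / dx if dx != 0 else 1e9      # large stand-in slope
        return np.hypot(dx, dy), k
    pairs = []
    for a in la:
        l1, k1 = feats(a)
        best, best_diff = None, None
        for b in lb:
            l2, k2 = feats(b)
            if abs(k1 - k2) <= t_slope and abs(l1 - l2) <= t_len:
                diff = abs(l1 - l2) + abs(k1 - k2)   # assumed diff
                if best_diff is None or diff < best_diff:
                    best, best_diff = b, diff
        if best is not None:
            pairs.append((a, best))
    return pairs
```

A positional test (comparing midpoints, say) could be added as a third filter; the text names position as a feature but gives no formula for it.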
4) Compute the angle between every two lines in the optimal matching pair set to obtain two angle matrices, compute the similarity of the angle matrices to obtain the similarity of the historical photo and the preview photo, and use this similarity to assist in shooting or comparing photos of the building, obtaining a similar photo with high similarity;
the step 4 specifically includes:
4.1, calculating an included angle between each straight line in the optimal matching team set S to obtain two included angle matrixes A and B, wherein each row and each column in the included angle matrixes A and B represent the included angle between the two straight lines and are represented by an upper triangular matrix;
4.2, calculating the similarity r of the included angle matrixes A and B:
wherein m and n respectively represent the row number and the column number of the included angle matrix A, B matrix;mean value of A matrix,The mean value of the B matrix is represented,representing a weighting factor, i.e. the ratio of the number of lines extracted for the two images.
The larger the value of r is, the higher the similarity of the matrixes A and B is, and the higher the matching degree between straight lines is. It should be noted that, for the denominator of the formula, if each element in the A, B matrix is equal, the term of the denominator is 0, so the above formula requires that each element in the A, B matrix cannot be equal. In addition, if the result of the calculation to the left of the multiplication is equal to 1, the explanation A, B is the same matrix and it is not necessary to multiply by the match rate any more.
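The similarity r can be sketched as a mean-centred normalised cross-correlation scaled by the line-count ratio w. The formula image is missing from the text, so this Pearson-style form is an assumption, chosen to be consistent with the stated properties (undefined when all elements are equal, equal to 1 for identical matrices):

```python
import numpy as np

def angle_similarity(A, B, w=1.0):
    """Similarity of two angle matrices: correlation of the entries about
    their means, scaled by the weighting factor w (the ratio of the line
    counts of the two images).  The exact formula is assumed, not quoted."""
    da, db = A - A.mean(), B - B.mean()
    denom = np.sqrt((da * da).sum() * (db * db).sum())
    if denom == 0:   # all elements equal: the formula is undefined
        return None
    return w * (da * db).sum() / denom
```

With w = 1 this returns 1.0 for identical (non-constant) matrices and degrades smoothly as the angle structures diverge.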
5) Perform image fusion on the similar photo and the historical photo so that both are displayed in one photo at the same time.
The step 5) specifically comprises the following steps:
5.1, image preprocessing:
reading a scaling rotation coefficient matrix stored in the contour extraction stage, transforming the historical picture according to the coefficient matrix, and filling a blank left by scaling by using a transparent pixel method during transformation to finally enable the scene before fusion to be consistent with the scene during shooting;
5.2, creating a mask matrix:
creating a mask matrix M, wherein the size of the mask matrix M is the same as that of the similar photo, the pixel value corresponding to each point on the mask matrix M is a numerical value from 0 to 255, the black color is 0, and the white color is 255;
5.3, gray-scale weighting:
taking the mask matrix as a template, taking the transparency of the pixel points in the mask matrix as a weight of each pixel point, and taking the weight as a reference to perform weighted average on the transparencies of the historical photos and the similar photos to obtain a new matrix:
H(i,j) = w_f · F(i,j) + w_g · G(i,j)
w_f + w_g = 1
F denotes the historical photo image, G the similar photo image, and H the synthesized image; i and j index the pixel at row i, column j of the image matrix; w_f and w_g are weighting coefficients, where w_f is obtained from M by bit operations and w_f + w_g = 1;
5.4, generating an image:
and converting the new matrix into an image and displaying the image on a client interface.
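Steps 5.2 to 5.4 amount to a per-pixel weighted average driven by the mask, which can be sketched as (the mapping of the 0..255 mask values to w_f is an assumed linear scaling):

```python
import numpy as np

def fuse(F, G, M):
    """Mask-weighted fusion: H = w_f*F + w_g*G per pixel, with w_f taken
    from the mask M (0..255, same size as the photos) and w_f + w_g = 1."""
    wf = M.astype(float) / 255.0   # white (255) -> historical photo weight 1
    wg = 1.0 - wf
    return wf * F + wg * G
```

White mask regions show the historical photo, black regions the similar photo, and intermediate values blend the two, which is the transparency transition the text describes.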
Initially, the weight of the historical photo is greater than that of the new photo, so the image there consists mainly of pixels from the historical photo; this ensures the core area of the historical photo is displayed. As the region position changes, the weight of the historical photo pixels gradually decreases and the weight of the new photo pixels gradually increases; as shown in FIG. 3, in the region beyond x1 the weight of the new photo is greater than that of the historical photo, so the image there consists mainly of pixels from the new photo. This transparency transition between corresponding pixels of the new photo and the historical photo neatly realizes the image fusion function without segmenting the image semantics.
5.5, perform an overall feature transformation on the generated image, including modification of the image's color and style, to obtain the final fused image.
Unlike pixel-level fusion, which changes the semantics of the image at its root, feature-level fusion changes the overall style of the image. Besides color, texture, and shape, image features also include semantic features and style features. For the color features of the image, an image-filter method is used to change the image's color as a whole. For the style features, a convolutional neural network (CNN) deep-learning tool is used to separate the style features from the semantic features, raising the fusion from the simple pixel level to the overall style level.
Image color characteristics:
An image filter is a simple way to change the overall color characteristics of an image. To achieve an antique effect for building images, filters such as high-saturation, black-and-white, and nostalgic are designed. Since the image has four RGBA channels, a 4×5 color matrix is designed as the component matrix of the four channels, denoted A, where the first to fourth rows hold the components of red, green, blue, and transparency respectively. Any pixel in the image consists of the four RGBA channels, and its per-channel pixel values can be represented as a 5×1 column vector, denoted C. The transformed pixel values R can then be computed by matrix multiplication, R = A × C, as shown in the following equations:
R'=a*R+b*G+c*B+d*A+e*1
G'=f*R+g*G+h*B+i*A+j*1
B'=k*R+l*G+m*B+n*A+o*1
A'=p*R+q*G+r*B+s*A+t*1
where the fifth-column entries e, j, o, and t of the component matrix A represent the offsets of the R, G, B, and A channels respectively. Modifying an offset changes the pixel values of the corresponding channel without affecting the other channel components.
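The 4×5 color-matrix filter can be sketched as below. This is an illustrative sketch: the helper name and the example "warm" matrix are ours; the patent only specifies that each RGBA pixel, extended with a constant 1, is multiplied by the 4×5 component matrix A so that the fifth column acts as a per-channel offset.

```python
import numpy as np

def apply_color_matrix(img: np.ndarray, A: np.ndarray) -> np.ndarray:
    """Apply a 4x5 colour matrix A to an HxWx4 RGBA image (R = A * C per pixel)."""
    h, w, _ = img.shape
    # Extend each pixel [R, G, B, A] to [R, G, B, A, 1] so the fifth
    # column of the matrix becomes the offset terms e, j, o, t.
    C = np.concatenate([img.astype(np.float64), np.ones((h, w, 1))], axis=2)
    out = C @ A.T                      # per-pixel 4x5 by 5x1 multiplication
    return np.clip(out, 0, 255).astype(np.uint8)

# Example: identity transform plus a +50 red offset (a simple warm filter).
A_warm = np.array([[1, 0, 0, 0, 50],
                   [0, 1, 0, 0, 0],
                   [0, 0, 1, 0, 0],
                   [0, 0, 0, 1, 0]], dtype=np.float64)
```

Setting the matrix to the identity with non-zero fifth-column entries shifts individual channels, which is exactly the offset behaviour described above.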
And (3) image style conversion:
Pixel-level image fusion only uses image-processing operators to modify the low-level information of the image and manipulates the image as a whole, so its handling of detail is poor. The image fusion method based on style transfer uses a CNN to learn the high-level semantic and style features in an image and applies the learned result to a new image, creating image works with different content and styles. The two approaches are fundamentally different in principle. To improve the usability and interactivity of the system in practical applications, image style transfer is adopted as the image fusion method.
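The style features that a CNN separates from semantic content are commonly summarized by Gram matrices of convolutional feature maps (the approach popularized by neural style transfer). The patent does not specify the network, so the following is only a minimal sketch of the style representation, with hypothetical names, using NumPy arrays in place of real CNN activations.

```python
import numpy as np

def gram_matrix(features: np.ndarray) -> np.ndarray:
    """features: C x H x W feature maps from one CNN layer.

    The Gram matrix of channel co-activations characterizes the
    image's style independently of its spatial (semantic) layout.
    """
    c, h, w = features.shape
    F = features.reshape(c, h * w)
    return (F @ F.T) / (h * w)         # normalise by spatial size

def style_loss(gram_a: np.ndarray, gram_b: np.ndarray) -> float:
    """Mean squared distance between two style representations."""
    return float(np.mean((gram_a - gram_b) ** 2))
```

Minimizing such a loss between the fused image and a reference style image, while keeping a content loss on higher-layer activations, is what lifts the fusion from the pixel level to the overall style level.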
It will be appreciated by those skilled in the art that the above embodiments are merely preferred embodiments of the invention; modifications and variations made by those skilled in the art in accordance with the principles of the invention remain within the scope of the invention.
Claims (10)
1. A building image matching and fusing method based on contour extraction is characterized by comprising the following steps:
1) acquiring a historical photo of the building, and preprocessing the historical photo;
2) extracting contours from the preview photo and the preprocessed historical photo to obtain contour maps of the historical photo and the preview photo;
3) extracting straight lines from the contour maps of the historical photo and the preview photo respectively using the LSD line-extraction algorithm, and pairing the straight lines of the historical photo with those of the preview photo using a line-matching algorithm according to the length, slope, and position features of the lines, obtaining an optimal matching-pair set;
4) calculating the angle between every two straight lines in the optimal matching-pair set to obtain two angle matrices, and computing the similarity of the angle matrices to obtain the similarity between the historical photo and the preview photo; the similarity is used to assist in photographing or comparing photos of the building, obtaining a similar photo with high similarity;
5) performing image fusion processing on the similar photo and the historical photo so that both are displayed simultaneously in one photo.
2. The method for contour extraction based building image matching and fusion as claimed in claim 1, wherein the preprocessing of the historical photo comprises: adjusting the size proportion of the historical photo, converting it into a gray-scale image, and finally applying filtering and smoothing to the gray-scale image.
3. The method for matching and fusing building images based on contour extraction as claimed in claim 1, wherein the step 2) specifically comprises:
2.1, marking the historical photo as F and the preview photo as G;
2.2, respectively carrying out edge detection on the historical photo F and the preview photo G based on an improved Canny edge detection algorithm to obtain a historical photo contour diagram F 'and a preview photo contour diagram G'.
4. The method for matching and fusing building images based on contour extraction as claimed in claim 3, wherein the step 3) specifically comprises:
3.1, extracting the straight lines in the historical-photo contour map F' and the preview-photo contour map G' using the LSD line-extraction algorithm, and storing them respectively in the historical-photo line set L_A and the preview-photo line set L_B;
3.2, using a greedy algorithm, matching the straight lines in the historical-photo line set L_A to the straight lines in the preview-photo line set L_B according to geometric features, obtaining the optimal matching-pair set S; wherein the geometric features include the slope, length, and position of a straight line.
5. The method for matching and fusing building images based on contour extraction as claimed in claim 4, further comprising, after step 3.1, aggregating the straight lines in the historical-photo line set L_A and the preview-photo line set L_B respectively to reduce edge duplication.
6. The method for matching and fusing building images based on contour extraction as claimed in claim 4, wherein said step 3.2 comprises in particular:
A. for each straight line in the historical-photo line set L_A in turn, scanning the straight lines in the preview-photo line set L_B to find all feasible solutions, and judging by a threshold whether a line pair matches;
B. for the solutions satisfying the match, computing the difference between the two straight lines and finding the solution diff with the smallest difference between lines, where l_1 and l_2 denote the lengths of the straight lines in the historical-photo line set L_A and the preview-photo line set L_B respectively, and k_1 and k_2 denote the corresponding slopes;
C. selecting the line pair with the minimum difference value diff for pairing.
7. The method for matching and fusing building images based on contour extraction as claimed in claim 6, wherein judging by a threshold whether a line pair matches specifically comprises: computing adaptive thresholds T_a and T_b for the straight lines in the historical-photo line set L_A and the preview-photo line set L_B, where T_a and T_b denote a slope threshold and a length threshold respectively; when two straight lines with similar relative positions simultaneously satisfy the adaptive thresholds T_a and T_b, the two lines have approximately equal slope and length and are defined as satisfying a matching solution.
8. The method for matching and fusing building images based on contour extraction according to claim 4, wherein the step 4) specifically comprises:
4.1, calculating the angle between every pair of straight lines in the optimal matching-pair set S to obtain two angle matrices A and B, where each row-column entry of A and B represents the angle between two straight lines, stored as an upper-triangular matrix;
4.2, calculating the similarity r of the included angle matrixes A and B:
where m and n denote the numbers of rows and columns of the angle matrices A and B respectively; Ā denotes the mean of matrix A, B̄ denotes the mean of matrix B, and the weighting factor is the ratio of the numbers of straight lines extracted from the two images.
9. The method for matching and fusing building images based on contour extraction as claimed in claim 8, wherein the step 5) specifically comprises:
5.1, image preprocessing:
reading the scaling-and-rotation coefficient matrix stored in the contour-extraction stage, transforming the historical photo according to the coefficient matrix, and filling the blank left by scaling with transparent pixels during the transformation, so that the scene before fusion finally matches the scene at shooting time;
5.2, creating a mask matrix:
creating a mask matrix M of the same size as the similar photo, where each point of M holds a pixel value from 0 to 255 (black is 0, white is 255);
5.3, gray-scale weighting:
taking the mask matrix as a template, with the transparency of each pixel in the mask matrix serving as that pixel's weight, and using these weights to compute a weighted average of the transparencies of the historical photo and the similar photo, obtaining a new matrix:
H(i,j) = w_f·F(i,j) + w_g·G(i,j)
w_f + w_g = 1
where F denotes the historical photo image, G the similar photo image, and H the composite image; i and j index the pixel in the ith row and jth column of the image matrix; w_f and w_g are weighting coefficients, with w_f obtained from M by a bit operation and w_f + w_g = 1;
5.4, generating an image:
converting the new matrix into an image and displaying it on the client interface.
10. The method for matching and fusing building images based on contour extraction as claimed in claim 9, further comprising step 5.5 after step 5.4:
performing an overall feature change on the generated image, including modification of the image's color and style, to obtain the final fused image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810280577.0A CN108537782B (en) | 2018-04-02 | 2018-04-02 | Building image matching and fusing method based on contour extraction |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108537782A CN108537782A (en) | 2018-09-14 |
CN108537782B (en) | 2021-08-31
Family
ID=63482169
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102073995A (en) * | 2010-12-30 | 2011-05-25 | Shanghai Jiao Tong University | Color constancy method based on texture pyramid and regularized local regression
CN105354866A (en) * | 2015-10-21 | 2016-02-24 | Zhengzhou Institute of Aeronautical Industry Management | Polygon contour similarity detection method
CN105957007A (en) * | 2016-05-05 | 2016-09-21 | University of Electronic Science and Technology of China | Image stitching method based on characteristic point plane similarity
CN107680054A (en) * | 2017-09-26 | 2018-02-09 | Changchun University of Science and Technology | Multisource image anastomosing method under haze environment
Non-Patent Citations (5)
Title |
---|
Xiaoyan Luo et al., "A regional image fusion based on similarity characteristics," Signal Processing, vol. 92, no. 5, pp. 1268–1280, 2011 |
Qiang Zhang et al., "Similarity-based multimodality image fusion with shiftable complex," Pattern Recognition Letters, vol. 32, no. 13, pp. 1544–1553, 2011 |
Li Yingjie et al., "A joint registration and fusion method for multi-band infrared images," Journal of Electronics & Information Technology, vol. 38, no. 1, pp. 8–14, 2016 |
Ren Keqiang et al., "Color image registration algorithm based on improved SURF operator," Journal of Electronic Measurement and Instrumentation, vol. 30, no. 5, pp. 748–756, 2016 |
Chen Musheng, "Infrared and visible image fusion combining NSCT and compressed sensing," Journal of Image and Graphics, vol. 21, no. 1, pp. 39–44, 2016 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |