CN111563867A - Image fusion method for improving image definition - Google Patents

Image fusion method for improving image definition

Info

Publication number
CN111563867A
CN111563867A
Authority
CN
China
Prior art keywords
layer
image
fusion
reference layer
points
Prior art date
Legal status
Pending
Application number
CN202010672535.9A
Other languages
Chinese (zh)
Inventor
廖峪
林仁辉
苏茂才
唐泰可
Current Assignee
Chengdu Zhonggui Track Equipment Co ltd
Original Assignee
Chengdu Zhonggui Track Equipment Co ltd
Priority date
Filing date
Publication date
Application filed by Chengdu Zhonggui Track Equipment Co ltd filed Critical Chengdu Zhonggui Track Equipment Co ltd
Priority to CN202010672535.9A priority Critical patent/CN111563867A/en
Publication of CN111563867A publication Critical patent/CN111563867A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G06T5/70
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Abstract

The embodiment of the invention discloses an image fusion method for improving image definition. The method acquires at least two original images for image fusion and places them in a two-dimensional rectangular coordinate system as a first image layer and a second image layer; selects a plurality of feature points in the first and second image layers, matches the feature points one by one, and determines a relative correction mode for the two layers; preliminarily defines an image fusion range of the first and second image layers; calibrates the superposition boundary of the first and second image layers a second time; and performs image fusion on the image overlapping area within the superposition boundary according to a fusion algorithm, so as to improve the definition of the image overlapping portion. In this scheme, the reference layer and the non-reference layer are stacked through position matching of identical pixel points belonging to the same characteristic structure, the coordinate range corresponding to the image overlapping area is determined, and image fusion is performed in turn on the pixel points within that coordinate range to improve the definition of the image.

Description

Image fusion method for improving image definition
Technical Field
The embodiment of the invention relates to the technical field of image processing, in particular to an image fusion method for improving image definition.
Background
Image fusion refers to processing image data of the same target, collected through multi-source channels, by means of image processing, computer technology and the like, extracting the beneficial information of each channel to the maximum extent, and finally synthesizing a high-quality image. This improves the utilization of image information, raises the accuracy and reliability of computer interpretation, and increases the spatial and spectral resolution of the original images, which facilitates monitoring.
The data used for image fusion take the form of images containing brightness, color, temperature, distance and other scene features, and these images may be presented in a single frame or as a sequence. Image fusion merges the information of two or more images into one image, so that the fused image contains more information and can be observed by a person or processed by a computer more conveniently.
Existing fusion methods mainly use matrix operations and statistical estimation theory to compute the fused image and thereby achieve information complementation. Classical methods include the weighted fusion method, the pixel value maximization method, the pixel value minimization method, principal component analysis and statistical estimation. However, the existing image fusion methods have the following defect:
No overlap-matching preprocessing is performed on the multiple images, so the search for pixel points at the same position is slow during fusion, which affects both the efficiency and the accuracy of the fusion.
Disclosure of Invention
Therefore, the embodiment of the invention provides an image fusion method for improving image definition, to solve the prior-art problem that no overlap-matching preprocessing is performed on the multiple images, which affects the efficiency and accuracy of fusion.
In order to achieve the above object, an embodiment of the present invention provides the following:
an image fusion method for improving image definition comprises the following steps:
step 100, acquiring at least two original images for image fusion, and respectively placing the two original images in a two-dimensional rectangular coordinate system to be used as a first image layer and a second image layer;
step 200, selecting a plurality of feature points in the first layer and the second layer, matching the plurality of feature points one by one, and determining a relative correction mode of the first layer and the second layer;
step 300, preliminarily defining an image fusion range of the first image layer and the second image layer, and confirming a fusion boundary of the first image layer and the second image layer for the first time;
step 400, accurately defining an extended fusion sideband of the first image layer and the second image layer near the fusion boundary, and calibrating the superposition boundary of the first image layer and the second image layer a second time;
step 500, performing image fusion on the image overlapping area within the superposition boundary according to a fusion algorithm, so as to improve the definition of the image overlapping portion.
As a preferred scheme of the present invention, in step 200, it is determined whether the first layer and the second layer overlap by matching feature points of the first layer and the second layer, and the specific implementation steps of matching feature points are as follows:
step 201, performing preliminary image processing on two original images to obtain the first image layer and the second image layer after filtering and denoising;
step 202, selecting a feature structure in the first image layer and the second image layer, and planning a feature edge area of the feature structure;
step 203, constructing pixel value distribution maps of the feature edge region of the second layer and the feature edge region of the first layer, and comparing the pixel value distribution maps of all the feature edge regions one by one to determine whether the first layer and the second layer have the same feature structure;
step 204, judging a relative turning angle between the first layer and the second layer according to a pixel value distribution diagram of a feature structure of the same feature edge region;
step 205, intercepting quasi-overlapping segments with the same characteristic structure in the first layer and the second layer, dividing the quasi-overlapping segments into a plurality of rows of pixel comparison regions according to relative turning angles, sequentially calculating gray value differences of two adjacent pixel points in each row of pixel comparison regions, and preliminarily defining an image fusion range.
As a preferred scheme of the present invention, in step 205, quasi-overlapping segments with the same feature structure are intercepted, and the image fusion range is determined a second time for the corresponding quasi-overlapping segments in the first image layer and the second image layer, with the following specific implementation steps:
defining a plurality of quasi-overlapping segments in a first image layer and a second image layer according to the characteristic edge region with image overlapping;
carrying out gray level processing on the two quasi-overlapping segments with the same characteristic structure, taking the quasi-overlapping segment with fewer pixels in its inclusion range as the comparing object and the quasi-overlapping segment with more pixels in its inclusion range as the compared object, and sequentially calculating the pixel value difference of two adjacent pixels in the inclusion range of the comparing object and in the inclusion range of the compared object;
and defining the quasi-overlapping segments with the same pixel value difference or multiple pixel value differences as overlapping segments, and determining the image fusion range of the quasi-overlapping segments with the same number of pixel points of the overlapping segments and the compared object.
As a preferred scheme of the present invention, after the image fusion range is determined, the two coordinate axes of the two-dimensional rectangular coordinate system are used as reference lines, the first image layer or second image layer that lies parallel to the coordinate axes is determined as the reference layer, the other of the two layers is taken as the non-reference layer, and the pixel points of the non-reference layer are rotated to be parallel to the reference layer according to the relative turning angle.
As a preferred embodiment of the present invention, in step 300, an image fusion range of a first layer and an image fusion range of a second layer are defined according to the same feature structure of the first layer and the second layer, where the fusion boundaries are specifically the same feature structure boundaries.
As a preferred scheme of the present invention, the specific implementation steps for preliminarily defining the image fusion range of the first layer and the second layer include:
setting a plurality of mark points on the same characteristic structure of the reference layer, wherein the parameters of the mark points comprise pixel coordinates, pixel values and a plurality of adjacent pixel value difference values;
determining coordination points which are matched with the pixel values of the mark points one by one on the non-reference layer;
and stacking the non-reference layer on the reference layer, wherein the mark points and the coordination points coincide one by one when the non-reference layer is stacked.
As a preferred scheme of the present invention, in step 400, the specific implementation steps of accurately defining the expanded fused sidebands of the first layer and the second layer near the fused boundary include:
respectively comparing the area of the preliminarily defined image fusion range with the area of the reference layer and the area of the non-reference layer;
when the area of the preliminarily defined image fusion range is smaller than the area of a non-reference layer, calculating an intersecting surface of the reference layer and the non-reference layer and a coordinate range corresponding to the intersecting surface;
and when the area of the preliminarily defined image fusion range is equal to the area of the non-reference layer, calculating the coordinate range corresponding to the reference layer.
As a preferred scheme of the present invention, the specific implementation step of calculating the intersecting surface of the reference layer and the non-reference layer is as follows: respectively extending vertical straight lines along the side edges of the non-reference layer covered on the reference layer, and respectively extending vertical straight lines along the covered side edges of the reference layer;
and respectively calculating the intersection points of the vertical straight line extended by the non-reference layer and the vertical straight line extended by the reference layer and the two-dimensional rectangular coordinate system, and counting the coordinate range corresponding to the intersection points.
As a preferred scheme of the present invention, the specific implementation steps of calculating the coordinate range corresponding to the reference layer include:
and extending the outer edge line of the reference layer until an intersection point is generated with the two-dimensional rectangular coordinate system, and counting the coordinate range corresponding to the intersection point.
As a preferred scheme of the invention, the pixel points in the coordinate range corresponding to the reference layer and the non-reference layer are the image overlapping area.
The embodiment of the invention has the following advantages:
the method comprises the steps of firstly stacking a reference layer and a non-reference layer through position matching of the same pixel point with the same characteristic structure, determining a coordinate range corresponding to an image overlapping area through calculating the intersection surface of the reference layer and the non-reference layer after stacking, and sequentially carrying out image fusion operation on the pixel points in the coordinate range to improve the definition of an image.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It should be apparent that the drawings in the following description are merely exemplary, and that other embodiments can be derived from the drawings provided by those of ordinary skill in the art without inventive effort.
Fig. 1 is a schematic flow chart of an image fusion method according to an embodiment of the present invention.
Detailed Description
The present invention is described below in terms of particular embodiments, and other advantages and features of the invention will become apparent to those skilled in the art from the following disclosure. It should be understood that the described embodiments are merely exemplary and are not intended to limit the invention to the particular embodiments disclosed. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
As shown in fig. 1, the present invention provides an image fusion method for improving image definition, in which an overlapping region of two images is determined and calculated, and then the overlapping region of the two images is subjected to image fusion according to an existing image fusion algorithm, so as to improve the definition of the overlapping region.
The method specifically comprises the following steps:
the method comprises the steps of firstly, obtaining at least two original images for image fusion, and respectively placing the two original images in a two-dimensional rectangular coordinate system to be used as a first image layer and a second image layer.
Second, a plurality of feature points are selected in the first image layer and the second image layer, the feature points are matched one by one to determine whether an overlapping structure exists, and the relative correction mode of the first and second image layers having the overlapping structure is determined.
In this step, it is determined whether the first layer and the second layer are overlapped by matching the feature points of the first layer and the second layer, and the specific implementation step of matching the feature points is as follows:
and T1, performing preliminary image processing on the two original images to obtain a first image layer and a second image layer after filtering and noise reduction.
In this process, the preliminary image processing is specifically noise reduction: noise and ghosting on the two original images are filtered out to obtain two clearer original images.
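A minimal sketch of this preprocessing step, assuming the images are loaded with OpenCV; the filter choice (a Gaussian blur followed by a median filter) and the kernel sizes are illustrative assumptions rather than values taken from the disclosure.

```python
import cv2
import numpy as np

def preprocess_layer(path: str) -> np.ndarray:
    """Load an original image as grayscale and apply simple filtering / noise reduction."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(path)
    denoised = cv2.GaussianBlur(img, (5, 5), 0)  # suppress Gaussian-like sensor noise
    denoised = cv2.medianBlur(denoised, 3)       # suppress isolated salt-and-pepper noise
    return denoised
```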
And T2, selecting the feature structures in the first image layer and the second image layer, and planning the feature edge area of the feature structures.
And T3, constructing gray value distribution graphs of the characteristic edge areas of the second layer and the characteristic edge areas of the first layer, and comparing the gray value distribution graphs of all the characteristic edge areas one by one to determine whether the first layer and the second layer have the same characteristic structure.
The same characteristic structure is determined from the gray value distribution graph of the characteristic edge area of the second image layer and the gray value distribution graph of the characteristic edge area of the first image layer, with the following specific implementation steps (a minimal sketch follows the steps):
s1, respectively carrying out homogenization gray level processing on the feature edge areas of each feature structure of the first image layer and the second image layer to obtain gray level images of all the feature edge areas;
s2, determining a gray value distribution graph of a characteristic edge region of each characteristic structure of the first layer, dividing a plurality of lines of pixel points in a mode of being parallel to the characteristic edge region, and calculating a gray value difference value of two adjacent pixel points in each line;
s3, determining a gray value distribution graph of a characteristic edge region of each characteristic structure of the second layer, dividing a plurality of lines of pixel points in a mode of being parallel to the characteristic edge region, and calculating a gray value difference value of two adjacent pixel points in each line;
s4, comparing the gray value difference values of the characteristic edge areas of the first image layer and the second image layer in sequence, and determining the characteristic structure with the same part of gray value difference values or the multiple gray value difference values of the characteristic edge areas as the characteristic structure with image overlapping.
In this embodiment, determining the overlapping portion of the two images is divided into two stages. In the first stage, a plurality of characteristic structures are selected on the first image layer and the second image layer, where a characteristic structure refers to a structural edge line on the two layers. In the second stage, for a first image layer and a second image layer that share the same characteristic structure, the gray values inside the image segments enclosed by that structure are compared a second time, to confirm from the enclosed pixel points that the two characteristic structures are fully consistent, and thereby to determine the overlapping portion of the first and second image layers.
The first stage, determining whether the two layers have the same characteristic structure, is implemented as follows: first, the gray value distribution graph of the characteristic edge area of each characteristic structure of the first and second image layers is determined, and it is checked whether the distribution graph of each characteristic structure of the first layer can coincide with one of the second layer, either directly or after rotation and scaling;
when a certain characteristic structure of the first and second image layers coincides, the specific gray value data in the gray value distribution graph of its characteristic edge area are compared;
finally, a characteristic structure whose edge-area gray value differences are partly identical, or stand in a multiple relationship, is determined as a characteristic structure with image overlap.
In the computer field, a grayscale digital image is an image in which each pixel has only one sample color. Such images are typically displayed in gray scale from darkest black to brightest white, although in theory this sampling could be of different shades of any color and even different colors at different brightnesses. The gray image is different from the black and white image, the black and white image only has two colors of black and white in the computer image field, and the gray image has a plurality of levels of color depth between black and white.
Converting the original images into grayscale images makes it convenient to confirm the pixel value difference of each pixel point without comparing the RGB data of every pixel; the computation for checking whether characteristic edge areas are the same is therefore simple, the algorithm is easy to run, and the amount of calculation is reduced.
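A minimal sketch of that conversion, assuming the common ITU-R BT.601 luminance weights; any standard RGB-to-gray weighting would serve the same purpose.

```python
import numpy as np

def to_grayscale(rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB image to a single-channel gray image (BT.601 weights)."""
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    gray = 0.299 * r + 0.587 * g + 0.114 * b
    return np.clip(gray, 0, 255).astype(np.uint8)
```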
Therefore, as an innovative point of the present invention, when the brightness of the two original images differs, the gray values of the same content change; this embodiment therefore does not compare the gray values of the characteristic edge areas directly, because the error of a direct gray value comparison would be large and it could not be accurately determined whether the two characteristic edge areas are the same.
In this embodiment, the characteristic edge areas of each characteristic structure of the first and second image layers are each given homogenized grayscale processing, which effectively avoids a wrong judgment of structural identity caused by a brightness difference between the first image layer and the second image layer. The gray values of two adjacent pixel points in each row of a characteristic edge area are subtracted, and when the gray value differences of adjacent pixel points are the same, or stand in a multiple relationship, the characteristic edge areas of the first and second image layers are considered to contain the same characteristic structure.
It should be added that, when judging whether the first image layer and the second image layer have the same characteristic structure, it is not necessary for every pixel point's gray value difference to match; a negligible error range is calculated statistically, and as long as the gray value differences fall within that error range, the characteristic edge areas of the first and second image layers can be regarded as having the same characteristic structure.
And T4, judging the relative turning angle between the first image layer and the second image layer according to the gray value distribution graph of the characteristic structure of the same characteristic edge area.
A pixel point mapping relation of the characteristic structure is established between the first image layer and the second image layer according to the determined characteristic structure with image overlap, and the relative turning angle between the two layers is determined through feature point matching of the same characteristic structure.
The specific implementation steps are as follows (a minimal sketch of the angle estimation follows the steps):
horizontally placing a first image layer and a second image layer in a two-dimensional rectangular coordinate system;
then, carrying out feature point matching on the same pixel points with the same feature structure, and determining the relative distribution angle of the same feature structure in the first image layer and the second image layer;
and finally, rotating a certain layer relatively until the pixel points of the first layer and the second layer are overlapped in parallel.
As another innovative point of this embodiment, the method can determine overlap not only between images captured at the same angle with respect to the characteristic structure, but also between images captured at different angles; the application range is therefore wide and the determination method is highly accurate.
T5, intercepting quasi-overlapping segments with the same characteristic structure in the first image layer and the second image layer, dividing the quasi-overlapping segments into a plurality of rows of pixel comparison areas according to relative turning angles, sequentially calculating the gray value difference of two adjacent pixel points in each row of pixel comparison areas, and preliminarily defining the image fusion range.
Quasi-overlapping segments with the same characteristic structure are intercepted, and the image fusion range is determined a second time for the corresponding quasi-overlapping segments in the first image layer and the second image layer, with the following specific implementation steps (a minimal sketch follows steps P1 to P3):
p1, defining a plurality of quasi-overlapping segments in the first image layer and the second image layer according to the characteristic edge areas with image overlapping;
p2, taking the quasi-overlapping segments containing few pixels in the range as the compared object, taking the quasi-overlapping segments containing many pixels in the range as the compared object, and sequentially calculating the gray value difference value of two adjacent pixels in the range of the compared object and the range of the compared object;
and P3, defining the quasi-overlapping segments whose gray value differences are the same, or in a multiple relationship, as overlapping segments, and determining the image fusion range from the portion of the compared object that has the same number of pixel points as the overlapping segment.
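A minimal sketch of steps P1 to P3 for a single row of pixels, assuming the comparing object (fewer pixels) and the compared object (more pixels) are supplied as 1D gray value arrays; the tolerance and the sliding-window search are illustrative assumptions.

```python
import numpy as np

def locate_overlap(comparing: np.ndarray, compared: np.ndarray, tol: float = 2.0):
    """Slide the shorter segment's adjacent-pixel difference profile over the longer one and
    return the start index in `compared` where the profiles match (identical or constant
    multiple), or None if no position matches."""
    d_small = np.diff(comparing.astype(np.float64))
    d_large = np.diff(compared.astype(np.float64))
    n = d_small.size
    for start in range(d_large.size - n + 1):
        window = d_large[start:start + n]
        if np.all(np.abs(window - d_small) <= tol):            # identical differences
            return start
        mask = np.abs(d_small) > tol                           # avoid near-zero divisors
        if np.any(mask):
            ratios = window[mask] / d_small[mask]
            ratio = ratios.mean()
            if abs(ratio) > 0.1 and np.all(np.abs(ratios - ratio) <= 0.1):  # constant multiple
                return start
    return None

# Example: the shorter profile appears 3 pixels into the longer one.
small = np.array([10, 20, 40, 40, 30], dtype=np.float64)
large = np.array([5, 5, 5, 10, 20, 40, 40, 30, 30, 30], dtype=np.float64)
print(locate_overlap(small, large))  # 3
```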
As another innovative point of this embodiment, the method does not conclude that the regions enclosed by a characteristic structure in the two image layers overlap merely because the structure's edge regions are the same; this further improves the accuracy of the image processing and ensures the completeness of the judgment that the two images share an overlapping structure.
In this embodiment, comparing the image pixel points enclosed by the characteristic edge region rules out the case where the characteristic structures are the same but the colors differ; only when the gray values of the pixel points enclosed by the characteristic structures also match are the first image layer and the second image layer considered to have the same characteristic structure.
In addition, it should be added that after the image fusion range is determined, the two coordinate axes of the two-dimensional rectangular coordinate system are used as reference lines, the first image layer or second image layer that lies parallel to the coordinate axes is determined as the reference layer, the other of the two layers is taken as the non-reference layer, and the pixel points of the non-reference layer are rotated to be parallel to the reference layer according to the relative turning angle.
The implementation steps of rotating the corresponding pixel points of the non-reference layer to be parallel to the reference layer according to the relative turning angle are as follows:
sequentially selecting pixel points of image fusion fragments in a plurality of rows of non-reference image layers according to the relative turning angle;
and rotating the pixel points of each row to be parallel to the pixel points of the image fusion fragments of the reference image layer.
After the relative turning angle of the first image layer and the second image layer is determined, the pixel points of the non-reference layer are divided into a plurality of rows according to that angle, the pixel points of each row are rotated in turn, and a non-reference layer whose pixel positions match the reference layer one by one is finally obtained, which facilitates the subsequent image fusion processing.
After the non-reference layer is rotated, the gray values of image fusion fragment pixel points that fall outside the boundary of the non-reference layer are set to an editable value, and the gray values of pixel points lying between the image fusion fragment and the boundary of the non-reference layer are likewise set to an editable value. This prevents pixel points that exceed the layer, or that sit in the blank area along the layer edge of the rotated non-reference layer, from affecting the fusion accuracy when the first image layer and the second image layer are fused.
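A minimal sketch of rotating the non-reference layer by the relative turning angle and marking pixels with no source data as editable, assuming OpenCV is available; the use of -1 in a signed array as the editable sentinel value is an illustrative assumption.

```python
import cv2
import numpy as np

def rotate_non_reference_layer(layer: np.ndarray, angle_deg: float) -> np.ndarray:
    """Rotate the non-reference layer about its centre; pixels without source data
    (outside the rotated layer) are marked with the editable sentinel value -1."""
    h, w = layer.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle_deg, 1.0)
    rotated = cv2.warpAffine(layer, m, (w, h), flags=cv2.INTER_NEAREST, borderValue=0)
    # Rotate a mask of valid pixels the same way so blank border regions can be identified.
    valid = cv2.warpAffine(np.full((h, w), 255, np.uint8), m, (w, h),
                           flags=cv2.INTER_NEAREST, borderValue=0)
    out = rotated.astype(np.int16)
    out[valid == 0] = -1   # editable value: ignored during the later fusion step
    return out
```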
Third, all the feature points of the first image layer and the second image layer are adjusted in parallel according to the relative correction mode, the image fusion range of the first and second image layers is delimited, and the superposition boundary of the two layers is confirmed.
In the third step, the specific implementation step of determining the superposition boundary of the first layer and the second layer is as follows:
The image fusion range of the reference layer and the non-reference layer is preliminarily defined, and the fusion boundary of the reference layer and the non-reference layer is confirmed for the first time; that is, the image fusion range is defined according to the characteristic structure shared by the reference layer and the non-reference layer, and the fusion boundary is specifically the boundary of that same characteristic structure.
The specific implementation steps for preliminarily defining the image fusion range of the reference layer and the non-reference layer are as follows (a minimal sketch follows the steps):
setting a plurality of mark points on the same characteristic structure of the reference layer, wherein the parameters of the mark points comprise pixel coordinates, pixel values and a plurality of adjacent pixel value difference values;
determining coordination points which are matched with the pixel values of the mark points one by one on the non-reference layer;
and stacking the non-reference layer on the reference layer, wherein the mark points and the coordination points coincide one by one when the non-reference layer is stacked.
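A minimal sketch of this stacking step, assuming the mark points on the reference layer and their matching coordination points on the non-reference layer have already been paired (by pixel value and neighbouring pixel value differences, which is taken as given here); the helper returns the translation that superposes the two layers.

```python
import numpy as np

def stacking_offset(mark_points: np.ndarray, coordination_points: np.ndarray):
    """Translation (dx, dy) that moves the non-reference layer so that each coordination point
    lands on its matching mark point. Inputs are N x 2 arrays of (x, y) coordinates in the
    shared two-dimensional rectangular coordinate system."""
    offsets = mark_points.astype(np.float64) - coordination_points.astype(np.float64)
    dx, dy = offsets.mean(axis=0)   # average over all matched pairs
    return float(dx), float(dy)

# Example: every coordination point is offset by (-12, -5) from its mark point.
marks = np.array([[100, 80], [140, 95], [120, 130]])
coords = np.array([[88, 75], [128, 90], [108, 125]])
print(stacking_offset(marks, coords))  # (12.0, 5.0)
```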
In this embodiment, after the first layer and the second layer are determined to have the same feature structure, the reference layer and the non-reference layer are stacked up and down through the position matching of the same feature structure, and the image fusion range based on the same feature structure matching is obtained through this operation.
The extended fusion sideband of the reference layer and the non-reference layer near the fusion boundary is then accurately defined, and the superposition boundary of the reference layer and the non-reference layer is calibrated a second time.
The specific implementation steps of accurately defining the expanded fusion sideband of the reference layer and the non-reference layer near the fusion boundary are as follows:
respectively comparing the area of the preliminarily defined image fusion range with the area of the reference layer and the area of the non-reference layer;
when the area of the preliminarily defined image fusion range is smaller than the area of a non-reference layer, calculating an intersecting surface of the reference layer and the non-reference layer and a coordinate range corresponding to the intersecting surface;
and when the area of the preliminarily defined image fusion range is equal to the area of the non-reference layer, calculating the coordinate range corresponding to the reference layer.
When the reference layer and the non-reference layer are stacked, the exact stacked region needs to be determined for the subsequent image fusion processing, so that the image fusion operation can be performed once the pixel points of that region have been extracted.
Therefore, in this embodiment, the reference layer and the non-reference layer are stacked through position matching of identical pixel points belonging to the same characteristic structure, the coordinate range corresponding to the image overlapping area is determined by calculating the intersecting surface of the stacked reference layer and non-reference layer, and image fusion is performed in turn on the pixel points within that coordinate range to improve the definition of the image.
The specific implementation steps of calculating the intersecting surface of the reference layer and the non-reference layer are as follows: respectively extending vertical straight lines along the side edges of the non-reference layer covered on the reference layer, and respectively extending vertical straight lines along the covered side edges of the reference layer;
and respectively calculating the intersection points of the vertical straight line extended by the non-reference layer and the vertical straight line extended by the reference layer and the two-dimensional rectangular coordinate system, and counting the coordinate range corresponding to the intersection points.
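A minimal sketch of this intersecting-surface calculation for axis-aligned layers, assuming each layer is described by its bounding rectangle (x_min, y_min, x_max, y_max) in the shared coordinate system after stacking; extending the side edges of the two rectangles and intersecting them then reduces to taking the maxima and minima of the bounds.

```python
def intersecting_surface(ref_rect, nonref_rect):
    """Coordinate range (x_min, y_min, x_max, y_max) shared by the reference layer and the
    non-reference layer, or None if the stacked layers do not overlap."""
    x_min = max(ref_rect[0], nonref_rect[0])
    y_min = max(ref_rect[1], nonref_rect[1])
    x_max = min(ref_rect[2], nonref_rect[2])
    y_max = min(ref_rect[3], nonref_rect[3])
    if x_min >= x_max or y_min >= y_max:
        return None
    return (x_min, y_min, x_max, y_max)

# Example: the non-reference layer is shifted by (12, 5) relative to a 200 x 150 reference layer.
print(intersecting_surface((0, 0, 200, 150), (12, 5, 212, 155)))  # (12, 5, 200, 150)
```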
The specific implementation steps for calculating the coordinate range corresponding to the reference layer are as follows:
and extending the outer edge line of the reference layer until an intersection point is generated with the two-dimensional rectangular coordinate system, and counting the coordinate range corresponding to the intersection point.
In summary, the pixel points in the coordinate range corresponding to the reference layer and the non-reference layer are the image overlapping area.
Finally, image fusion is performed on the image overlapping area within the superposition boundary according to a fusion algorithm, so as to improve the definition of the image overlapping portion.
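A minimal sketch of this final step, assuming the two layers have already been stacked in a common coordinate frame, that pixels carrying the editable value -1 are skipped, and that a simple weighted-average rule stands in for whichever existing fusion algorithm is actually used.

```python
import numpy as np

def fuse_overlap(reference: np.ndarray, non_reference: np.ndarray, overlap_rect, w: float = 0.5) -> np.ndarray:
    """Fuse only the pixels inside the overlap coordinate range; the rest of the reference layer
    is returned unchanged. `non_reference` is int16 so that -1 can mark editable pixels."""
    x_min, y_min, x_max, y_max = overlap_rect
    fused = reference.astype(np.float64)
    ref_roi = fused[y_min:y_max, x_min:x_max]
    non_roi = non_reference[y_min:y_max, x_min:x_max].astype(np.float64)
    usable = non_roi >= 0                              # skip editable (-1) pixels
    ref_roi[usable] = w * ref_roi[usable] + (1.0 - w) * non_roi[usable]
    return np.clip(fused, 0, 255).astype(np.uint8)
```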
Therefore, in this embodiment, the definition of the overlapping area is improved by determining and calculating the overlapping area of the two images and then performing image fusion on that area with an existing image fusion algorithm. The embodiment not only calculates the overlapping area of the two image layers corresponding to the two original images but also performs image fusion processing on that area, and by applying the steps described in this embodiment to the processing of n images, the overlapping area of a plurality of images can be obtained.
Therefore, as another innovative point of the present invention, the image fusion method adopted in this image-processing-based embodiment is simple and easy to implement: the same or even higher accuracy is achieved through a hierarchical differentiation method, the overlapping portions of a plurality of original images are determined by image processing and screening, and image fusion is performed on those portions, so the implementation is simple and many operation procedures are eliminated.
Although the invention has been described in detail above with reference to a general description and specific examples, it will be apparent to one skilled in the art that modifications or improvements may be made thereto based on the invention. Accordingly, such modifications and improvements are intended to be within the scope of the invention as claimed.

Claims (10)

1. An image fusion method for improving image definition is characterized by comprising the following steps:
step 100, acquiring at least two original images for image fusion, and respectively placing the two original images in a two-dimensional rectangular coordinate system to be used as a first image layer and a second image layer;
step 200, selecting a plurality of feature points in the first layer and the second layer, matching the plurality of feature points one by one, and determining a relative correction mode of the first layer and the second layer;
step 300, preliminarily defining an image fusion range of the first image layer and the second image layer, and confirming a fusion boundary of the first image layer and the second image layer for the first time;
step 400, accurately defining an extended fusion sideband of the first layer and the second layer near a fusion boundary, and secondarily calibrating a superposition boundary of the first layer and the second layer;
and step 500, performing image fusion on the image overlapping area within the superposition boundary according to a fusion algorithm, so as to improve the definition of the image overlapping portion.
2. An image fusion method for improving image sharpness according to claim 1, wherein in step 200, it is determined whether there is an overlap between the first image layer and the second image layer by matching feature points of the first image layer and the second image layer, and the specific implementation steps of matching feature points are as follows:
step 201, performing preliminary image processing on two original images to obtain the first image layer and the second image layer after filtering and denoising;
step 202, selecting a feature structure in the first image layer and the second image layer, and planning a feature edge area of the feature structure;
step 203, constructing pixel value distribution maps of the feature edge region of the second layer and the feature edge region of the first layer, and comparing the pixel value distribution maps of all the feature edge regions one by one to determine whether the first layer and the second layer have the same feature structure;
step 204, judging a relative turning angle between the first layer and the second layer according to a pixel value distribution diagram of a feature structure of the same feature edge region;
step 205, intercepting quasi-overlapping segments with the same characteristic structure in the first layer and the second layer, dividing the quasi-overlapping segments into a plurality of rows of pixel comparison regions according to relative turning angles, sequentially calculating gray value differences of two adjacent pixel points in each row of pixel comparison regions, and preliminarily defining an image fusion range.
3. The image fusion method for improving image sharpness according to claim 2, wherein in step 205, quasi-overlapping segments with the same feature structure are intercepted, and image fusion ranges are determined twice for the quasi-overlapping segments corresponding to the first image layer and the second image layer, and the specific implementation steps are as follows:
defining a plurality of quasi-overlapping segments in a first image layer and a second image layer according to the characteristic edge region with image overlapping;
carrying out gray level processing on the two quasi-overlapping segments with the same characteristic structure, taking the quasi-overlapping segment with fewer pixels in its inclusion range as the comparing object and the quasi-overlapping segment with more pixels in its inclusion range as the compared object, and sequentially calculating the pixel value difference of two adjacent pixels in the inclusion range of the comparing object and in the inclusion range of the compared object;
and defining the quasi-overlapping segments with the same pixel value difference or multiple pixel value differences as overlapping segments, and determining the image fusion range of the quasi-overlapping segments with the same number of pixel points of the overlapping segments and the compared object.
4. The image fusion method according to claim 2, wherein after the image fusion range is determined, two coordinate axes of the two-dimensional rectangular coordinate system are used as reference lines, a first layer or a second layer parallel to the coordinate axes of the two-dimensional rectangular coordinate system is determined as a reference layer, the second layer or the first layer parallel to the coordinate axes of the two-dimensional rectangular coordinate system is determined as a non-reference layer, and pixel points of the non-reference layer are rotated to be parallel to the reference layer according to the relative flip angle.
5. An image fusion method for improving image sharpness according to claim 1, wherein in step 300, image fusion ranges of a first image layer and a second image layer are defined according to the same feature structure of the first image layer and the second image layer, and the fusion boundaries are specifically the same feature structure boundaries.
6. The image fusion method for improving image sharpness according to claim 5, wherein the specific implementation step of preliminarily defining the image fusion range of the first image layer and the second image layer is as follows:
setting a plurality of mark points on the same characteristic structure of the reference layer, wherein the parameters of the mark points comprise pixel coordinates, pixel values and a plurality of adjacent pixel value difference values;
determining coordination points which are matched with the pixel values of the mark points one by one on the non-reference layer;
and stacking the non-reference layer on the reference layer, wherein the mark points and the coordination points coincide one by one when the non-reference layer is stacked.
7. An image fusion method for improving image sharpness according to claim 6, characterized in that in step 400, the specific implementation steps of accurately defining extended fusion sidebands of the first layer and the second layer near a fusion boundary are as follows:
respectively comparing the area of the preliminarily defined image fusion range with the area of the reference layer and the area of the non-reference layer;
when the area of the preliminarily defined image fusion range is smaller than the area of a non-reference layer, calculating an intersecting surface of the reference layer and the non-reference layer and a coordinate range corresponding to the intersecting surface;
and when the area of the preliminarily defined image fusion range is equal to the area of the non-reference layer, calculating the coordinate range corresponding to the reference layer.
8. The image fusion method for improving image sharpness according to claim 7, wherein the specific implementation step of calculating the intersection surface of the reference layer and the non-reference layer is as follows: respectively extending vertical straight lines along the side edges of the non-reference layer covered on the reference layer, and respectively extending vertical straight lines along the covered side edges of the reference layer;
and respectively calculating the intersection points of the vertical straight line extended by the non-reference layer and the vertical straight line extended by the reference layer and the two-dimensional rectangular coordinate system, and counting the coordinate range corresponding to the intersection points.
9. The image fusion method for improving image sharpness according to claim 7, wherein the specific implementation step of calculating the coordinate range corresponding to the reference layer is as follows:
and extending the outer edge line of the reference layer until an intersection point is generated with the two-dimensional rectangular coordinate system, and counting the coordinate range corresponding to the intersection point.
10. The image fusion method for improving image definition according to claim 7, wherein pixel points in the coordinate range corresponding to the reference layer and the non-reference layer are the image overlapping area.
CN202010672535.9A 2020-07-14 2020-07-14 Image fusion method for improving image definition Pending CN111563867A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010672535.9A CN111563867A (en) 2020-07-14 2020-07-14 Image fusion method for improving image definition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010672535.9A CN111563867A (en) 2020-07-14 2020-07-14 Image fusion method for improving image definition

Publications (1)

Publication Number Publication Date
CN111563867A true CN111563867A (en) 2020-08-21

Family

ID=72070234

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010672535.9A Pending CN111563867A (en) 2020-07-14 2020-07-14 Image fusion method for improving image definition

Country Status (1)

Country Link
CN (1) CN111563867A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8965121B2 (en) * 2012-10-04 2015-02-24 3Dmedia Corporation Image color matching and equalization devices and related methods
CN103793894A (en) * 2013-12-04 2014-05-14 国家电网公司 Cloud model cellular automata corner detection-based substation remote viewing image splicing method
CN103714547A (en) * 2013-12-30 2014-04-09 北京理工大学 Image registration method combined with edge regions and cross-correlation
CN107622475A (en) * 2016-07-14 2018-01-23 上海联影医疗科技有限公司 Gray correction method in image mosaic
US20200202530A1 (en) * 2016-07-14 2020-06-25 Shanghai United Imaging Healthcare Co., Ltd. System and method for splicing images
CN108230376A (en) * 2016-12-30 2018-06-29 北京市商汤科技开发有限公司 Remote sensing image processing method, device and electronic equipment
CN108648145A (en) * 2018-04-28 2018-10-12 北京东软医疗设备有限公司 Image split-joint method and device
CN110544204A (en) * 2019-07-31 2019-12-06 华南理工大学 image splicing method based on block matching
CN111553870A (en) * 2020-07-13 2020-08-18 成都中轨轨道设备有限公司 Image processing method based on distributed system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
张少坤: "Research on Key Technologies of Panoramic Image Stitching", China Master's Theses Full-text Database, Information Science and Technology *
谷雨 et al.: "Image stitching combining optimal seam line and multi-resolution fusion", Journal of Image and Graphics *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112233049A (en) * 2020-12-14 2021-01-15 成都中轨轨道设备有限公司 Image fusion method for improving image definition
CN114780004A (en) * 2022-04-11 2022-07-22 北京达佳互联信息技术有限公司 Image display method and device, electronic equipment and storage medium
CN117014704A (en) * 2023-03-31 2023-11-07 苏州宇懋医学科技有限公司 Image fusion device and method for parallelizing and optimizing image fusion algorithm
CN117014704B (en) * 2023-03-31 2024-02-27 苏州宇懋医学科技有限公司 Image fusion device and method for parallelizing and optimizing image fusion algorithm

Similar Documents

Publication Publication Date Title
CN111563867A (en) Image fusion method for improving image definition
CN108760767B (en) Large-size liquid crystal display defect detection method based on machine vision
CN102175700B (en) Method for detecting welding seam segmentation and defects of digital X-ray images
CN103632359B (en) A kind of video super-resolution disposal route
CN109883654B (en) Checkerboard graph for OLED (organic light emitting diode) sub-pixel positioning, generation method and positioning method
US9646370B2 (en) Automatic detection method for defects of a display panel
CN111553870B (en) Image processing method based on distributed system
Oliveira et al. A probabilistic approach for color correction in image mosaicking applications
US10148895B2 (en) Generating a combined infrared/visible light image having an enhanced transition between different types of image information
US20070024639A1 (en) Method of rendering pixel images from abstract datasets
US7102637B2 (en) Method of seamless processing for merging 3D color images
CN110533036B (en) Rapid inclination correction method and system for bill scanned image
CN106952262B (en) Ship plate machining precision analysis method based on stereoscopic vision
JP6830712B1 (en) Random sampling Consistency-based effective area extraction method for fisheye images
CN112233049B (en) Image fusion method for improving image definition
CN114972575A (en) Linear fitting algorithm based on contour edge
CN117036641A (en) Road scene three-dimensional reconstruction and defect detection method based on binocular vision
CN110909772B (en) High-precision real-time multi-scale dial pointer detection method and system
CN110717910B (en) CT image target detection method based on convolutional neural network and CT scanner
CN112037128A (en) Panoramic video splicing method
CN109961393A (en) Subpixel registration and splicing based on interpolation and iteration optimization algorithms
JP2006133055A (en) Unevenness defect detection method and device, spatial filter, unevenness defect inspection system, and program for unevenness defect detection method
CN110717890B (en) Butt joint ring identification method and medium
CN113920423A (en) Immunoblotting image identification method
CN114354622A (en) Defect detection method, device, equipment and medium for display screen

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200821