CN112258437A - Projection image fusion method, device, equipment and storage medium - Google Patents
Projection image fusion method, device, equipment and storage medium
- Publication number
- CN112258437A CN112258437A CN202011137931.8A CN202011137931A CN112258437A CN 112258437 A CN112258437 A CN 112258437A CN 202011137931 A CN202011137931 A CN 202011137931A CN 112258437 A CN112258437 A CN 112258437A
- Authority
- CN
- China
- Prior art keywords
- fused
- image
- point
- images
- corner
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Abstract
The application discloses a projection image fusion method, apparatus, device and storage medium, wherein the method comprises the following steps: acquiring the image to be fused corresponding to each projection source; performing feature extraction on each image to be fused through the Harris corner algorithm to obtain a corresponding corner feature matrix; extracting key corner points from each corner feature matrix according to a preset key point extraction principle; obtaining, based on the key corner points, the set of excellent matching point pairs corresponding to each image to be fused by using Lowe's algorithm; performing image registration on the corresponding images to be fused according to each set of excellent matching point pairs; and performing image fusion on all the registered images to be fused to obtain the fused image. This solves the technical problems in existing projection fusion that the feature extraction method cannot accurately express the images to be fused, so the accuracy of the extracted features is low, and that matching accuracy cannot be guaranteed during feature matching, which distorts local detail after image fusion.
Description
Technical Field
The present application relates to the field of image fusion technologies, and in particular, to a method, an apparatus, a device, and a storage medium for fusing projected images.
Background
Projection fusion overlaps the edges of the pictures projected by a group of projectors and, through fusion technology, displays a single bright picture that is seamlessly and naturally joined.
In the prior art, the feature extraction method used during projection fusion cannot accurately express the images to be fused, so the accuracy of the extracted features is low; moreover, matching accuracy cannot be guaranteed during feature matching, which distorts local detail after image fusion.
Disclosure of Invention
The application provides a projection image fusion method, a projection image fusion device, projection image fusion equipment and a storage medium, and solves the technical problems that when existing projection fusion is carried out, a feature extraction method cannot accurately express an image to be fused, so that the accuracy of extracted features is low, and the matching accuracy cannot be guaranteed when the features are matched, so that local detail after the image fusion is distorted.
In view of this, the first aspect of the present application provides a projection image fusion method, including:
acquiring images to be fused corresponding to all projection sources;
extracting the characteristics of each image to be fused by a harris angular point algorithm to obtain a corresponding angular point characteristic matrix;
extracting key angular points from each angular point feature matrix according to a preset key point extraction principle;
based on the key angular points, acquiring excellent matching point pairs corresponding to the images to be fused by using a Lowe's algorithm;
carrying out image registration on the corresponding image to be fused according to each excellent matching point pair set;
and carrying out image fusion on all the images to be fused after image registration to obtain fused images.
Optionally, the performing feature extraction on each image to be fused through a harris corner algorithm to obtain a corresponding corner feature matrix specifically includes:
performing feature extraction on each pixel point in each image to be fused through a harris corner algorithm to obtain corner feature corresponding to each pixel point;
and combining all the corner features corresponding to the images to be fused to obtain the corner feature matrix.
Optionally, the performing, by using a harris corner algorithm, feature extraction on each pixel point in each image to be fused to obtain a corner feature corresponding to each pixel point specifically includes:
filtering the pixel points in the images to be fused according to a horizontal difference operator to obtain a first information amount of each pixel point in the x direction;
filtering the pixel points in the images to be fused according to a vertical difference operator to obtain a second information amount of each pixel point in the y direction;
constructing an autocorrelation matrix corresponding to each pixel point according to the first information quantity and the second information quantity;
and calculating the angular point quantity corresponding to each pixel point according to the autocorrelation matrix based on an angular point calculation formula to obtain angular point characteristics.
Optionally, the corner point calculation formula is:

cim = (Ix²·Iy² − (Ix·Iy)²) / (Ix² + Iy²)

where cim is the corner quantity, Ix is the first information quantity, and Iy is the second information quantity.
Optionally, the obtaining, based on the key corner point, an excellent matching point pair set corresponding to each image to be fused by using a Lowe's algorithm specifically includes:
calculating excellent matching point pairs corresponding to the key corner points of the images to be fused by using a Lowe's algorithm;
and combining all the excellent matching point pairs corresponding to the images to be fused to obtain the excellent matching point pair set.
Optionally, the calculating, by using the Lowe's algorithm, excellent matching point pairs corresponding to the key corner points of each image to be fused specifically includes:
selecting a first key corner point from a first image to be fused, wherein the first image to be fused is one image to be fused of all the images to be fused;
calculating the Euclidean distance between a key corner point in a second image to be fused and the first key corner point, wherein the second image to be fused is one image to be fused of all the images to be fused except the first image to be fused;
sorting the key corner points of the second image to be fused by Euclidean distance in increasing order, and taking the two key corner points with the shortest distances as selected corner points;
calculating the ratio of the shortest Euclidean distance to the second shortest Euclidean distance in the selected corner points;
when the ratio is smaller than a preset ratio threshold, taking the selected corner point with the shortest Euclidean distance among the selected corner points as a second key corner point;
and taking the first key corner point and the second key corner point as an excellent matching point pair for the first key corner point.
Optionally, the method further comprises:
and performing weighted fusion on the overlapped boundaries in the fused image to obtain a new fused image.
The present application provides in a second aspect a projection image fusion apparatus, including:
the first acquisition unit is used for acquiring the images to be fused corresponding to the projection sources;
the first extraction unit is used for extracting the characteristics of each image to be fused through a harris corner algorithm to obtain a corresponding corner characteristic matrix;
the second extraction unit is used for extracting key angular points from each angular point feature matrix according to a preset key point extraction principle;
the second obtaining unit is used for obtaining excellent matching point pairs corresponding to the images to be fused by using a Lowe's algorithm based on the key angular points;
the image registration unit is used for carrying out image registration on the corresponding image to be fused according to each excellent matching point pair set;
and the image fusion unit is used for carrying out image fusion on all the images to be fused after image registration to obtain fused images.
A third aspect of the present application provides a projection image fusion apparatus, the apparatus comprising a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to execute the projection image fusion method according to the first aspect according to instructions in the program code.
A fourth aspect of the present application provides a storage medium characterized by storing program code for executing the projection image fusion method according to the first aspect.
According to the technical scheme, the method has the following advantages:
the application provides a projection image fusion method, which comprises the following steps: acquiring images to be fused corresponding to all projection sources; extracting the characteristics of each image to be fused by a harris angular point algorithm to obtain a corresponding angular point characteristic matrix; extracting key angular points from each angular point feature matrix according to a preset key point extraction principle; based on the key angular points, acquiring excellent matching point pairs corresponding to the images to be fused by using a Lowe's algorithm; carrying out image registration on the corresponding image to be fused according to each excellent matching point pair set; and performing image fusion on all images to be fused after image registration to obtain fused images. In the application, the corner feature matrix of each image to be fused is obtained through the harris corner algorithm, the defect that the feature extraction of the traditional extraction algorithm is incomplete is overcome, then the key corners are selected from the corner feature matrix, and then excellent matching point pairs are obtained based on the key corners, so that the excellent matching point pairs are more accurate, the accuracy of subsequent matching is improved, and the technical problem that when the existing projection fusion is carried out, the feature extraction method cannot accurately express the image to be fused, the accuracy of the extracted features is low, the matching accuracy cannot be guaranteed when the features are matched, and the local detail is distorted after the image fusion is caused is solved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a schematic flowchart of a first embodiment of a projection image fusion method in an embodiment of the present application;
fig. 2 is a schematic flowchart of a second embodiment of a projection image fusion method in an embodiment of the present application;
fig. 3 is a schematic structural diagram of an embodiment of a projection image fusion apparatus in an embodiment of the present application.
Detailed Description
The embodiment of the application provides a projection image fusion method, a projection image fusion device, projection image fusion equipment and a storage medium, and solves the technical problems that when the existing projection fusion is carried out, a feature extraction method cannot accurately express an image to be fused, so that the accuracy of extracted features is not high, and the matching accuracy cannot be guaranteed when the features are matched, so that the local detail after the image fusion is distorted.
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1, a flowchart of a first embodiment of a method for fusing projected images in an embodiment of the present application is shown.
Step 101, acquiring the images to be fused corresponding to each projection source.

Before image fusion, the images to be fused need to be acquired; each image to be fused corresponds to one projection source. In this embodiment, high-definition cameras of the same model are selected as the acquisition devices for the images to be fused, and the built-in parameters of the cameras are initialized to be consistent.
And 102, extracting the characteristics of each image to be fused through a harris corner algorithm to obtain a corresponding corner characteristic matrix.
And 103, extracting key corner points from the corner point feature matrix according to a preset key point extraction principle.
It can be understood that the preset key point extraction principle in this embodiment is as follows: the key corner point is larger than a preset corner point threshold value, and the key corner point is a local maximum value in a neighborhood corresponding to the key corner point.
In the process of extracting the key corner points, increasing the preset corner threshold reduces the number of extracted key corner points, while decreasing it increases their number; in addition, the size of the neighbourhood used when finding local maxima also affects the number of key corner points extracted. Those skilled in the art can set these according to specific needs, and they are not limited here.
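The key-point extraction principle described above (corner quantity above a threshold, and a local maximum in its neighbourhood) can be sketched as a plain scan over the corner feature matrix. The function name, the neighbourhood radius parameter, and the NumPy representation are illustrative assumptions, not part of the application:

```python
import numpy as np

def extract_key_corners(cim, threshold, radius=1):
    # A pixel is a key corner point if its corner quantity exceeds the
    # preset threshold AND is the maximum within its
    # (2*radius+1) x (2*radius+1) neighbourhood.
    h, w = cim.shape
    keys = []
    for i in range(radius, h - radius):
        for j in range(radius, w - radius):
            v = cim[i, j]
            if v <= threshold:
                continue
            window = cim[i - radius:i + radius + 1, j - radius:j + radius + 1]
            if v >= window.max():          # local maximum (ties kept)
                keys.append((i, j))
    return keys
```

Raising `threshold` yields fewer key corner points and lowering it yields more, matching the tuning behaviour noted above.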
And step 104, acquiring excellent matching point pairs corresponding to the images to be fused by using a Lowe's algorithm based on the key corner points.
And 105, carrying out image registration on the corresponding image to be fused according to each excellent matching point pair set.
And step 106, carrying out image fusion on all images to be fused after image registration to obtain fused images.
In the embodiment, the corner feature matrix of each image to be fused is obtained through the harris corner algorithm, the defect that feature extraction is incomplete through the traditional extraction algorithm is overcome, then key corners are selected from the corner feature matrix, and then excellent matching point pairs are obtained based on the key corners, so that the excellent matching point pairs are more accurate, the accuracy of subsequent matching is improved, and the technical problems that when the existing projection fusion is carried out, the feature extraction method cannot accurately express the image to be fused, the accuracy of extracted features is low, the matching accuracy cannot be guaranteed when the features are matched, and local detail distortion is caused after the image fusion are solved.
The above is a first embodiment of the projection image fusion method provided in the embodiments of the present application, and the following is a second embodiment of the projection image fusion method provided in the embodiments of the present application.
Referring to fig. 2, a flowchart of a second embodiment of a method for fusing projected images in an embodiment of the present application is shown.
It should be noted that the description of step 201 is the same as the description of step 101 in the first embodiment, and reference may be specifically made to the description of step 101, which is not described herein again.
And 202, filtering the pixel points in each image to be fused according to a horizontal difference operator to obtain a first information amount of each pixel point in the x direction.
And 203, filtering the pixel points in each image to be fused according to the vertical difference operator to obtain a second information amount of each pixel point in the y direction.
And 204, constructing an autocorrelation matrix corresponding to each pixel point according to the first information quantity and the second information quantity.
The autocorrelation matrix in this embodiment is:

m = | Ix²    Ix·Iy |
    | Ix·Iy  Iy²   |

where m is the autocorrelation matrix, Ix is the first information quantity, and Iy is the second information quantity.
it can be understood that, after obtaining the autocorrelation matrix, before performing the corner feature calculation, gaussian smoothing filtering may be performed on elements in the autocorrelation matrix to obtain a new autocorrelation matrix m, where the discrete two-dimensional zero-mean gaussian function is:
where σ is a smoothing degree parameter.
And step 205, calculating the corner amount corresponding to each pixel point according to the autocorrelation matrix based on a corner calculation formula to obtain the corner features.
The corner point calculation formula in this embodiment is:

cim = (Ix²·Iy² − (Ix·Iy)²) / (Ix² + Iy²)

in the formula, cim is the corner quantity.
And step 206, combining all corner features corresponding to the images to be fused to obtain a corner feature matrix.
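The Harris corner feature extraction described above — difference-operator filtering, building the autocorrelation-matrix elements, Gaussian smoothing, and the cim calculation — can be sketched in NumPy. The kernel sizes, the naive filtering helper, and the small ε guarding the division are illustrative assumptions:

```python
import numpy as np

def filter2d(img, kernel):
    # naive 'same'-size cross-correlation with zero padding
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def harris_cim(img, sigma=1.0, eps=1e-6):
    dx = np.array([[-1.0, 0.0, 1.0]])   # horizontal difference operator
    dy = dx.T                           # vertical difference operator
    Ix = filter2d(img, dx)              # first information amount (x direction)
    Iy = filter2d(img, dy)              # second information amount (y direction)
    # Gaussian kernel for smoothing the autocorrelation-matrix elements
    size = int(6 * sigma) | 1
    ax = np.arange(size) - size // 2
    g = np.exp(-ax ** 2 / (2.0 * sigma ** 2))
    g2 = np.outer(g, g)
    g2 /= g2.sum()
    Ix2 = filter2d(Ix * Ix, g2)
    Iy2 = filter2d(Iy * Iy, g2)
    Ixy = filter2d(Ix * Iy, g2)
    # corner quantity cim for every pixel; stacking all pixels' values
    # gives the corner feature matrix
    return (Ix2 * Iy2 - Ixy ** 2) / (Ix2 + Iy2 + eps)
```

On a synthetic image containing one corner, cim peaks near the corner and stays near zero in flat regions, which is what the key-point extraction step relies on.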
And step 207, extracting key corner points from the corner point feature matrix according to a preset key point extraction principle.
It should be noted that the description of step 207 is the same as the description of step 103 in the first embodiment, and reference may be specifically made to the description of step 103, which is not described herein again.
And 208, calculating excellent matching point pairs corresponding to the key corner points of the images to be fused by using a Lowe's algorithm.
It should be noted that, by using the Lowe's algorithm, calculating excellent matching point pairs corresponding to each key corner point of each image to be fused specifically includes:
selecting a first key corner point from a first image to be fused, wherein the first image to be fused is one image to be fused of all images to be fused;
calculating the Euclidean distance between a key corner point in a second image to be fused and the first key corner point, wherein the second image to be fused is one image to be fused of all the images to be fused except the first image to be fused;
sorting the key corner points of the second image to be fused by Euclidean distance in increasing order, and taking the two key corner points with the shortest distances as selected corner points;
calculating the ratio of the shortest Euclidean distance to the second shortest Euclidean distance in the selected corner points;
when the ratio is smaller than a preset ratio threshold, taking the selected corner point with the shortest Euclidean distance among the selected corner points as a second key corner point;
and taking the first key corner point and the second key corner point as an excellent matching point pair for the first key corner point.
It can be understood that decreasing the preset ratio threshold in this embodiment reduces the number of excellent matching point pairs but makes them more stable, and vice versa.
The commonly recommended preset ratio threshold is 0.8; in this embodiment, a large number of picture pairs with arbitrary scale, rotation and brightness changes were matched, and the results show that the threshold works best between 0.4 and 0.6: below 0.4 there are very few matching points, and above 0.6 there are a large number of false matching points. The suggested principle for the preset ratio threshold is therefore:
the preset ratio threshold is 0.4: matching with high accuracy requirements;
the preset ratio threshold is 0.6: the number of matching points is required to be more;
the preset ratio threshold is 0.5: for general cases.
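The ratio test of steps above can be sketched as follows. Here each key corner point is assumed to carry a descriptor vector for the Euclidean-distance comparison; the function name, the descriptor form, and the default ratio of 0.5 (the "general case" above) are illustrative assumptions:

```python
import math

def ratio_test_match(corners_a, desc_a, corners_b, desc_b, ratio=0.5):
    # corners_*: key corner coordinates; desc_*: one vector per corner
    pairs = []
    for pa, da in zip(corners_a, desc_a):
        # Euclidean distances to every key corner of the second image,
        # sorted in increasing order
        dists = sorted(
            (math.dist(da, db), pb) for pb, db in zip(corners_b, desc_b)
        )
        if len(dists) < 2:
            continue
        nearest, second = dists[0], dists[1]
        # keep the pair only when shortest / second-shortest < ratio
        if second[0] > 0 and nearest[0] / second[0] < ratio:
            pairs.append((pa, nearest[1]))
    return pairs
```

An ambiguous match (two nearly equal shortest distances) fails the ratio test and is discarded, which is what makes the surviving pairs "excellent".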
And 209, combining all excellent matching point pairs corresponding to the images to be fused to obtain an excellent matching point pair set.
And step 210, carrying out image registration on the corresponding image to be fused according to each excellent matching point pair set.
It should be noted that the purpose of image registration is to make two images to be fused match in a certain sense. In this embodiment, image registration converts the multiple images to be fused into the same coordinate system, and the perspective transformation matrix is solved by using points on the images to be registered.
Let the coordinates of the viewpoint be (xe, ye, ze). Translating the viewpoint to (0, 0, 0) yields the view matrix T:

T = | 1 0 0 −xe |
    | 0 1 0 −ye |
    | 0 0 1 −ze |
    | 0 0 0  1  |

The recovery matrix T′ corresponding to the view matrix T is:

T′ = | 1 0 0 xe |
     | 0 1 0 ye |
     | 0 0 1 ze |
     | 0 0 0 1  |
The viewpoint (xe, ye, ze) is translated to the origin, and the equation of the original plane is:
ax+by+cz+d=0;
x' = x − xe;
y' = y − ye;
z' = z − ze;
ax' + by' + cz' + a·xe + b·ye + c·ze + d = 0;
a' = a;
b' = b;
c' = c;
d' = a·xe + b·ye + c·ze + d;
in the formula, a, b, c and d are the undetermined coefficients of the plane equation, (x', y', z') are the corresponding coordinates after the viewpoint is translated, and a', b', c', d' are the equation coefficients after conversion.
For a viewpoint at (0,0,0), the plane equation for this time is:
a'x' + b'y' + c'z' + d' = 0;
the three points of the viewpoint, any point p in the space and the projection p' of the point p are collinear: the projection p 'of the spatial point p (x, y, z) on the plane at this time is t (x, y, z), and the plane equation at this time is substituted with p', to obtain:
t=-d'/(a'x'+b'x'+c'z');
for any point and direction to the projection plane is also arbitrary, and its perspective projection transformation matrix, P ═ T-1P 'T, where P' is the inverse of P.
And step 211, performing image fusion on all images to be fused after image registration to obtain fused images.
And step 212, performing weighted fusion on the overlapped boundaries in the fused image to obtain a new fused image.
Generally, directly fused images look visually unnatural because of the overlapping boundary: differences in illumination and colour make the transition between the two images at the boundary abrupt. In this embodiment, the overlapping boundary is therefore weighted and fused, so that the first image transitions slowly into the second over the overlapping part; that is, the pixel values of the overlap region are added with certain weights to synthesise a new image.
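The weighted fusion of the overlap can be sketched with a linear weight ramp across the overlapping columns. The linear weighting, single-channel images, and purely horizontal overlap are simplifying assumptions; the application only specifies that overlap pixels are added with certain weights:

```python
import numpy as np

def blend_overlap(left, right, overlap):
    # left, right: registered grayscale images (H x W) that share
    # `overlap` columns; the weight on `left` ramps linearly from
    # 1 to 0 across the overlap so the images transition slowly
    h, w = left.shape
    out_w = 2 * w - overlap
    out = np.zeros((h, out_w))
    out[:, :w - overlap] = left[:, :w - overlap]     # left-only region
    out[:, w:] = right[:, overlap:]                  # right-only region
    alpha = np.linspace(1.0, 0.0, overlap)           # fade-out weight
    out[:, w - overlap:w] = (alpha * left[:, w - overlap:] +
                             (1 - alpha) * right[:, :overlap])
    return out
```

At the left edge of the overlap the output equals the first image, and at the right edge it equals the second, so there is no visible seam.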
In the embodiment, the corner feature matrix of each image to be fused is obtained through the harris corner algorithm, the defect that feature extraction is incomplete through the traditional extraction algorithm is overcome, then key corners are selected from the corner feature matrix, and then excellent matching point pairs are obtained based on the key corners, so that the excellent matching point pairs are more accurate, the accuracy of subsequent matching is improved, and the technical problems that when the existing projection fusion is carried out, the feature extraction method cannot accurately express the image to be fused, the accuracy of extracted features is low, the matching accuracy cannot be guaranteed when the features are matched, and local detail distortion is caused after the image fusion are solved.
The second embodiment of the method for fusing projected images provided in the embodiments of the present application is described above, and the following is an embodiment of the device for fusing projected images provided in the embodiments of the present application, please refer to fig. 3.
Referring to fig. 3, a schematic structural diagram of an embodiment of a projection image fusion apparatus in an embodiment of the present application includes:
a first obtaining unit 301, configured to obtain an image to be fused corresponding to each projection source;
a first extraction unit 302, configured to perform feature extraction on each image to be fused through a harris corner algorithm to obtain a corresponding corner feature matrix;
a second extraction unit 303, configured to extract a key corner from each corner feature matrix according to a preset key point extraction principle;
a second obtaining unit 304, configured to obtain, based on the key corner points, an excellent matching point pair set corresponding to each image to be fused by using a Lowe's algorithm;
an image registration unit 305, configured to perform image registration on the corresponding image to be fused according to each excellent matching point pair set;
and the image fusion unit 306 is configured to perform image fusion on all images to be fused after the image registration to obtain a fused image.
In the embodiment, the corner feature matrix of each image to be fused is obtained through the harris corner algorithm, the defect that feature extraction is incomplete through the traditional extraction algorithm is overcome, then key corners are selected from the corner feature matrix, and then excellent matching point pairs are obtained based on the key corners, so that the excellent matching point pairs are more accurate, the accuracy of subsequent matching is improved, and the technical problems that when the existing projection fusion is carried out, the feature extraction method cannot accurately express the image to be fused, the accuracy of extracted features is low, the matching accuracy cannot be guaranteed when the features are matched, and local detail distortion is caused after the image fusion are solved.
The embodiment of the application also provides projection image fusion equipment, which comprises a processor and a memory; the memory is used for storing the program codes and transmitting the program codes to the processor; the processor is configured to execute the projection image fusion method of the first embodiment or the second embodiment according to instructions in the program code.
The embodiment of the application further provides a storage medium, wherein the storage medium is used for storing program codes, and the program codes are used for executing the projection image fusion method in the first embodiment or the second embodiment.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into units is only a logical functional division, and other divisions are possible in actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to the needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only intended to illustrate the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, and some of their technical features may be replaced by equivalents; such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of this application.
Claims (10)
1. A projection image fusion method, comprising:
acquiring an image to be fused corresponding to each projection source;
performing feature extraction on each image to be fused by a Harris corner algorithm to obtain a corresponding corner feature matrix;
extracting key corner points from each corner feature matrix according to a preset key point extraction principle;
obtaining, based on the key corner points, a set of good matching point pairs corresponding to each image to be fused by using Lowe's algorithm;
performing image registration on the corresponding images to be fused according to each set of good matching point pairs;
and performing image fusion on all the registered images to be fused to obtain a fused image.
2. The projection image fusion method according to claim 1, wherein performing feature extraction on each image to be fused by a Harris corner algorithm to obtain a corresponding corner feature matrix specifically comprises:
performing feature extraction on each pixel point in each image to be fused by the Harris corner algorithm to obtain a corner feature corresponding to each pixel point;
and combining all the corner features corresponding to each image to be fused to obtain the corner feature matrix.
3. The projection image fusion method according to claim 2, wherein performing feature extraction on each pixel point in each image to be fused by the Harris corner algorithm to obtain a corner feature corresponding to each pixel point specifically comprises:
filtering the pixel points in each image to be fused with a horizontal difference operator to obtain a first information amount of each pixel point in the x direction;
filtering the pixel points in each image to be fused with a vertical difference operator to obtain a second information amount of each pixel point in the y direction;
constructing an autocorrelation matrix for each pixel point from the first information amount and the second information amount;
and calculating a corner amount for each pixel point from the autocorrelation matrix based on a corner response formula to obtain the corner features.
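The four steps of claim 3 can be sketched in a few lines of NumPy. This is a minimal illustration, not the patented implementation: central differences stand in for the horizontal and vertical difference operators, the window size `win` and the constant `k = 0.04` are conventional choices the claim does not fix, and the response formula det(M) - k*trace(M)^2 is the standard Harris corner measure.

```python
import numpy as np

def harris_response(img, k=0.04, win=3):
    """Per-pixel corner response, following the steps of claim 3.

    Difference operators give the first (x-direction) and second
    (y-direction) information amounts; their windowed products form the
    2x2 autocorrelation matrix M = [[Sxx, Sxy], [Sxy, Syy]]; the corner
    amount is det(M) - k * trace(M)**2.
    """
    img = img.astype(float)
    # Horizontal / vertical difference operators (central differences).
    Ix = np.gradient(img, axis=1)
    Iy = np.gradient(img, axis=0)

    def box(a):
        # Sum over a win x win neighbourhood (zero padding at borders).
        r = win // 2
        padded = np.pad(a, r, mode='constant')
        out = np.zeros_like(a)
        for dy in range(win):
            for dx in range(win):
                out += padded[dy:dy + a.shape[0], dx:dx + a.shape[1]]
        return out

    # Entries of the autocorrelation matrix for every pixel.
    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace * trace
```

On a synthetic image with a single bright quadrant, the response peaks at the quadrant's inner corner, while edges produce negative values; the key-point extraction of claim 1 would then keep only the strongest responses.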
5. The projection image fusion method according to claim 1, wherein obtaining, based on the key corner points, the set of good matching point pairs corresponding to each image to be fused by using Lowe's algorithm specifically comprises:
calculating the good matching point pairs corresponding to the key corner points of each image to be fused by using Lowe's algorithm;
and combining all the good matching point pairs corresponding to each image to be fused to obtain the set of good matching point pairs.
6. The projection image fusion method according to claim 1, wherein calculating the good matching point pairs corresponding to the key corner points of each image to be fused by using Lowe's algorithm specifically comprises:
selecting a first key corner point from a first image to be fused, wherein the first image to be fused is one of all the images to be fused;
calculating the Euclidean distance between each key corner point in a second image to be fused and the first key corner point, wherein the second image to be fused is one of all the images to be fused other than the first image to be fused;
selecting, from the key corner points of the second image to be fused, the two key corner points with the shortest Euclidean distances, in increasing order of distance, as selected corner points;
calculating the ratio of the shortest Euclidean distance to the second-shortest Euclidean distance among the selected corner points;
when the ratio is smaller than a preset ratio threshold, taking the selected corner point with the shortest Euclidean distance among the selected corner points as a second key corner point;
and taking the first key corner point and the second key corner point as a good matching point pair of the first key corner point.
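Lowe's ratio test described in claim 6 can be sketched as follows. One assumption is worth flagging: the claim measures Euclidean distance between key corner points, whereas in practice the distance is usually computed between descriptor vectors attached to those corners, so the sketch operates on generic feature vectors; the threshold 0.7 is a typical value for the preset ratio threshold the claim leaves open.

```python
import numpy as np

def lowe_ratio_match(query, candidates, ratio=0.7):
    """Ratio test of claim 6: take the two nearest candidates by
    Euclidean distance and accept the nearest only when the ratio of
    the shortest to the second-shortest distance is below `ratio`.

    `candidates` must contain at least two vectors. Returns the index
    of the matched candidate, or None if the match is ambiguous.
    """
    d = np.linalg.norm(candidates - query, axis=1)
    order = np.argsort(d)                  # increasing distance
    nearest, second = order[0], order[1]   # the two selected corners
    if d[nearest] / d[second] < ratio:
        return int(nearest)
    return None
```

A clear winner passes the test; two near-equidistant candidates fail it, which is exactly the filtering that keeps only "good" matching point pairs before registration.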
7. The projection image fusion method according to claim 1, further comprising:
performing weighted fusion on the overlapping boundaries in the fused image to obtain a new fused image.
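The weighted fusion of overlapping boundaries in claim 7 is commonly realized as linear feathering. The sketch below assumes two single-channel images with a horizontal overlap of known width; the linear weight ramp is an illustrative choice, since the claim does not fix the weighting function.

```python
import numpy as np

def feather_blend(left, right, overlap):
    """Weighted fusion across an overlap of `overlap` columns.

    The weight of `left` ramps linearly from 1 to 0 across the seam,
    and the weight of `right` from 0 to 1, so the two projections
    cross-fade instead of showing a hard boundary.
    """
    h, wl = left.shape
    _, wr = right.shape
    out = np.zeros((h, wl + wr - overlap))
    out[:, :wl - overlap] = left[:, :wl - overlap]   # left-only region
    out[:, wl:] = right[:, overlap:]                 # right-only region
    alpha = np.linspace(1.0, 0.0, overlap)           # weight of `left`
    out[:, wl - overlap:wl] = (alpha * left[:, wl - overlap:] +
                               (1.0 - alpha) * right[:, :overlap])
    return out
```

For color images the same ramp would be broadcast over the channel axis; smoother ramps (e.g. cosine) reduce visible banding at the cost of a wider transition.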
8. A projection image fusion apparatus, comprising:
a first acquisition unit, configured to acquire an image to be fused corresponding to each projection source;
a first extraction unit, configured to perform feature extraction on each image to be fused by a Harris corner algorithm to obtain a corresponding corner feature matrix;
a second extraction unit, configured to extract key corner points from each corner feature matrix according to a preset key point extraction principle;
a second acquisition unit, configured to obtain, based on the key corner points, a set of good matching point pairs corresponding to each image to be fused by using Lowe's algorithm;
an image registration unit, configured to perform image registration on the corresponding images to be fused according to each set of good matching point pairs;
and an image fusion unit, configured to perform image fusion on all the registered images to be fused to obtain a fused image.
9. A projection image fusion device, comprising a processor and a memory, wherein:
the memory is configured to store program code and transmit the program code to the processor;
and the processor is configured to execute the projection image fusion method according to any one of claims 1 to 7 according to instructions in the program code.
10. A storage medium, wherein the storage medium stores program code for executing the projection image fusion method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011137931.8A CN112258437A (en) | 2020-10-22 | 2020-10-22 | Projection image fusion method, device, equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112258437A true CN112258437A (en) | 2021-01-22 |
Family
ID=74264586
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011137931.8A (Pending) | Projection image fusion method, device, equipment and storage medium | 2020-10-22 | 2020-10-22 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112258437A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130208997A1 (en) * | 2010-11-02 | 2013-08-15 | Zte Corporation | Method and Apparatus for Combining Panoramic Image |
CN106060493A (en) * | 2016-07-07 | 2016-10-26 | 广东技术师范学院 | Multi-source projection seamless edge stitching method and system |
CN106898019A (en) * | 2017-02-21 | 2017-06-27 | 广西大学 | Method for registering images and device based on Scale invariant Harris features |
CN109993718A (en) * | 2019-03-05 | 2019-07-09 | 北京当红齐天国际文化发展集团有限公司 | A kind of multi-channel projection image interfusion method and device |
US20200090303A1 (en) * | 2016-12-16 | 2020-03-19 | Hangzhou Hikvision Digital Technology Co., Ltd. | Method and device for fusing panoramic video images |
CN111062927A (en) * | 2019-12-18 | 2020-04-24 | 广东电网有限责任公司 | Method, system and equipment for detecting image quality of unmanned aerial vehicle |
CN111524139A (en) * | 2020-04-02 | 2020-08-11 | 西安电子科技大学 | Bilateral filter-based corner detection method and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11632537B2 (en) | Method and apparatus for obtaining binocular panoramic image, and storage medium | |
CN109242961B (en) | Face modeling method and device, electronic equipment and computer readable medium | |
CN103337094B | Method for realizing motion three-dimensional reconstruction using a binocular camera | |
Varol et al. | Template-free monocular reconstruction of deformable surfaces | |
CN105100770B (en) | Three-dimensional source images calibration method and equipment | |
WO2011049046A1 (en) | Image processing device, image processing method, image processing program, and recording medium | |
CN107798702B (en) | Real-time image superposition method and device for augmented reality | |
US10726612B2 (en) | Method and apparatus for reconstructing three-dimensional model of object | |
CN112241933A (en) | Face image processing method and device, storage medium and electronic equipment | |
KR20110059506A (en) | System and method for obtaining camera parameters from multiple images and computer program products thereof | |
CN109525786B (en) | Video processing method and device, terminal equipment and storage medium | |
CN106296789B | Method and terminal for virtually implanting an object moving through a real scene | |
CN103313081A (en) | Image processing apparatus and method | |
US20190206117A1 (en) | Image processing method, intelligent terminal, and storage device | |
CN112348958A (en) | Method, device and system for acquiring key frame image and three-dimensional reconstruction method | |
US11812154B2 (en) | Method, apparatus and system for video processing | |
Wang et al. | Stereo vision–based depth of field rendering on a mobile device | |
WO2023093279A1 (en) | Image processing method and apparatus, and device, storage medium and computer program product | |
JP2008217593A (en) | Subject area extraction device and subject area extraction program | |
US11954832B2 (en) | Three-dimensional reconstruction method and apparatus | |
Wang et al. | Perf: Panoramic neural radiance field from a single panorama | |
CN117218320B (en) | Space labeling method based on mixed reality | |
CN114399610A (en) | Texture mapping system and method based on guide prior | |
CN112365589B (en) | Virtual three-dimensional scene display method, device and system | |
CN112258437A (en) | Projection image fusion method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20210122 |