CN115953332B - Dynamic image fusion brightness adjustment method, system and electronic equipment - Google Patents

Dynamic image fusion brightness adjustment method, system and electronic equipment

Info

Publication number: CN115953332B (grant of application CN202310246425.XA; earlier publication CN115953332A)
Authority: CN (China)
Prior art keywords: brightness, img2, points, images img1, feature points
Legal status: Active (granted)
Other languages: Chinese (zh)
Inventors: 宋小民 (Song Xiaomin), 刘征 (Liu Zheng), 姜春桐 (Jiang Chuntong)
Applicant and current assignee: Sichuan Xinshi Chuangwei Ultra High Definition Technology Co., Ltd.
Priority application: CN202310246425.XA


Abstract

The application provides a brightness adjustment method, system and electronic device for dynamic image fusion. Images img1 and img2, which are the same size and contain an overlapping region, are preprocessed; key feature points are then extracted from the preprocessed images img1 and img2; finally, the brightness of image img2 is adjusted according to the average brightness ratio of the key feature points in images img1 and img2. The scheme obtains the brightness at each feature point's position by bilinear interpolation over the four adjacent pixels, and unifies the brightness of multiple image sequences by computing the brightness ratio at feature points, improving the accuracy of the brightness-ratio calculation. In addition, a two-stage preprocessing mode is adopted: image graying and filtering with a large filtering kernel reduce the amount of computation, while the influence of abnormal points is removed during the filtering process, preserving the preprocessing quality and balancing effect against efficiency.

Description

Dynamic image fusion brightness adjustment method, system and electronic equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a brightness adjustment method, system, and electronic device for dynamic image fusion.
Background
As a core technology in video and image processing, image stitching connects two or more small-field-of-view images with overlapping areas into a single large-field-of-view image. It is widely applied in motion tracking and analysis, large-scene monitoring, virtual reality, automotive driver assistance, broadcast media and related fields. Image registration and image fusion are the two key technologies of image stitching. During image fusion, the brightness of the two images to be fused can differ significantly owing to shooting conditions or external factors. A common method is to sum the brightness of all pixels in the overlapping (to-be-fused) region of each image, compute the ratio of the brightness sums of the reference image and the observed image, and then adjust the brightness of the observed image according to this ratio. For a moving scene, however, if a moving object lies just outside the overlapping area of the reference image but inside the overlapping area of the observed image, the fused content differs between the two images, the brightness ratio computed by the conventional method becomes inaccurate, and the image fusion effect suffers.
To address the instability of the conventional method's fusion brightness in moving scenes, this patent provides an improved brightness adjustment method for image fusion in moving scenes: the brightness at each feature point's position is obtained by bilinear interpolation over the four adjacent pixels, and the brightness of multiple image sequences is unified by computing the brightness ratio at feature points, improving the accuracy of the brightness-ratio calculation and yielding a better stitching result.
Therefore, it is necessary to provide a brightness adjustment method, system and electronic device for dynamic image fusion, which solve the above technical problems.
Disclosure of Invention
In order to solve the technical problems, the application provides a brightness adjustment method, a system and electronic equipment for dynamic image fusion.
The application provides a brightness adjustment method for dynamic image fusion, which comprises the following steps:
preprocessing images img1 and img2, the images img1 and img2 being the same size and including overlapping regions;
extracting key feature points in the preprocessed images img1 and img2;
the brightness of the image img2 is adjusted according to the average brightness ratio of key feature points in the images img1 and img2.
Preferably, the preprocessing images img1 and img2 include:
graying processing images img1 and img2 to obtain gray images img1 and img2;
performing noise reduction processing on the gray images img1 and img2.
Preferably, the extracting key feature points in the preprocessed images img1 and img2 includes:
extracting feature point information of preprocessed images img1 and img2, wherein the feature point information comprises positions, scales and directions;
carrying out feature description on the feature points;
key feature points in the images img1 and img2 are matched, and feature points with matching errors are removed.
Preferably, the extracting feature point information of the preprocessed images img1 and img2 includes:
establishing a Gaussian pyramid L(x, y, σ) and a difference-of-Gaussian pyramid D(x, y, σ) using a Gaussian function;
Searching local extremum points through a Gaussian differential pyramid;
fitting discrete characteristic points;
the main direction of each feature point is calculated.
Preferably, the characterizing the feature points includes:
the feature description includes feature points and surrounding pixels that contribute thereto.
Preferably, the key feature points in the matching images img1 and img2 include:
calculating Euclidean distance of feature vectors of feature points of the images img1 and img2 in 128-dimensional directions;
judging the similarity of the corresponding feature points of the images img1 and img2 according to the Euclidean distance;
and matching the characteristic points meeting the Euclidean distance condition.
Preferably, a robust parameter estimation method is adopted to eliminate the mismatched feature points.
Preferably, the adjusting the brightness of the image img2 according to the average brightness ratio of the key feature points in the images img1 and img2 includes:
obtaining the positions and the brightness of the four adjacent pixel points according to the positions of the characteristic points;
screening the brightness of four adjacent pixel points;
calculating the position brightness of the characteristic points according to the screening result;
calculating the average value of the brightness ratios of all the characteristic points of the images img1 and img2;
the brightness of the image img2 is adjusted using the average value.
A brightness adjustment system for dynamic image fusion, comprising:
a preprocessing module for preprocessing images img1 and img2;
the feature extraction module is used for extracting key feature points in the preprocessed images img1 and img2;
and the brightness adjustment module is used for adjusting the brightness of the image img2 according to the average brightness ratio of the key feature points in the images img1 and img2.
An electronic device comprising a memory and a processor, the memory storing a computer program executable on the processor, wherein the processor executes the computer program to perform the steps of the method described above.
Compared with the related art, the brightness adjustment method, the system and the electronic equipment for dynamic image fusion have the following beneficial effects:
1. The application preprocesses images img1 and img2, which are the same size and include an overlapping region; then extracts key feature points from the preprocessed images img1 and img2; and finally adjusts the brightness of image img2 according to the average brightness ratio of the key feature points in images img1 and img2. The brightness at each feature point's position is obtained by bilinear interpolation over the four adjacent pixels, and the brightness of multiple image sequences is unified by computing the brightness ratio at feature points, improving the accuracy of the brightness-ratio calculation and the stitching result.
2. The application adopts a two-stage preprocessing mode: image graying and filtering with a large filtering kernel reduce the amount of computation, while the influence of abnormal points is innovatively removed during the filtering process, ensuring the filtering preprocessing effect. This balances effect against efficiency and overcomes the prior-art defect that color-image preprocessing cannot reconcile preprocessing quality with computational complexity.
Drawings
Fig. 1 is a schematic flow chart of a brightness adjustment method for dynamic image fusion disclosed by the application;
fig. 2 is a schematic overall flow chart of a brightness adjustment method for dynamic image fusion according to the present application;
FIG. 3 is a schematic diagram of a preprocessing flow of a brightness adjustment method for dynamic image fusion according to the present application;
fig. 4 is a schematic flow chart of a feature extraction process of a brightness adjustment method for dynamic image fusion disclosed by the application;
FIG. 5 is a schematic diagram of an image brightness adjustment flow of a brightness adjustment method for dynamic image fusion according to the present application;
FIG. 6 is a schematic diagram showing distribution of feature points and adjacent pixel points in a brightness adjustment method for dynamic image fusion according to the present application;
fig. 7 is a schematic structural diagram of a brightness adjustment system for dynamic image fusion according to the present application;
fig. 8 is a schematic structural diagram of an electronic device according to the present disclosure.
Detailed Description
The application is further described below with reference to the drawings and embodiments.
The brightness adjustment method for dynamic image fusion provided by the application, in this embodiment, as shown in fig. 1 and fig. 2, includes:
step S100: images img1 and img2 are preprocessed by a preprocessing module, the images img1 and img2 being the same size and including overlapping regions.
Specifically, for example, the images img1 and img2 are each 8K color digital images with RGB color channels and a resolution of 8000 (W) × 6000 (H), i.e., img1(x, y, z) and img2(x, y, z), where x = 8000, y = 6000, z = 3.
The steps of preprocessing the images img1 and img2 are shown in fig. 3, and include the following operation steps:
step S101: graying processing images img1 and img2 to obtain gray images img1 and img2;
specifically, in view of the independence of the present method on color information while reducing the amount of computation, it is first necessary to convert the color image img1 (x, y, z), img2 (x, y, z) into a grayscale image img1 (x, y), img2 (x, y).
Step S102: noise reduction processing is performed on the gray images img1 and img2.
Specifically, to further improve the accuracy of the calculation result, noise reduction is performed on the gray images obtained in step S101: mean filtering is carried out with a 7×7 filter kernel, and, to avoid the influence of abnormal data (such as dead pixels and isolated pixels), the largest and smallest 10% of the values in each window are removed before averaging;
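The filtering step above can be sketched as follows. The border handling and the rounding of the 10% trim count are not specified in the patent; edge replication and floor rounding are assumed here:

```python
def trimmed_mean_filter(img, k=7, trim=0.10):
    """Mean filter over a k x k window that drops the largest and smallest
    `trim` fraction of samples in each window before averaging, as a guard
    against dead or isolated pixels.

    Assumptions (not stated in the patent): replicated borders, floor
    rounding of the trim count.
    """
    h, w = len(img), len(img[0])
    r = k // 2
    drop = int(k * k * trim)              # samples dropped at each end
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = []
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yy = min(max(y + dy, 0), h - 1)   # replicate borders
                    xx = min(max(x + dx, 0), w - 1)
                    window.append(img[yy][xx])
            window.sort()
            kept = window[drop:len(window) - drop]
            out[y][x] = sum(kept) / len(kept)
    return out
```

With k = 7 each window holds 49 samples, so four values are trimmed from each end; a single dead pixel therefore never reaches the average.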
step S200: extracting key feature points in the preprocessed images img1 and img2 by using a feature extraction module;
specifically, as shown in fig. 4, this step includes the steps of:
step S201: extracting feature point information of preprocessed images img1 and img2, wherein the feature point information comprises positions, scales and directions;
specifically, the feature point extraction method comprises the following steps:
first, a Gaussian pyramid is established by using a Gaussian functionDifference pyramid->
Where m, n represent the dimensions of the gaussian template, (x, y) represent the positions of the image pixels,as a factor of the scale-space,is a convolution.
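A one-dimensional sketch of the Gaussian scale space and its differences may clarify the construction. The patent builds full 2-D pyramids with octaves; the 1-D signal, the base scale 1.6 and the factor k = √2 below are illustrative assumptions:

```python
import math

def gaussian_kernel(sigma, radius=None):
    """1-D Gaussian kernel G(., sigma), normalized to sum to 1."""
    if radius is None:
        radius = max(1, int(3 * sigma))
    vals = [math.exp(-(i * i) / (2 * sigma * sigma))
            for i in range(-radius, radius + 1)]
    s = sum(vals)
    return [v / s for v in vals]

def blur1d(signal, sigma):
    """Convolve a 1-D signal with G(., sigma), replicating the borders."""
    k = gaussian_kernel(sigma)
    r = len(k) // 2
    n = len(signal)
    return [sum(k[j + r] * signal[min(max(i + j, 0), n - 1)]
                for j in range(-r, r + 1)) for i in range(n)]

def dog_pyramid(signal, sigma0=1.6, k=2 ** 0.5, levels=4):
    """Scale space L(x, sigma_i) and its differences D_i = L(k*sigma) - L(sigma)."""
    L = [blur1d(signal, sigma0 * k ** i) for i in range(levels)]
    D = [[a - b for a, b in zip(L[i + 1], L[i])] for i in range(levels - 1)]
    return L, D
```

A constant signal produces an (essentially) all-zero difference pyramid, which is why DoG responds only to structure, not to uniform brightness.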
Secondly, local extremum points are searched for in the difference-of-Gaussian pyramid D: each sampling point is compared with its surrounding pixels, and if the sampling point is an extremum it is a feature point of the image at that scale. The surrounding pixels comprise the eight adjacent pixels at the same scale and the nine pixels at the corresponding positions in each of the two adjacent scales, 26 neighbours in total.
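The 26-neighbour extremum test described above can be sketched as follows, with the three DoG levels passed in as 2-D arrays (a minimal illustration, not the patent's implementation):

```python
def is_extremum(dog_below, dog_same, dog_above, y, x):
    """True if dog_same[y][x] is strictly greater or strictly smaller than
    all 26 neighbours: 8 at the same scale plus 9 in each adjacent scale."""
    c = dog_same[y][x]
    neighbours = []
    for level in (dog_below, dog_same, dog_above):
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if level is dog_same and dy == 0 and dx == 0:
                    continue           # skip the centre pixel itself
                neighbours.append(level[y + dy][x + dx])
    return all(c > v for v in neighbours) or all(c < v for v in neighbours)
```

Requiring a strict inequality against all 26 neighbours rejects flat regions, where the centre merely ties its surroundings.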
Then, a three-dimensional quadratic function is fitted to the discrete feature points to obtain their accurate positions and scales; the principal curvatures at each candidate point are obtained from a 2×2 Hessian matrix, edge responses are removed, and the principal-curvature ratio threshold T is taken as 1.2.
where

H = | D_xx  D_xy |
    | D_xy  D_yy |

D_xx is the second derivative in the x direction of an image at a given scale in the difference-of-Gaussian pyramid, D_yy is the second derivative in the y direction, and D_xy is the mixed derivative obtained by differentiating once in each of the x and y directions.
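The edge-response test can be sketched as below. The patent states only that the threshold T is 1.2; the standard SIFT criterion Tr(H)² / Det(H) < (T + 1)² / T is assumed here:

```python
def passes_edge_test(dxx, dyy, dxy, T=1.2):
    """Reject edge-like responses using the 2x2 Hessian of D.

    H = [[Dxx, Dxy], [Dxy, Dyy]].  The inequality used is the standard
    SIFT criterion Tr(H)^2 / Det(H) < (T + 1)^2 / T, an assumption; the
    patent only gives the threshold T = 1.2.
    """
    tr = dxx + dyy
    det = dxx * dyy - dxy * dxy
    if det <= 0:                # principal curvatures of opposite sign: reject
        return False
    return tr * tr / det < (T + 1) ** 2 / T
```

Points on edges have one large and one small principal curvature, making Tr² / Det large, so they fail the test while blob-like points pass.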
Finally, a direction is calculated for each feature point: the gradient magnitudes and orientations of the pixels in the feature point's neighbourhood are accumulated in a histogram, and the orientation of the histogram's maximum is taken as the feature point's main direction.
Step S202: carrying out feature description on the feature points;
specifically, feature points of the images img1 and img2 are characterized, namely, a group of vectors are used for describing the feature points so as to ensure that the feature points are not changed along with changes of visual angles and the like;
in addition, the feature description includes feature points and surrounding pixel points contributing thereto; the feature description was characterized using eight-directional gradient information calculated in a grid of 4*4 within the feature point scale space, for a total of 4 x 8 = 128-dimensional vector characterization.
Step S203: the key feature points in the images img1 and img2 are matched, and feature points with matching errors are eliminated, and the method specifically comprises the following operation steps:
first, euclidean distances of feature vectors of feature points of the images img1 and img2 in 128 dimensions are calculated as follows:
where j is the dimension, r i Sum s i Represents the position of the horizontal and vertical coordinates of the pixel point, and d is the Euclidean distance.
Next, the Euclidean distance is used to judge the similarity of two feature points: for a feature point A in img1, let B be its nearest neighbour in img2 by Euclidean distance and C the second nearest; if the ratio of the two distances is smaller than 0.6, the match between A and B is considered successful.
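The distance computation and the 0.6 ratio test can be sketched together. Descriptors are plain Python lists here, and `euclid` and `match` are illustrative names:

```python
import math

def euclid(r, s):
    """Euclidean distance between two descriptors r and s (128-d in the patent)."""
    return math.sqrt(sum((rj - sj) ** 2 for rj, sj in zip(r, s)))

def match(desc1, desc2, ratio=0.6):
    """For each descriptor in desc1, accept its nearest neighbour in desc2
    only if d(nearest) / d(second nearest) < ratio (0.6 in the patent)."""
    pairs = []
    for i, r in enumerate(desc1):
        dists = sorted((euclid(r, s), j) for j, s in enumerate(desc2))
        if len(dists) >= 2 and dists[1][0] > 0 and dists[0][0] / dists[1][0] < ratio:
            pairs.append((i, dists[0][1]))
    return pairs
```

The ratio test discards ambiguous matches: when the two closest candidates are nearly equidistant, the ratio approaches 1 and the match is rejected.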
Finally, feature points that are mismatched under the Euclidean-distance criterion are screened out using a robust estimation method.
Step S300: the brightness of the image img2 is adjusted according to the average brightness ratio of key feature points in the images img1 and img2.
Specifically, as shown in fig. 5, the method comprises the following operation steps:
step S301: obtaining the positions and the brightness of the four adjacent pixel points according to the positions of the characteristic points;
in particular, as shown in fig. 6,the positions and the brightness of the adjacent four pixels are obtained according to the positions of the feature points, because the precision of the feature points is required to be ensured in the matching process of the feature points, the coordinates are usually in decimal, so that the brightness of the feature points cannot be directly obtained when brightness calculation is carried out, and the scheme utilizes the brightness of the adjacent four pixels Q11 (x 1, y 1), Q12 (x 1, y 2), Q21 (x 2, y 1) and Q22 (x 2, y 2) around the feature points P (x, y),/>,/>,/>And calculating the brightness of the characteristic points.
Step S302: screening the brightness of four adjacent pixel points;
specifically, the brightness of adjacent pixel points,/>,/>,/>Screening is carried out, the difference between the brightness of each point and the brightness average value is judged, and the brightness of the pixel point with the difference being more than 15% is regarded as the point with larger fluctuation.
Step S303: calculating the position brightness of the characteristic points according to the screening result;
specifically, calculating the brightness of the characteristic points, and calculating the brightness of the characteristic points according to the brightness of the characteristic points if the characteristic points have only one difference point and are closest to the characteristic points according to the screening result of the brightness;
in other cases, according to bilinear interpolationCalculating brightness of feature point positions
Step S304: calculating the average value of the brightness ratios of all the characteristic points of the images img1 and img2;
specifically, the luminance ratio of all feature points of img1 and img2 is calculatedFor->Sequencing, namely removing the maximum and minimum 10% average value R, and taking the average value R as the adjusted brightness ratio, wherein the calculation formula is as follows:
wherein f 1 Representing the brightness of img1, f 2 Representing the brightness of img2.
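The trimmed averaging of the per-point ratios can be sketched as follows (floor rounding of the 10% trim count is an assumption):

```python
def luminance_ratio(f1_vals, f2_vals, trim=0.10):
    """Average brightness ratio R over matched feature points.

    Per-point ratios R_i = f1_i / f2_i are sorted and the largest and
    smallest `trim` fraction discarded before averaging; floor rounding
    of the trim count is assumed (not stated in the patent).
    """
    ratios = sorted(a / b for a, b in zip(f1_vals, f2_vals))
    drop = int(len(ratios) * trim)
    kept = ratios[drop:len(ratios) - drop] if drop else ratios
    return sum(kept) / len(kept)
```

Trimming makes R robust to the few feature points whose ratios are corrupted by noise or residual mismatches.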
Step S305: adjusting the brightness of the image img2 by using the average value;
specifically, the luminance of img2 is adjusted by using the calculated luminance ratio average value R, and the formula is as follows:
the method is simple and efficient, has good universality, is suitable for various motion non-motion scenes, does not distinguish videos and images, does not distinguish resolution, is compatible with 8K/4K/2K/1080P and the like, and is more accurate in calculation result by comparing the brightness ratio of characteristic points compared with the traditional method for calculating the brightness sum of the fusion parts of two images to obtain the brightness ratio.
The following table compares the brightness ratios computed by the traditional scheme and by this scheme, each normalized by the brightness ratio computed when no object is moving. The results show that the larger the moving object, the more accurate this method is relative to the traditional one.

Moving object size     | Traditional scheme (normalized) | This scheme (normalized)
No moving object       | 1                               | 1
Small moving object    | 0.993                           | 0.997
Medium moving object   | 0.972                           | 1.001
Large moving object    | 0.953                           | 0.996
A brightness adjustment system for dynamic image fusion, as shown in fig. 7, includes:
a preprocessing module for preprocessing images img1 and img2;
specifically, the preprocessing module is used for graying the processed images img1 and img2, obtaining the gray images img1 and img2, and reducing the noise to process the gray images img1 and img2
The feature extraction module is used for extracting key feature points in the preprocessed images img1 and img2;
specifically, the feature extraction module is used for extracting feature point information of the preprocessed images img1 and img2, wherein the feature point information comprises positions, scales and directions;
further, the feature extraction module is used for establishing a Gaussian pyramid and a Gaussian difference pyramid by utilizing a Gaussian function; searching local extremum points through a Gaussian differential pyramid; fitting discrete characteristic points; calculating the main direction of each feature point;
specifically, the feature extraction module is used for carrying out feature description on the feature points;
specifically, the feature extraction module is used for matching key feature points in the images img1 and img2 and eliminating feature points which are incorrectly matched.
Still further, the feature extraction module is configured to calculate the Euclidean distances between the 128-dimensional feature vectors of feature points of the images img1 and img2; judge the similarity of the corresponding feature points of the images img1 and img2 according to the Euclidean distance; and match the feature points meeting the Euclidean distance condition.
The brightness adjustment module is used for adjusting the brightness of the image img2 according to the average brightness ratio of the key feature points in the images img1 and img2;
specifically, the brightness adjustment module is used for obtaining the positions and brightness of the four adjacent pixel points according to the positions of the characteristic points; screening the brightness of four adjacent pixel points; calculating the position brightness of the characteristic points according to the screening result; calculating the average value of the brightness ratios of all the characteristic points of the images img1 and img2; the brightness of the image img2 is adjusted using the average value.
The brightness adjustment system for dynamic image fusion disclosed in this embodiment is implemented based on the brightness adjustment method for dynamic image fusion disclosed in the foregoing embodiment, and will not be described herein.
The brightness adjustment system for dynamic image fusion disclosed by this embodiment preprocesses images img1 and img2, which are the same size and contain an overlapping region; then extracts key feature points from the preprocessed images img1 and img2; and finally adjusts the brightness of image img2 according to the average brightness ratio of the key feature points in images img1 and img2. The brightness at each feature point's position is obtained by bilinear interpolation over the four adjacent pixels, and the brightness of multiple image sequences is unified by computing the brightness ratio at feature points, improving the accuracy of the brightness-ratio calculation. Meanwhile, a two-stage preprocessing mode is adopted: image graying and filtering with a large filtering kernel reduce the amount of computation, while the influence of abnormal points is removed during the filtering process, preserving the preprocessing quality and balancing effect against efficiency.
The embodiment discloses an electronic device, as shown in fig. 8, including a memory and a processor, where the memory stores a computer program that can run on the processor, and the processor executes the computer program to implement the steps of the brightness adjustment method for dynamic image fusion:
wherein the processor is configured to pre-process images img1 and img2, the images img1 and img2 being the same size and comprising overlapping regions; extracting key feature points in the preprocessed images img1 and img2;
the brightness of the image img2 is adjusted according to the average brightness ratio of key feature points in the images img1 and img2.
The memory is used for storing programs for executing the processing procedures by the processor.
The electronic device disclosed in this embodiment is implemented based on the brightness adjustment method for dynamic image fusion disclosed in the foregoing embodiment, and will not be described herein.
The electronic device disclosed by this embodiment preprocesses images img1 and img2, which are the same size and contain an overlapping region; then extracts key feature points from the preprocessed images img1 and img2; and finally adjusts the brightness of image img2 according to the average brightness ratio of the key feature points in images img1 and img2. The brightness at each feature point's position is obtained by bilinear interpolation over the four adjacent pixels, and the brightness of multiple image sequences is unified by computing the brightness ratio at feature points, improving the accuracy of the brightness-ratio calculation. Meanwhile, a two-stage preprocessing mode is adopted: image graying and filtering with a large filtering kernel reduce the amount of computation, while the influence of abnormal points is removed during the filtering process, preserving the preprocessing quality and balancing effect against efficiency.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative elements and steps are described above generally in terms of functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software modules may reside in Random Access Memory (RAM), memory, Read-Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (9)

1. A brightness adjustment method for dynamic image fusion, comprising:
preprocessing images img1 and img2, the images img1 and img2 being the same size and including overlapping regions;
extracting key feature points in the preprocessed images img1 and img2;
adjusting the brightness of the image img2 according to the average brightness ratio of key feature points in the images img1 and img2;
the adjusting the brightness of the image img2 according to the average brightness ratio of the key feature points in the images img1 and img2 comprises the following steps:
obtaining the positions and the brightness of the four adjacent pixel points according to the positions of the characteristic points;
screening the brightness of the four adjacent pixels: screening the brightnesses f(Q11), f(Q12), f(Q21), f(Q22) of the adjacent pixel points, and judging the difference between each point's brightness and the brightness mean, wherein a pixel point whose brightness differs from the mean by more than 15% is regarded as a point with larger fluctuation;
calculating the position brightness of the feature points according to the screening result: if only one of the four neighbours is a difference point and it is the pixel nearest to the feature point, calculating the feature point's brightness from that pixel's brightness;
otherwise, calculating the brightness f(x, y) of the feature point position according to bilinear interpolation;
Calculating the average value of the brightness ratios of all the characteristic points of the images img1 and img2;
the brightness of the image img2 is adjusted using the average value.
2. The brightness adjustment method of dynamic image fusion according to claim 1, wherein the preprocessing images img1 and img2 include:
graying processing images img1 and img2 to obtain gray images img1 and img2;
the noise reduction process is performed on the gradation images img1 and img2.
3. The brightness adjustment method of dynamic image fusion according to claim 1, wherein the extracting key feature points in the preprocessed images img1 and img2 comprises:
extracting feature point information of preprocessed images img1 and img2, wherein the feature point information comprises positions, scales and directions;
carrying out feature description on the feature points;
key feature points in the images img1 and img2 are matched, and feature points with matching errors are removed.
4. A brightness adjustment method for dynamic image fusion according to claim 3, wherein the extracting of the feature point information of the preprocessed images img1 and img2 comprises:
establishing a Gaussian pyramid L(x, y, σ) and a Gaussian differential pyramid D(x, y, σ) by using a Gaussian function;
searching local extremum points through a Gaussian differential pyramid;
fitting discrete characteristic points;
the main direction of each feature point is calculated.
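The pyramid construction and extremum search of claim 4 can be sketched as follows (an illustrative sketch of the standard SIFT-style scale space, using `scipy.ndimage.gaussian_filter`; the base sigma, number of scales and threshold are assumed values, and sub-pixel fitting and orientation assignment are omitted).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_pyramid(gray, sigma0=1.6, n_scales=5, k=2 ** 0.5):
    """Gaussian stack L(x, y, sigma) and its differences D(x, y, sigma)."""
    sigmas = [sigma0 * k ** i for i in range(n_scales)]
    L = np.stack([gaussian_filter(gray.astype(float), s) for s in sigmas])
    D = L[1:] - L[:-1]  # difference-of-Gaussian layers
    return L, D

def local_extrema(D, thresh=0.5):
    """Points that are strict extrema among their 26 scale-space neighbours."""
    pts = []
    for s in range(1, D.shape[0] - 1):
        for y in range(1, D.shape[1] - 1):
            for x in range(1, D.shape[2] - 1):
                cube = D[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2]
                v = D[s, y, x]
                if abs(v) > thresh and (v == cube.max() or v == cube.min()):
                    pts.append((s, y, x))
    return pts
```

Each candidate returned here would then be refined by fitting the discrete extremum (the claim's "fitting discrete feature points") before a main direction is assigned.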
5. The brightness adjustment method for dynamic image fusion according to claim 3, wherein performing feature description on the feature points comprises:
building a feature description that covers each feature point and the surrounding pixels that contribute to it.
6. The brightness adjustment method for dynamic image fusion according to claim 3, wherein matching the key feature points in images img1 and img2 comprises:
calculating the Euclidean distance between the 128-dimensional feature vectors of feature points in images img1 and img2;
judging the similarity of corresponding feature points in images img1 and img2 according to the Euclidean distance;
matching the feature points that satisfy the Euclidean distance condition.
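A minimal sketch of the matching step in claim 6 follows. The claim does not state what the "Euclidean distance condition" is; Lowe's nearest-to-second-nearest ratio test is used here as one common choice, so the `ratio` parameter is an assumption.

```python
import numpy as np

def match_descriptors(desc1, desc2, ratio=0.8):
    """Match 128-d descriptors by Euclidean distance; a pair is accepted when
    the nearest distance is below `ratio` times the second-nearest distance
    (Lowe's ratio test, one form of a Euclidean distance condition)."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)  # distance to every candidate
        order = np.argsort(dists)
        if len(order) > 1 and dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches
```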
7. The brightness adjustment method for dynamic image fusion according to claim 3, wherein incorrectly matched feature points are removed using a robust parameter-estimation method.
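Claim 7 names robust parameter estimation without specifying the model or algorithm. The sketch below uses RANSAC with a pure-translation model between matched point pairs purely for illustration; the model choice, iteration count and tolerance are all assumptions.

```python
import numpy as np

def ransac_filter(pts1, pts2, n_iter=200, tol=2.0, seed=0):
    """Keep matches consistent with a translation model estimated by RANSAC
    (one robust parameter-estimation method; illustrative model choice)."""
    pts1, pts2 = np.asarray(pts1, float), np.asarray(pts2, float)
    rng = np.random.default_rng(seed)
    best = np.zeros(len(pts1), dtype=bool)
    for _ in range(n_iter):
        i = rng.integers(len(pts1))
        t = pts2[i] - pts1[i]  # candidate translation from one sampled match
        inliers = np.linalg.norm(pts1 + t - pts2, axis=1) < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best  # boolean mask of matches kept as inliers
```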
8. A brightness adjustment system for dynamic image fusion, comprising:
a preprocessing module for preprocessing images img1 and img2;
a feature extraction module for extracting key feature points in the preprocessed images img1 and img2;
a brightness adjustment module for adjusting the brightness of image img2 according to the average brightness ratio of key feature points in images img1 and img2;
wherein adjusting the brightness of image img2 according to the average brightness ratio of the key feature points in images img1 and img2 comprises:
obtaining the positions and brightness values of the four adjacent pixel points from the position of each feature point;
screening the brightness of the four adjacent pixels: for the adjacent-pixel brightness values f(Q11), f(Q12), f(Q21), f(Q22), judging the difference between each point's brightness and the mean brightness, wherein a pixel whose difference exceeds 15% is regarded as a point with large fluctuation;
calculating the feature-point brightness according to the screening result: if there is exactly one large-fluctuation point and it is the point closest to the feature point, calculating the feature-point brightness from that point's brightness;
otherwise, calculating the feature-point brightness f(x, y) by bilinear interpolation of the four adjacent pixels;
calculating the average value of the brightness ratios over all matched feature points of images img1 and img2;
adjusting the brightness of image img2 using this average value.
9. An electronic device comprising a memory and a processor, the memory storing a computer program executable on the processor, characterized in that the processor executes the computer program to implement the steps of the method of any one of claims 1 to 7.
CN202310246425.XA 2023-03-15 2023-03-15 Dynamic image fusion brightness adjustment method, system and electronic equipment Active CN115953332B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310246425.XA CN115953332B (en) 2023-03-15 2023-03-15 Dynamic image fusion brightness adjustment method, system and electronic equipment


Publications (2)

Publication Number Publication Date
CN115953332A CN115953332A (en) 2023-04-11
CN115953332B true CN115953332B (en) 2023-08-18

Family

ID=87286395

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310246425.XA Active CN115953332B (en) 2023-03-15 2023-03-15 Dynamic image fusion brightness adjustment method, system and electronic equipment

Country Status (1)

Country Link
CN (1) CN115953332B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011013862A1 (en) * 2009-07-28 2011-02-03 Yujin Robot Co., Ltd. Control method for localization and navigation of mobile robot and mobile robot using same
CN103279939A (en) * 2013-04-27 2013-09-04 北京工业大学 Image stitching processing system
CN107220955A (en) * 2017-04-24 2017-09-29 东北大学 Image brightness equalization method based on overlapping-region feature point pairs
CN108416732A (en) * 2018-02-02 2018-08-17 重庆邮电大学 Panorama mosaic method based on image registration and multi-resolution fusion
CN108460724A (en) * 2018-02-05 2018-08-28 湖北工业大学 Adaptive image fusion method and system based on Mahalanobis distance discrimination
CN108986174A (en) * 2018-06-06 2018-12-11 链家网(北京)科技有限公司 Global tone mapping method and system for high-dynamic-range images across pictures
JP2019101997A (en) * 2017-12-07 2019-06-24 キヤノン株式会社 Image processing apparatus and image processing method reducing noise by composing plural captured images
CN111260543A (en) * 2020-01-19 2020-06-09 浙江大学 Underwater image splicing method based on multi-scale image fusion and SIFT features
CN113255696A (en) * 2021-05-25 2021-08-13 深圳市亚辉龙生物科技股份有限公司 Image recognition method and device, computer equipment and storage medium
CN113284063A (en) * 2021-05-24 2021-08-20 维沃移动通信有限公司 Image processing method, image processing apparatus, electronic device, and readable storage medium
WO2021237732A1 (en) * 2020-05-29 2021-12-02 北京小米移动软件有限公司南京分公司 Image alignment method and apparatus, electronic device, and storage medium
CN114913071A (en) * 2022-05-16 2022-08-16 扬州大学 Underwater image splicing method integrating feature point matching of brightness region information
CN115471682A (en) * 2022-09-13 2022-12-13 杭州电子科技大学 Image matching method based on SIFT fusion ResNet50

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9977978B2 (en) * 2011-11-14 2018-05-22 San Diego State University Research Foundation Image station matching, preprocessing, spatial registration and change detection with multi-temporal remotely-sensed imagery
KR102463169B1 (en) * 2015-09-07 2022-11-04 삼성전자주식회사 Method and apparatus for eye tracking


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on High Dynamic Range Image Synthesis Algorithms for Dynamic Scenes; Zhang Yuhe; China Master's Theses Full-text Database, Engineering Science and Technology I (No. 2); pp. I138-1460 *

Also Published As

Publication number Publication date
CN115953332A (en) 2023-04-11

Similar Documents

Publication Publication Date Title
CN109785291B (en) Lane line self-adaptive detection method
CN110827200B (en) Image super-resolution reconstruction method, image super-resolution reconstruction device and mobile terminal
US8401333B2 (en) Image processing method and apparatus for multi-resolution feature based image registration
CN111080529A (en) Unmanned aerial vehicle aerial image splicing method for enhancing robustness
CN107945111B (en) Image stitching method based on SURF (speeded up robust features) feature extraction and CS-LBP (local binary Pattern) descriptor
CN111340701B (en) Circuit board image splicing method for screening matching points based on clustering method
CN108961286B (en) Unmanned aerial vehicle image segmentation method considering three-dimensional and edge shape characteristics of building
TWI639136B (en) Real-time video stitching method
WO2010095460A1 (en) Image processing system, image processing method, and image processing program
Lee et al. Accurate registration using adaptive block processing for multispectral images
CN113723399A (en) License plate image correction method, license plate image correction device and storage medium
CN112163995A (en) Splicing generation method and device for oversized aerial photographing strip images
Lin et al. Image stitching by disparity-guided multi-plane alignment
CN113793266A (en) Multi-view machine vision image splicing method, system and storage medium
CN111553927B (en) Checkerboard corner detection method, detection system, computer device and storage medium
JPH10149449A (en) Picture division method, picture identification method, picture division device and picture identification device
CN115953332B (en) Dynamic image fusion brightness adjustment method, system and electronic equipment
CN113642397A (en) Object length measuring method based on mobile phone video
US20120038785A1 (en) Method for producing high resolution image
CN112435283A (en) Image registration method, electronic device and computer-readable storage medium
US20070280555A1 (en) Image registration based on concentric image partitions
Rong et al. Mosaicing of microscope images based on SURF
CN111951295A (en) Method and device for determining flight trajectory based on polynomial fitting high precision and electronic equipment
JP5928465B2 (en) Degradation restoration system, degradation restoration method and program
CN115829943A (en) Image difference region detection method based on super-pixel segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant