CN116109540B - Image registration fusion method and system based on particle swarm optimization gray curve matching - Google Patents

Image registration fusion method and system based on particle swarm optimization gray curve matching

Info

Publication number
CN116109540B
CN116109540B (application CN202310279927.2A)
Authority
CN
China
Prior art keywords
image
infrared
camera
visible light
gray
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310279927.2A
Other languages
Chinese (zh)
Other versions
CN116109540A (en)
Inventor
赵砚青
韩颖颖
徐鹏翱
王飞
朱言庆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhiyang Innovation Technology Co Ltd
Original Assignee
Zhiyang Innovation Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhiyang Innovation Technology Co Ltd filed Critical Zhiyang Innovation Technology Co Ltd
Priority to CN202310279927.2A priority Critical patent/CN116109540B/en
Publication of CN116109540A publication Critical patent/CN116109540A/en
Application granted granted Critical
Publication of CN116109540B publication Critical patent/CN116109540B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/004 Artificial life, i.e. computing arrangements simulating life
    • G06N3/006 Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/337 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/35 Determination of transform parameters for the alignment of images, i.e. image registration using statistical methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10048 Infrared image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image registration fusion method and system based on particle swarm optimization gray curve matching, relating to the technical field of image processing. The method comprises the following steps: collecting images; performing optical axis correction on the acquired images and aligning the scales of the visible light image and the infrared image; performing particle swarm optimization gray curve matching on the acquired images to complete image pixel alignment; and performing dual-light image fusion on the acquired images and outputting the final image. The beneficial effect of the invention is that the method improves the registration effect while remaining simple to compute.

Description

Image registration fusion method and system based on particle swarm optimization gray curve matching
Technical Field
The invention relates to the technical field of image processing, in particular to an image registration fusion method and an image registration fusion system based on particle swarm optimization gray curve matching.
Background
Infrared thermal imaging and visible light imaging are two important imaging modalities with significant applications in the field of power-line inspection. A visible light image is formed from light reflected by the target; it has high imaging resolution, provides rich and clear texture information, carries a large amount of information, and helps a model understand the scene and the target. However, under poor lighting (such as at night or in dense fog) or when the view is obstructed, the visible light sensor has difficulty clearly capturing the features of key targets.
Because different types of sensors differ in imaging mechanism and application scene, the information they acquire is naturally complementary. An infrared image distinguishes the target from the background based on differences in thermal radiation, which works well in all weather conditions and at any time of day or night. However, limited by hardware conditions and application environments, infrared images suffer severe loss of appearance features, blurring, and low resolution. In contrast, visible light images, which match the human visual system, provide texture details with high spatial resolution and clarity. Fusing the two images therefore enriches the image information, improves the image resolution, and compensates for the inability of a single sensor to fully describe a given scene. The prerequisite for image fusion is that the images are brought into a consistent state, namely image registration.
Dual-light cameras can be divided by structure into common-optical-axis cameras and split-optical-axis cameras, but a common-optical-axis system suffers from high structural design difficulty and severe energy loss in long-distance detection. Compared with a common-optical-axis system, the infrared and visible light channels of a split-optical-axis system have independent lenses and focal plane arrays, which effectively avoids the severe long-distance energy loss caused by sharing one optical axis. However, because there is a center-to-center distance between the two lenses of a split-optical-axis system, parallax exists between the captured infrared and visible light images, and this parallax changes with the scene: the closer the captured scene is to the camera, the larger the parallax. As a result, a single set of parameters cannot eliminate the registration error caused by parallax.
Most existing methods register infrared and visible light images by feature matching. However, limited by hardware conditions and application environments, infrared images often suffer from blurred details and heavy noise, and the appearance features of the target are largely lost. Because of the imaging differences between visible light and infrared images, their features also differ greatly. It is therefore difficult for feature-matching approaches to find key-point pairs that can be matched, which leads to low registration accuracy between infrared and visible light images.
Disclosure of Invention
In order to solve the problem of low registration accuracy between infrared and visible light images in existing image registration methods, the invention provides an image registration fusion method and system based on particle swarm optimization gray curve matching, which improves registration accuracy while remaining computationally simple.
The above object of the invention is achieved by the following technical solution:
the image registration fusion method based on particle swarm optimization gray curve matching comprises the following steps:
s1: collecting an image;
s2: correcting an optical axis of the acquired image and aligning the scales of the visible light image and the infrared image;
s3: performing particle swarm optimization gray curve matching on the acquired image to finish image pixel alignment;
s4: and (5) performing double-light image fusion on the acquired images and outputting final images.
Preferably, collecting an image specifically includes: shooting an infrared image and a corresponding visible light image of the target object using a vertical split-optical-axis dual-light device.
Preferably, performing optical axis correction on the acquired image and aligning the scales of the visible light image and the infrared image specifically comprises the following steps:
S21: obtaining the intrinsic parameters of the infrared camera and the visible light camera by performing monocular camera calibration on the infrared camera and the visible light camera in the vertical split-optical-axis dual-light device;
S22: correcting the horizontal deviation between the optical axis of the infrared camera and the optical axis of the visible light camera introduced by the production process by performing joint calibration on the infrared camera and the visible light camera in the vertical split-optical-axis dual-light device;
S23: using the camera intrinsic parameters of the infrared camera and the visible light camera obtained in step S21, calculating the field angles of the infrared camera and the visible light camera respectively;
S24: determining the scaling of the infrared image and the visible light image in the horizontal and vertical directions using the field angles of the infrared camera and the visible light camera obtained in step S23;
S25: completing the scale alignment in the registration process of the infrared image and the visible light image.
Preferably, the field angle of a camera is determined by the focal length of the camera and the camera's sensor, and is calculated as follows:
Horizontal direction: FOV_x = 2·arctan(w / (2f))
Vertical direction: FOV_y = 2·arctan(h / (2f))
wherein w and h represent the width and height of the sensor, respectively, and f represents the focal length of the camera.
Preferably, the scaling factors in the horizontal direction and the vertical direction are K_x and K_y respectively, specifically:
wherein IFOV_x is the field angle of the infrared camera in the horizontal direction;
IFOV_y is the field angle of the infrared camera in the vertical direction;
VFOV_x is the field angle of the visible light camera in the horizontal direction;
VFOV_y is the field angle of the visible light camera in the vertical direction.
Preferably, performing particle swarm optimization gray curve matching on the acquired image to complete image pixel alignment specifically includes the following steps:
S31: performing blurring processing on the visible light image using a Gaussian blur operator, and performing edge extraction on the infrared image and the visible light image using a Canny operator to generate a high-frequency infrared contour image and a visible light contour image, respectively;
S32: dividing the infrared contour image and the visible light contour image into C equal parts by columns, and extracting δ pixels at each equal-division node, forming an m1 × C × δ infrared image gray value array and an m2 × C × δ visible light gray value array;
S33: for the infrared contour image and the visible light contour image, averaging the m1 × δ and m2 × δ vectors at each equal-division node to obtain an m1 × 1 infrared image gray distribution column vector and an m2 × 1 visible light image gray distribution column vector at each node;
S34: performing polynomial fitting on the infrared and visible light contour image gray distribution column vectors at each equal-division node, and replacing the original gray distribution column vectors with the polynomial-fitted data;
S35: performing gray distribution column vector curve matching between the visible light contour image gray distribution column vectors and the infrared contour image gray distribution column vectors of step S34, where the matching function E is constructed as follows:
wherein I is the infrared contour image, V is the visible light contour image, and VI is the image obtained by superposing the infrared contour image and the visible light contour image;
S36: solving for the optimal solution dy that minimizes the function E in step S35 through particle swarm optimization.
Preferably, performing dual-light image fusion on the acquired images and outputting the final image specifically includes: using the transformation parameters for completing pixel alignment of the infrared image and the visible light image obtained in step S3, performing weighted fusion of the infrared image and the visible light image, and outputting a heterogeneous-source fused image.
The image registration fusion system based on particle swarm optimization gray curve matching comprises an image acquisition module, an image processing module and an image output module, wherein the image acquisition module is in data connection with the image processing module, and the image processing module is in data connection with the image output module.
Preferably, the image acquisition module is configured to collect images; the image processing module is configured to perform optical axis correction on the acquired images, align the scales of the visible light image and the infrared image, and perform particle swarm optimization gray curve matching on the acquired images to complete image pixel alignment; and the image output module is configured to perform dual-light image fusion on the acquired images and output the final image.
Compared with the prior art, the invention has the beneficial effects that:
1. Gray distribution matching is performed based on one-dimensional vectors, so the computational complexity is low.
2. The data are optimized by polynomial fitting, which alleviates the feature differences caused by heterogeneous-source imaging and improves the registration effect.
3. The registration process is effectively constrained by constructing a fitness function based on posterior information, which reduces the search space of the solution and allows it to converge to the optimal solution faster and more easily.
4. The optimal registration parameters are searched by particle swarm optimization, which requires little computation and is simple to implement.
Drawings
Fig. 1 is a flow chart of the method of the present invention.
Fig. 2 is a graph of the registration fusion result of the present invention.
Detailed Description
The invention will be further illustrated with reference to specific examples. It is to be understood that these examples are illustrative of the present invention and are not intended to limit the scope of the present invention. Further, it will be understood that various changes or modifications may be made by those skilled in the art after reading the teachings of the invention, and such equivalents are intended to fall within the scope of the invention as defined herein.
Examples: image registration fusion method and system based on particle swarm optimization gray curve matching
As shown in fig. 1, an image registration fusion method based on particle swarm optimization gray curve matching includes the following steps:
s1: collecting an image;
s2: correcting an optical axis of the acquired image and aligning the scales of the visible light image and the infrared image;
s3: performing particle swarm optimization gray curve matching on the acquired image to finish image pixel alignment;
s4: and (5) performing double-light image fusion on the acquired images and outputting final images.
In this embodiment, step S1 specifically includes: shooting an infrared image and a corresponding visible light image of power equipment along a short-distance transmission channel (within 10 m) using the vertical split-optical-axis dual-light device.
The step S2 specifically comprises the following steps:
s21: the method comprises the steps of obtaining an infrared camera internal reference and a visible light camera internal reference by performing monocular camera calibration on the infrared camera and the visible light camera in the vertical split-axis dual-light equipment;
s22: the horizontal deviation between the optical axis of the infrared camera and the optical axis of the visible light camera generated by the production process is corrected by carrying out joint calibration on the infrared camera and the visible light camera in the vertical split-optical axis double-light equipment;
s23: the camera internal parameters of the infrared camera and the visible light camera are obtained in the step S21 to respectively calculate the field angles of the infrared camera and the visible light camera, wherein the field angles of the camera are determined by the focal length of the camera and the sensors of the camera, and the calculation mode of the field angles of the camera is as follows:
horizontal direction:
vertical directionAnd (3) the direction is as follows:
wherein, the liquid crystal display device comprises a liquid crystal display device,and->Representing the width and height of the sensor, respectively, < >>Representing the focal length of the camera;
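A minimal Python sketch of the field-angle calculation in step S23 follows; the sensor dimensions and focal length used in the usage example are illustrative values only, not parameters taken from the patent.

```python
import math

def field_of_view(sensor_width_mm: float, sensor_height_mm: float,
                  focal_length_mm: float) -> tuple:
    """Horizontal and vertical field angles (radians): FOV = 2*arctan(d / (2f))."""
    fov_x = 2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm))
    fov_y = 2.0 * math.atan(sensor_height_mm / (2.0 * focal_length_mm))
    return fov_x, fov_y

# Illustrative values for a hypothetical visible-light camera.
vfov_x, vfov_y = field_of_view(5.76, 4.29, 4.0)
print(math.degrees(vfov_x), math.degrees(vfov_y))
```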
s24: determining the scaling of the infrared image and the visible light image in the horizontal direction and the vertical direction by using the angle of view of the infrared image camera and the angle of view of the visible light camera obtained in the step S23And->The method specifically comprises the following steps:
wherein, the liquid crystal display device comprises a liquid crystal display device,is the angle of view of the infrared camera in the horizontal direction;
is the angle of view of the infrared camera in the vertical direction;
is the angle of view of the visible light camera in the horizontal direction;
is the angle of view of the visible light camera in the vertical direction;
s25: and (5) completing the scale alignment in the registering process of the infrared image and the visible light image.
The step S3 specifically comprises the following steps:
s31: performing blurring processing on the visible light image by using a Gaussian blurring operator, and performing edge extraction on the infrared image and the visible light image by using a Canny operator to respectively generate a high-frequency infrared contour image and a visible light contour image;
s32: the infrared contour image and the visible contour image are processed in columnsAliquoting for each nodeRadix seu herba Desmodii Multifloi>A pixel, constitute a->Infrared image gray value array and +.>A visible light gray value array;
s33: for each of the infrared profile image and the visible profile image, dividing the nodesAnd->Vector averaging, obtaining one +_at each aliquoting node>Is an infrared image gray scale distribution column vector of +.>Is a visible light image gray level distribution column vector;
s34: performing polynomial fitting on the infrared contour image gray level distribution column vector and the visible contour image gray level distribution column vector of each bisection node of the infrared contour image and the visible contour image, and using the data after polynomial fitting to replace the original infrared contour image gray level distribution column vector and visible contour image gray level distribution column vector;
s35: and (3) carrying out gray distribution column vector curve matching on the gray distribution column vector of the visible light contour image and the gray distribution column vector of the infrared contour image in the step (S34), wherein a matching function is constructed as follows:
wherein, the liquid crystal display device comprises a liquid crystal display device,for infrared contour image +.>For visible light profile image, +.>An image representing an infrared profile image and a visible profile image of the hob;
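The explicit form of the matching function E is not reproduced in this text; the sketch below therefore uses a sum-of-squared-differences cost between the vertically shifted infrared gray curves and the visible-light gray curves as a stand-in objective, purely for illustration.

```python
import numpy as np

def match_cost(dy, ir_curves, vis_curves):
    """Stand-in for E(dy): shift the infrared gray-distribution curves vertically
    by dy pixels and measure the squared difference over the overlapping rows."""
    dy = int(round(dy))
    h = min(ir_curves.shape[0], vis_curves.shape[0])
    overlap = h - abs(dy)
    if overlap <= 0:
        return float("inf")
    if dy >= 0:
        ir_part, vis_part = ir_curves[:overlap, :], vis_curves[dy:dy + overlap, :]
    else:
        ir_part, vis_part = ir_curves[-dy:-dy + overlap, :], vis_curves[:overlap, :]
    return float(np.mean((ir_part - vis_part) ** 2))
```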
s36: solving the function described in step S35 through particle swarm optimizationOptimal solution of minimum->In the particle swarm algorithm, the main parameters of the algorithm are as follows: d dimensional space, N particles, K iterations, each particle representing a solution, there are:
first, theThe positions of the individual particles are:
first, theThe velocity of the individual particles is (the size and direction of movement of the particles):
first, theOptimal position (individual optimal solution) searched by each particle:
optimum position searched by population (population optimum solution):
first, theThe optimal position adaptation value searched by the individual particles is +.>
First, theThe adaptation value of the optimal position of the group search found by the iteration is +.>
First, theThe updated formula of the distance and direction of the next iterative movement of the individual particles is as follows:
wherein, the liquid crystal display device comprises a liquid crystal display device,is inertial weight, ++>For individual learning factors->Is a group learning factor;
first, theThe next step of the position updating formula of the individual particles is as follows:
stopping iteration when the group history optimal adaptation value reaches convergence, and outputting an optimal solutionNamely, the translation transformation parameters for completing pixel alignment of the infrared image and the visible light image;
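A minimal sketch of the particle swarm search in step S36, specialized to the one-dimensional translation dy; the swarm size, iteration count, inertia weight, learning factors, and search range are illustrative assumptions, and `cost` stands for the matching function E (for example, the match_cost stand-in sketched above).

```python
import random

def pso_minimize(cost, lower, upper, num_particles=30, num_iters=50,
                 w=0.7, c1=1.5, c2=1.5):
    """One-dimensional particle swarm optimization: returns the dy minimizing cost(dy)."""
    pos = [random.uniform(lower, upper) for _ in range(num_particles)]
    vel = [0.0] * num_particles
    pbest = pos[:]                                  # individual best positions
    pbest_val = [cost(p) for p in pos]              # their fitness values
    g = min(range(num_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g], pbest_val[g]       # swarm best position and fitness

    for _ in range(num_iters):
        for i in range(num_particles):
            r1, r2 = random.random(), random.random()
            vel[i] = (w * vel[i] + c1 * r1 * (pbest[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i]))             # velocity update
            pos[i] = min(max(pos[i] + vel[i], lower), upper)    # position update, clamped
            val = cost(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i], val
    return gbest

# Usage sketch: dy = pso_minimize(lambda d: match_cost(d, ir_curves, vis_curves), -80, 80)
```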
the step S4 specifically includes performing weighted fusion on the infrared image and the visible light image according to the transformation parameters for completing pixel alignment of the infrared image and the visible light image obtained in the step S3, so as to obtain a heterologous fusion image (as shown in fig. 2).

Claims (7)

1. The image registration fusion method based on particle swarm optimization gray curve matching is characterized by comprising the following steps of:
s1: collecting an image;
s2: correcting an optical axis of the acquired image and aligning the scales of the visible light image and the infrared image;
s3: performing particle swarm optimization gray curve matching on the acquired image to finish image pixel alignment;
s4: double-light image fusion is carried out on the acquired images, and a final image is output;
the image pixel alignment completion method based on particle swarm optimization gray curve matching for the acquired image specifically comprises the following steps:
s31: performing blurring processing on the visible light image by using a Gaussian blurring operator, and performing edge extraction on the infrared image and the visible light image by using a Canny operator to respectively generate a high-frequency infrared contour image and a visible light contour image;
s32: dividing the infrared contour image and the visible contour image into C equal parts according to the columns, and dividing each node into three equal partsExtracting delta pixels to form m 1 X C x delta infrared image gray value array and m 2 X C x delta visible light gray value array;
s33: m at each bisecting node for the infrared profile image and the visible profile image 1 X delta and m 2 Averaging the x delta vectors to obtain an m at each bisecting node 1 X 1 infrared image gray scale distribution column vector and m 2 X 1 visible light image gray scale distribution column vector;
s34: performing polynomial fitting on the infrared contour image gray level distribution column vector and the visible contour image gray level distribution column vector of each bisection node of the infrared contour image and the visible contour image, and using the data after polynomial fitting to replace the original infrared contour image gray level distribution column vector and visible contour image gray level distribution column vector;
s35: and (3) carrying out gray distribution column vector curve matching on the gray distribution column vector of the visible light contour image and the gray distribution column vector of the infrared contour image in the step (S34), wherein a matching function is constructed as follows:
wherein I is an infrared contour image, V is a visible light contour image, and VI is an image obtained by superposing the infrared contour image and the visible light contour image;
s36: and solving the optimal solution dy of the minimum value of the function E in the step S35 through particle swarm optimization.
2. The particle swarm optimization gray curve matching-based image registration fusion method according to claim 1, wherein collecting an image specifically comprises: shooting an infrared image and a corresponding visible light image of the target object using a vertical split-optical-axis dual-light device.
3. The particle swarm optimization gray curve matching-based image registration fusion method according to claim 1 or 2, wherein performing optical axis correction on the acquired image and aligning the scales of the visible light image and the infrared image specifically comprises the following steps:
S21: obtaining the intrinsic parameters of the infrared camera and the visible light camera by performing monocular camera calibration on the infrared camera and the visible light camera in the vertical split-optical-axis dual-light device;
S22: correcting the horizontal deviation between the optical axis of the infrared camera and the optical axis of the visible light camera introduced by the production process by performing joint calibration on the infrared camera and the visible light camera in the vertical split-optical-axis dual-light device;
S23: using the camera intrinsic parameters of the infrared camera and the visible light camera obtained in step S21, calculating the field angles of the infrared camera and the visible light camera respectively;
S24: determining the scaling of the infrared image and the visible light image in the horizontal and vertical directions using the field angles of the infrared camera and the visible light camera obtained in step S23;
S25: completing the scale alignment in the registration process of the infrared image and the visible light image.
4. The particle swarm optimization gray curve matching-based image registration fusion method according to claim 3, wherein the field angle of the camera is determined by the focal length of the camera and the sensor of the camera, and the field angle of the camera is calculated as follows:
horizontal direction:
vertical direction:
where w and h represent the width and height of the sensor, respectively, and f represents the focal length of the camera.
5. The particle swarm optimization gray curve matching-based image registration fusion method according to claim 3, wherein the scaling factors in the horizontal direction and the vertical direction are K_x and K_y respectively, specifically:
wherein IFOV_x is the field angle of the infrared camera in the horizontal direction;
IFOV_y is the field angle of the infrared camera in the vertical direction;
VFOV_x is the field angle of the visible light camera in the horizontal direction;
VFOV_y is the field angle of the visible light camera in the vertical direction.
6. The particle swarm optimization gray curve matching-based image registration fusion method according to claim 1, wherein performing dual-light image fusion on the acquired image and outputting the final image specifically comprises: using the transformation parameters for completing pixel alignment of the infrared image and the visible light image obtained in step S3, performing weighted fusion of the infrared image and the visible light image, and outputting a heterogeneous-source fused image.
7. An image registration fusion system based on particle swarm optimization gray curve matching, characterized by comprising a computer program capable of executing the method of any one of claims 1-6.
CN202310279927.2A 2023-03-22 2023-03-22 Image registration fusion method and system based on particle swarm optimization gray curve matching Active CN116109540B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310279927.2A CN116109540B (en) 2023-03-22 2023-03-22 Image registration fusion method and system based on particle swarm optimization gray curve matching

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310279927.2A CN116109540B (en) 2023-03-22 2023-03-22 Image registration fusion method and system based on particle swarm optimization gray curve matching

Publications (2)

Publication Number Publication Date
CN116109540A CN116109540A (en) 2023-05-12
CN116109540B true CN116109540B (en) 2023-07-18

Family

ID=86265703

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310279927.2A Active CN116109540B (en) 2023-03-22 2023-03-22 Image registration fusion method and system based on particle swarm optimization gray curve matching

Country Status (1)

Country Link
CN (1) CN116109540B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117056377B (en) * 2023-10-09 2023-12-26 长沙军顺航博科技有限公司 Infrared image processing method, system and storage medium based on graph theory

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102915524A (en) * 2012-09-14 2013-02-06 武汉大学 Method for eliminating shadow based on match of inside and outside check lines of shadow area
CN110443776A (en) * 2019-08-07 2019-11-12 中国南方电网有限责任公司超高压输电公司天生桥局 A kind of Registration of Measuring Data fusion method based on unmanned plane gondola

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105160641B (en) * 2015-08-04 2018-05-29 成都多贝科技有限责任公司 X-ray welded seam area extracting method based on image procossing
CN106097407A (en) * 2016-05-30 2016-11-09 清华大学 Image processing method and image processing apparatus
CN107665486B (en) * 2017-09-30 2020-04-17 深圳绰曦互动科技有限公司 Automatic splicing method and device applied to X-ray images and terminal equipment

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102915524A (en) * 2012-09-14 2013-02-06 武汉大学 Method for eliminating shadow based on match of inside and outside check lines of shadow area
CN110443776A (en) * 2019-08-07 2019-11-12 中国南方电网有限责任公司超高压输电公司天生桥局 A kind of Registration of Measuring Data fusion method based on unmanned plane gondola

Also Published As

Publication number Publication date
CN116109540A (en) 2023-05-12

Similar Documents

Publication Publication Date Title
Zhu et al. Camvox: A low-cost and accurate lidar-assisted visual slam system
CN105096329B (en) Method for accurately correcting image distortion of ultra-wide-angle camera
Chatterjee et al. Algorithms for coplanar camera calibration
CN102156969B (en) Processing method for correcting deviation of image
CN106683139A (en) Fisheye-camera calibration system based on genetic algorithm and image distortion correction method thereof
WO2021098080A1 (en) Multi-spectral camera extrinsic parameter self-calibration algorithm based on edge features
WO2021098081A1 (en) Trajectory feature alignment-based multispectral stereo camera self-calibration algorithm
CN110874854B (en) Camera binocular photogrammetry method based on small baseline condition
CN116109540B (en) Image registration fusion method and system based on particle swarm optimization gray curve matching
CN111899164B (en) Image splicing method for multi-focal-segment scene
CN109118429A (en) A kind of medium-wave infrared-visible light multispectral image rapid generation
CN107492080A (en) Exempt from calibration easily monocular lens image radial distortion antidote
CN112270698A (en) Non-rigid geometric registration method based on nearest curved surface
Wang et al. Corners positioning for binocular ultra-wide angle long-wave infrared camera calibration
CN113936047A (en) Dense depth map generation method and system
Liu et al. A general relative radiometric correction method for vignetting and chromatic aberration of multiple CCDs: Take the Chinese series of Gaofen satellite Level-0 images for example
CN108898585B (en) Shaft part detection method and device
CN112396687B (en) Binocular stereoscopic vision three-dimensional reconstruction system and method based on infrared micro-polarizer array
Xianzhi Research on kinect calibration and depth error compensation based on BP neural network
CN112700504A (en) Parallax measurement method of multi-view telecentric camera
CN103491361B (en) A kind of method improving sparse corresponding points images match precision and stereo image correction
Zhu et al. A stereo vision depth estimation method of binocular wide-field infrared camera
CN112199815A (en) Method for reducing influence of temperature on camera internal parameters
Guan et al. An improved fast camera calibration method for mobile terminals
Duan et al. A method of camera calibration based on image processing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant