CN115393406B - Image registration method based on twin convolution network

Image registration method based on twin convolution network

Info

Publication number
CN115393406B
CN115393406B
Authority
CN
China
Prior art keywords
roi
image
images
twin
region
Prior art date
Legal status
Active
Application number
CN202210985592.1A
Other languages
Chinese (zh)
Other versions
CN115393406A (en)
Inventor
范强
邹尔博
李忠
张瑞文
何亦舟
Current Assignee
China Shipbuilding Intelligent Control Technology Wuhan Co ltd
717Th Research Institute of CSSC
Original Assignee
China Shipbuilding Intelligent Control Technology Wuhan Co ltd
717Th Research Institute of CSSC
Priority date
Filing date
Publication date
Application filed by China Shipbuilding Intelligent Control Technology Wuhan Co ltd and 717Th Research Institute of CSSC
Priority to CN202210985592.1A
Publication of CN115393406A
Application granted
Publication of CN115393406B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20092 Interactive image processing based on input by user
    • G06T2207/20104 Interactive definition of region of interest [ROI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20172 Image enhancement details
    • G06T2207/20192 Edge enhancement; Edge preservation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image registration method based on a twin (Siamese) convolutional network. A first and a second frame are selected from an input sequence of consecutive frames and cropped to obtain regions ROI_1 and ROI_2; the two regions are registered by the twin convolutional network and the peak signal-to-noise ratio (PSNR) of the ROI_1 and ROI_2 regions is computed; the two registered images are superimposed to serve as a reference image; a third frame is then input, its PSNR is computed and compared, and the process repeats until a threshold condition is met. The method is simple and convenient to compute, performs well in applications sensitive to the target's contour and edges, and can be applied to image enhancement in equipment for photoelectric reconnaissance and detection.

Description

Image registration method based on twin convolution network
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to an image registration method based on a twin convolution network.
Background
In target tracking, for a known dim target, the signal-to-noise ratio of the target is typically raised by centering the optical axis of the optoelectronic system on the target while increasing the exposure time. However, this approach requires stable real-time tracking to keep the optical axis aligned; otherwise background interference increases, target information is weakened, and valuable target information may even be submerged entirely.
Based on this analysis, target information can also be enhanced by multi-frame accumulation, but the number of accumulated frames must be adjusted adaptively: if a manually set frame count is too small, a weak target may not be highlighted, while if it is too large, the output frame rate of the optoelectronic system drops and the real-time performance of the task degrades. Adaptive multi-frame accumulation in turn requires image registration.
Image registration based on extracting corner points from the image is a mature technique, with a complete theoretical foundation, stable algorithms, and good real-time performance. However, when the target is in a complex background, and particularly when an infrared dim and weak target is in a complex background, the accuracy of corner computation drops sharply, and in severe cases registration fails.
Disclosure of Invention
To address the shortcomings of corner-based registration, the present application provides an image registration method based on a twin convolutional network.
The technical solution adopted to solve the technical problem is as follows: an image registration method based on a twin convolutional network, comprising the following steps:
S1, taking as input consecutive multi-frame images of a region where a known target may appear; in the first frame, selecting the region that may contain the target and cropping a region 1.5 times its size, to leave margin for subsequent registration and cropping, as the ROI_1 region;
S2, cropping the input second frame image in the same way to obtain the ROI_2 region;
S3, twin convolutional network image registration: using the cross-entropy loss function as the objective function for training the twin convolutional network, inputting the ROI_1 and ROI_2 images into the network, using the feature vectors output by the network as key points to construct feature descriptors of the images, obtaining the final matching point pairs between the two ROI images with the industry-standard RANSAC algorithm, and finally computing a transformation matrix by the least-squares algorithm; registration of the ROI_2 region to the ROI_1 region is achieved through this transformation matrix;
The twin convolutional network has two sub-networks of identical structure that share the same convolution kernel parameters. Each sub-network is formed by connecting convolutional layers (conv), a network concatenation module (concat), and a max-pooling module (maxpool) in series, where each convolutional layer conv consists of several 3×3 convolution kernels, the concatenation module concat directly concatenates the two conv layers to be joined without channel-wise superposition or computation, and the max-pooling module maxpool finally outputs the feature vector;
S4, calculating the peak signal-to-noise ratio (PSNR) of the ROI_1 and ROI_2 regions:

$$\mathrm{PSNR} = 10\cdot\log_{10}\!\left(\frac{MAX_I^2}{MSE}\right),\qquad MSE=\frac{1}{mn}\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\bigl[I(i,j)-K(i,j)\bigr]^2$$

where MAX_I is the maximum image gray value (255 for 8-bit samples), m and n are the width and height of the ROI region, and I(i,j) and K(i,j) are the gray values at coordinates (i,j) in the ROI_1 and ROI_2 regions respectively; PSNR is expressed in dB, and a larger value indicates less distortion;
S5, superimposing the two registered images of the ROI_1 and ROI_2 regions, and taking the result as the reference image for the next operation;
S6, setting the PSNR value at this point as the basic threshold T_base; taking the reference image superimposed in step S5 as the reference input and the third input frame as the image to be processed, obtaining a new PSNR value T according to steps S1-S5 and comparing it with T_base; if T − T_base ≥ 2 dB, go to step S7; otherwise, input a fourth frame and repeat steps S1-S5 until the condition is met;
S7, the number of accumulated frames used to enhance the target has thus been selected adaptively; the processed image is then denoised to further improve the signal-to-noise ratio, yielding the registered image.
Further, the image superposition in step S5 is the summation of the gray values at corresponding pixel positions of the ROI_1 and ROI_2 regions.
Further, in step S7, to balance computational real-time performance against effective preservation of the target's contour and edge information, a non-local mean denoising method is selected for denoising.
Still further, step S7 specifically comprises: for each pixel, selecting the 8 surrounding pixels (its 8-neighborhood), computing the Euclidean distance between each neighborhood pixel and the current pixel, using the distance values as weights, taking the Gaussian-weighted average of the gray values of the 8 pixels as the new estimate of the current pixel, traversing all pixels of the image to be denoised in turn, and outputting the new pixel estimates as the denoised image.
The beneficial effects of the invention are as follows: compared with the traditional corner-based registration method, this registration method achieves better accuracy against complex backgrounds. Non-local mean denoising is selected because it removes noise while preserving texture detail; adaptive multi-frame accumulation with threshold judgment adjusts automatically according to the accumulation result, avoiding the unstable effect of a manually set accumulation frame count; finally, non-local mean filtering is used to denoise the accumulated image.
Drawings
Fig. 1 is a schematic diagram of the network architecture of the twin convolutional network of the present invention.
Detailed Description
In practical situational-awareness applications, distant important targets must be detected. Meanwhile, to reduce the probability of being found and detected, most adversaries adopt evasion or camouflage. As a result, targets of interest in the acquired optoelectronic image appear as dim and weak objects. Image enhancement must therefore be applied to dim and weak targets to increase the detection probability and remove false alarms. This is also a prerequisite for improving other optoelectronic system functions such as target detection and target tracking.
A twin (Siamese) neural network for similarity measurement takes a convolutional neural network as its backbone and outputs the information of highest similarity between the two frames: a function maps the inputs into a target space, where similarity is compared with a distance function (such as the Euclidean distance). During training, the loss is minimized for sample pairs from the same class and maximized for pairs from different classes, which makes this architecture particularly suitable for image registration against complex backgrounds. A minimal sketch of such a pair loss follows.
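The text specifies a cross-entropy objective over pair labels but not its exact form, so the following Python sketch is only one plausible reading: the distance between the two branch features is mapped to a match logit and scored with binary cross-entropy (the mapping and all names here are illustrative assumptions, not the patent's definitive formulation).

```python
import torch
import torch.nn.functional as F

def pair_loss(feat1: torch.Tensor, feat2: torch.Tensor,
              same: torch.Tensor) -> torch.Tensor:
    """Cross-entropy on a similarity logit derived from feature distance.

    feat1/feat2: (B, D) feature vectors from the two shared-weight branches.
    same: (B,) float labels, 1.0 for same-region pairs, 0.0 otherwise.
    """
    dist = torch.norm(feat1 - feat2, dim=1)  # Euclidean distance in feature space
    logit = -dist                            # closer pair => higher match score
    # minimized for same-class pairs, pushed apart for different-class pairs
    return F.binary_cross_entropy_with_logits(logit, same)
```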
The invention is described in further detail below with reference to the drawings and specific embodiments. The following examples serve only to illustrate the invention and are not to be construed as limiting it.
The invention discloses an image registration method based on a twin convolutional network, which serves as a multi-frame-accumulation weak-target detection method built on twin-network registration and comprises the following steps:
S1, taking as input consecutive multi-frame images of a region where a known target may appear; in the first frame, selecting the region that may contain the target and cropping a region 1.5 times its size, to leave margin for subsequent registration and cropping, as the ROI_1 region;
S2, cropping the input second frame image in the same way to obtain the ROI_2 region;
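As a minimal illustration of the cropping in steps S1-S2 (a sketch assuming grayscale frames as NumPy arrays; the helper name, the (x, y, w, h) box convention, and the clamping to the frame border are illustrative choices, while the 1.5× expansion factor comes from the text):

```python
import numpy as np

def crop_roi(frame: np.ndarray, box, scale: float = 1.5) -> np.ndarray:
    """Crop a region `scale` times the selected target box, clamped to the frame."""
    x, y, w, h = box                            # selected target region (pixels)
    cx, cy = x + w / 2.0, y + h / 2.0           # center of the selected region
    half_w, half_h = w * scale / 2.0, h * scale / 2.0
    x0 = max(int(round(cx - half_w)), 0)
    y0 = max(int(round(cy - half_h)), 0)
    x1 = min(int(round(cx + half_w)), frame.shape[1])
    y1 = min(int(round(cy + half_h)), frame.shape[0])
    return frame[y0:y1, x0:x1]

# roi_1 = crop_roi(frame_1, target_box)   # step S1
# roi_2 = crop_roi(frame_2, target_box)   # step S2
```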
S3, twin convolutional network image registration: the cross-entropy loss function is used as the objective function for training the twin convolutional network, and images of the optoelectronic system's application scene are collected as far as possible, with ROI_1 and ROI_2 regions separated by a certain time interval used as image pairs to train the network. The ROI_1 and ROI_2 images are input into the twin convolutional network, the feature vectors output by the network serve as key points for constructing feature descriptors of the images, the industry-standard RANSAC algorithm yields the final matching point pairs between the two ROI images, and a transformation matrix is computed by the least-squares algorithm; registration of the ROI_2 region to the ROI_1 region is achieved through this transformation matrix;
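The matching and transform-estimation stage of S3 might look as follows (a non-authoritative sketch: it assumes the network has already produced keypoint coordinates kp1, kp2 and descriptor arrays desc1, desc2; OpenCV's estimateAffine2D is used here, which combines RANSAC outlier rejection with a least-squares fit on the inliers, matching the RANSAC-plus-least-squares pipeline of the text):

```python
import cv2
import numpy as np

def register_rois(kp1, desc1, kp2, desc2):
    """Match descriptors, reject outliers with RANSAC, fit the transform.

    kp1/kp2: (N, 2) arrays of keypoint coordinates from the two branches.
    desc1/desc2: (N, D) float32 descriptor arrays built from the feature vectors.
    Returns a 2x3 matrix mapping ROI_2 coordinates onto ROI_1, or None.
    """
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = matcher.match(desc1.astype(np.float32), desc2.astype(np.float32))
    if len(matches) < 3:
        return None                      # too few pairs to fit a transform
    pts1 = np.float32([kp1[m.queryIdx] for m in matches])
    pts2 = np.float32([kp2[m.trainIdx] for m in matches])
    # RANSAC keeps the consistent pairs; the matrix is least-squares refined.
    matrix, _inliers = cv2.estimateAffine2D(
        pts2, pts1, method=cv2.RANSAC, ransacReprojThreshold=3.0)
    return matrix

# aligned = cv2.warpAffine(roi_2, matrix, (roi_1.shape[1], roi_1.shape[0]))
```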
As shown in Fig. 1, the twin convolutional network has two sub-networks of identical structure that share the same convolution kernel parameters. Each sub-network is formed by connecting convolutional layers (conv), a network concatenation module (concat), and a max-pooling module (maxpool) in series, where each convolutional layer conv consists of several 3×3 convolution kernels, the concatenation module concat directly concatenates the two conv layers to be joined without channel-wise superposition or computation, and the max-pooling module maxpool finally outputs the feature vector;
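Since Fig. 1 is described only qualitatively, the following PyTorch sketch is an assumption-laden reading of it: the channel counts and depth are illustrative, and only the shared weights, 3×3 convolutions, direct concatenation of two conv outputs, and the final max pooling come from the text.

```python
import torch
import torch.nn as nn

class SubNet(nn.Module):
    """One branch of the twin network (both branches share these weights)."""
    def __init__(self, in_ch: int = 1):
        super().__init__()
        self.conv1 = nn.Sequential(nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU())
        self.conv2 = nn.Sequential(nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a = self.conv1(x)
        b = self.conv2(a)
        # concat module: direct channel concatenation of the two conv layers,
        # with no additional per-channel superposition or computation
        cat = torch.cat([a, b], dim=1)
        return self.pool(cat)           # maxpool finally yields the features

class TwinNet(nn.Module):
    """Two structurally identical sub-networks with shared convolution kernels."""
    def __init__(self):
        super().__init__()
        self.branch = SubNet()          # one module instance => shared parameters

    def forward(self, roi_1: torch.Tensor, roi_2: torch.Tensor):
        return self.branch(roi_1), self.branch(roi_2)
```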
S4, calculating the peak signal-to-noise ratio (PSNR) of the ROI_1 and ROI_2 regions:

$$\mathrm{PSNR} = 10\cdot\log_{10}\!\left(\frac{MAX_I^2}{MSE}\right),\qquad MSE=\frac{1}{mn}\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\bigl[I(i,j)-K(i,j)\bigr]^2$$

where MAX_I is the maximum image gray value (255 for 8-bit samples), m and n are the width and height of the ROI region, and I(i,j) and K(i,j) are the gray values at coordinates (i,j) in the ROI_1 and ROI_2 regions respectively; PSNR is expressed in dB, and a larger value indicates less distortion;
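Computed directly from this definition, a small NumPy sketch of step S4 (assuming two registered, equal-sized 8-bit grayscale ROIs):

```python
import numpy as np

def psnr(roi_1: np.ndarray, roi_2: np.ndarray, max_i: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two equal-sized gray images."""
    diff = roi_1.astype(np.float64) - roi_2.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")             # identical images: no distortion
    return 10.0 * np.log10(max_i ** 2 / mse)
```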
S5, superimposing the two registered images of the ROI_1 and ROI_2 regions (summing the gray values at corresponding pixel positions, i.e. adding the gray values at the same coordinates in the two images, for example the pixel gray value at (1, 1) of image A plus that at (1, 1) of image B), and taking the result as the reference image for the next operation;
S6, setting the PSNR value at this point as the basic threshold T_base; taking the reference image superimposed in step S5 as the reference input and the third input frame as the image to be processed, obtaining a new PSNR value T according to steps S1-S5 and comparing it with T_base; if T − T_base ≥ 2 dB, go to step S7; otherwise, input a fourth frame and repeat steps S1-S5 until the condition is met;
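Steps S5-S6 amount to a threshold-controlled accumulation loop. The sketch below ties the earlier sketches together (crop_roi and psnr are the helpers sketched above; register_to is a hypothetical wrapper around the S3 registration that warps an ROI onto the reference; dividing the running sum by the frame count before the PSNR comparison is an added normalization to keep the comparison within the 8-bit range, not something the text specifies):

```python
import numpy as np

def accumulate_until_stable(frames, target_box, margin_db: float = 2.0):
    """Adaptively accumulate registered ROIs until the PSNR gain reaches margin_db."""
    roi_1 = crop_roi(frames[0], target_box).astype(np.float64)
    roi_2 = register_to(roi_1, crop_roi(frames[1], target_box)).astype(np.float64)
    t_base = psnr(roi_1, roi_2)              # S6: basic threshold T_base
    reference = roi_1 + roi_2                # S5: gray-value summation
    count = 2
    for frame in frames[2:]:
        avg = reference / count              # normalization added for comparison
        roi_k = register_to(avg, crop_roi(frame, target_box)).astype(np.float64)
        t = psnr(avg, roi_k)                 # new PSNR value T for this frame
        reference += roi_k
        count += 1
        if t - t_base >= margin_db:          # stop once T - T_base >= 2 dB
            break
    return reference                         # accumulated image, ready for S7
```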
S7, the number of accumulated frames used to enhance the target has thus been selected adaptively, and the processed image must now be denoised to further improve the signal-to-noise ratio. The accumulated image enhances the target information but also increases noise interference, so noise reduction after accumulation is a necessary step for further raising the target signal-to-noise ratio.
For a complex infrared scene, detail such as the target's edge contour must be preserved while background noise is removed.
To balance computational real-time performance against effective preservation of the target's contour and edge information, a non-local mean denoising method is selected. Non-local mean denoising makes full use of the redundant information in the image and preserves its detail features to the greatest extent while removing noise. The method is as follows: for each pixel, select the 8 surrounding pixels (its 8-neighborhood), compute the Euclidean distance between each neighborhood pixel and the current pixel, use the distance values as weights, take the Gaussian-weighted average of the gray values of the 8 pixels as the new estimate of the current pixel, traverse all pixels of the image to be denoised in turn, and output the new pixel estimates as the denoised image.
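A direct, non-authoritative rendering of this neighborhood-weighted estimate (the Gaussian bandwidth h and the untouched one-pixel border are illustrative choices; for full patch-based non-local means, OpenCV's cv2.fastNlMeansDenoising would be the off-the-shelf alternative):

```python
import numpy as np

def neighborhood_denoise(img: np.ndarray, h: float = 10.0) -> np.ndarray:
    """Re-estimate each pixel from its 8-neighborhood with Gaussian weights."""
    src = img.astype(np.float64)
    out = src.copy()
    rows, cols = src.shape
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            acc, wsum = 0.0, 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if dy == 0 and dx == 0:
                        continue                    # skip the center pixel
                    v = src[y + dy, x + dx]
                    d = v - src[y, x]               # gray-value distance to neighbor
                    w = np.exp(-(d * d) / (h * h))  # Gaussian weighting of distance
                    acc += w * v
                    wsum += w
            out[y, x] = acc / wsum                  # new estimate of current pixel
    return out.astype(img.dtype)
```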
The foregoing is illustrative only and not limiting. Persons skilled in the art, guided by the teachings disclosed herein, may make modifications and variations to produce equivalent embodiments, and any modifications and equivalents that do not depart from the spirit and scope of the invention are intended to fall within the scope of the claims.

Claims (4)

1. An image registration method based on a twin convolutional network, characterized by comprising the following steps:
S1, selecting consecutive multi-frame images of an area where a known target may appear, selecting the possible target area from the first frame image, and cropping a region 1.5 times that area as the ROI_1 region;
S2, cropping the input second frame image according to step S1 to obtain the ROI_2 region;
S3, using the cross-entropy loss function as the objective function for training the twin convolutional network, inputting the images of the ROI_1 and ROI_2 regions into the twin convolutional network, using the feature vectors output by the network as key points to construct feature descriptors of the images, obtaining the final matching point pairs of the two ROI region images with the RANSAC algorithm, and finally computing a transformation matrix by the least-squares algorithm, registration of the ROI_2 region to the ROI_1 region being realized through the transformation matrix;
the twin convolutional network having two sub-networks of identical structure that share convolution kernel parameters, each sub-network being formed by connecting convolutional layers (conv), a network concatenation module (concat), and a max-pooling module (maxpool), wherein each convolutional layer conv consists of several 3×3 convolution kernels, the concatenation module concat directly concatenates the two conv layers to be joined without channel-wise superposition or computation, and the max-pooling module maxpool finally outputs the feature vectors;
S4, calculating the peak signal-to-noise ratio (PSNR) of the ROI_1 and ROI_2 regions:

$$\mathrm{PSNR} = 10\cdot\log_{10}\!\left(\frac{MAX_I^2}{MSE}\right),\qquad MSE=\frac{1}{mn}\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\bigl[I(i,j)-K(i,j)\bigr]^2$$

where MAX_I represents the maximum value of the image gray level, m and n represent the width and height of the ROI region respectively, and I(i,j) and K(i,j) refer to the gray values at coordinates (i,j) in the ROI_1 and ROI_2 regions respectively;
S5, superimposing the registered images of the ROI_1 and ROI_2 regions, and taking the result as the reference image for the next operation;
S6, setting the PSNR value at this point as the basic threshold T_base, taking the reference image as the reference input and the input third frame image as the image to be processed, and obtaining a new PSNR value T according to steps S1-S5; if T − T_base ≥ 2 dB, going to step S7; otherwise inputting a fourth frame image and repeating steps S1-S5 until the condition is met;
S7, denoising the processed image to obtain a registration image.
2. The image registration method based on a twin convolutional network according to claim 1, wherein the image superposition in step S5 is the summation of gray values at corresponding pixel positions of the ROI_1 and ROI_2 regions.
3. The image registration method based on a twin convolutional network according to claim 1, wherein step S7 selects a non-local mean denoising method for denoising.
4. The image registration method based on the twin convolutional network according to claim 3, wherein step S7 specifically comprises: selecting the 8 pixels surrounding each pixel, calculating the Euclidean distance between each neighborhood pixel and the current pixel, using the distance values as weights, taking the Gaussian-weighted average of the gray values of the 8 pixels as the new estimate of the current pixel, traversing all pixels of the image to be denoised in turn, and outputting the new pixel estimates as the denoised image.
CN202210985592.1A 2022-08-17 2022-08-17 Image registration method based on twin convolution network Active CN115393406B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210985592.1A CN115393406B (en) 2022-08-17 2022-08-17 Image registration method based on twin convolution network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210985592.1A CN115393406B (en) 2022-08-17 2022-08-17 Image registration method based on twin convolution network

Publications (2)

Publication Number Publication Date
CN115393406A (en) 2022-11-25
CN115393406B (en) 2024-05-10

Family

ID=84120260

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210985592.1A Active CN115393406B (en) 2022-08-17 2022-08-17 Image registration method based on twin convolution network

Country Status (1)

Country Link
CN (1) CN115393406B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117079197B (en) * 2023-10-18 2024-03-05 山东诚祥建设集团股份有限公司 Intelligent building site management method and system

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105844587A (en) * 2016-03-17 2016-08-10 河南理工大学 Low-altitude unmanned aerial vehicle-borne hyperspectral remote-sensing-image automatic splicing method
CN108304883A (en) * 2018-02-12 2018-07-20 西安电子科技大学 Based on the SAR image matching process for improving SIFT
CN109191491A (en) * 2018-08-03 2019-01-11 华中科技大学 The method for tracking target and system of the twin network of full convolution based on multilayer feature fusion
CN111141997A (en) * 2019-11-26 2020-05-12 北京瑞盈智拓科技发展有限公司 Inspection robot based on ultraviolet and visible light image fusion and detection method
CN111369601A (en) * 2020-02-12 2020-07-03 西北工业大学 Remote sensing image registration method based on twin network
WO2021051593A1 (en) * 2019-09-19 2021-03-25 平安科技(深圳)有限公司 Image processing method and apparatus, computer device, and storage medium
CN114119443A (en) * 2021-11-28 2022-03-01 特斯联科技集团有限公司 Image fusion system based on multispectral camera
KR20220057691A (en) * 2020-10-30 2022-05-09 계명대학교 산학협력단 Image registration method and apparatus using siamese random forest

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2736011B1 (en) * 2012-11-26 2019-04-24 Nokia Technologies Oy Method, apparatus and computer program product for generating super-resolved images
US11030759B2 (en) * 2018-04-27 2021-06-08 Aselsan Elektronik Sanayi Ve Ticaret Anonim Sirketi Method for confident registration-based non-uniformity correction using spatio-temporal update mask
CN109873953A (en) * 2019-03-06 2019-06-11 深圳市道通智能航空技术有限公司 Image processing method, shooting at night method, picture processing chip and aerial camera
WO2020219915A1 (en) * 2019-04-24 2020-10-29 University Of Virginia Patent Foundation Denoising magnetic resonance images using unsupervised deep convolutional neural networks

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105844587A (en) * 2016-03-17 2016-08-10 河南理工大学 Low-altitude unmanned aerial vehicle-borne hyperspectral remote-sensing-image automatic splicing method
CN108304883A (en) * 2018-02-12 2018-07-20 西安电子科技大学 Based on the SAR image matching process for improving SIFT
CN109191491A (en) * 2018-08-03 2019-01-11 华中科技大学 The method for tracking target and system of the twin network of full convolution based on multilayer feature fusion
WO2021051593A1 (en) * 2019-09-19 2021-03-25 平安科技(深圳)有限公司 Image processing method and apparatus, computer device, and storage medium
CN111141997A (en) * 2019-11-26 2020-05-12 北京瑞盈智拓科技发展有限公司 Inspection robot based on ultraviolet and visible light image fusion and detection method
CN111369601A (en) * 2020-02-12 2020-07-03 西北工业大学 Remote sensing image registration method based on twin network
KR20220057691A (en) * 2020-10-30 2022-05-09 계명대학교 산학협력단 Image registration method and apparatus using siamese random forest
CN114119443A (en) * 2021-11-28 2022-03-01 特斯联科技集团有限公司 Image fusion system based on multispectral camera

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Ordinary Differential Equation and Complex Matrix Exponential for Multi-resolution Image Registration; Abhishek Nan et al.; arXiv:2007.13683v1; 2020-07-27; full text *
Restoration of Atmospheric Turbulence-distorted Images via RPCA and Quasiconformal Maps; Chun Pong Lau et al.; arXiv:1704.03140v2; 2017-09-19; full text *
Research on multi-temporal satellite remote sensing image matching based on a dual-channel deep network; An Puyang et al.; Jiangxi Science; 2020-02-29; full text *
Improved 3D-NDT registration algorithm based on 3D-Harris and FPFH; Fan Qiang et al.; Journal of Graphics; 2020-08-31; full text *
Medical image fusion algorithm exploring latent space based on generative adversarial networks; Xiao Erliang et al.; Information and Control; 2021-12-31; full text *
Dense matching method combining affine transformation and gradient descriptors; Fan Qiang et al.; Science of Surveying and Mapping; 2021-10-31; full text *

Also Published As

Publication number Publication date
CN115393406A (en) 2022-11-25

Similar Documents

Publication Publication Date Title
CN104796582B (en) Video image denoising and Enhancement Method and device based on random injection retinex
CN110782477A (en) Moving target rapid detection method based on sequence image and computer vision system
CN109034184B (en) Grading ring detection and identification method based on deep learning
CN111882504B (en) Method and system for processing color noise in image, electronic device and storage medium
CN105913404A (en) Low-illumination imaging method based on frame accumulation
CN111275652B (en) Method for removing haze in urban remote sensing image
CN115393406B (en) Image registration method based on twin convolution network
CN104318529A (en) Method for processing low-illumination images shot in severe environment
CN112907493A (en) Multi-source battlefield image rapid mosaic fusion algorithm under unmanned aerial vehicle swarm cooperative reconnaissance
CN111179186A (en) Image denoising system for protecting image details
CN113658067B (en) Water body image enhancement method and system in air tightness detection based on artificial intelligence
CN113379861B (en) Color low-light-level image reconstruction method based on color recovery block
CN111311503A (en) Night low-brightness image enhancement system
CN113205494B (en) Infrared small target detection method and system based on adaptive scale image block weighting difference measurement
CN109003247B (en) Method for removing color image mixed noise
Ponomaryov et al. Fuzzy color video filtering technique for sequences corrupted by additive Gaussian noise
CN111461999A (en) SAR image speckle suppression method based on super-pixel similarity measurement
Parihar et al. A study on dark channel prior based image enhancement techniques
CN114399440B (en) Image processing method, image processing network training method and device and electronic equipment
CN113269686B (en) Method and device for processing brightness noise, storage medium and terminal
CN111080560B (en) Image processing and identifying method
CN113469889B (en) Image noise reduction method and device
Park et al. Automatic radial un-distortion using conditional generative adversarial network
CN116894794B (en) Quick denoising method for video
Lian et al. Learning tone mapping function for dehazing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Country or region after: China

Address after: 430223 Jiangxia Road 1, Mian Shan development area, Jiangxia District, Wuhan, Hubei

Applicant after: China Shipbuilding Intelligent Control Technology (Wuhan) Co.,Ltd.

Address before: 430223 Jiangxia Road 1, Mian Shan development area, Jiangxia District, Wuhan, Hubei

Applicant before: WUHAN HUAZHONG TIANJING TONGSHI TECHNOLOGY CO.,LTD.

Country or region before: China

TA01 Transfer of patent application right

Effective date of registration: 20240411

Address after: 430223 Jiangxia Road 1, Mian Shan development area, Jiangxia District, Wuhan, Hubei

Applicant after: China Shipbuilding Intelligent Control Technology (Wuhan) Co.,Ltd.

Country or region after: China

Applicant after: Huazhong Optoelectronic Technology Research Institute (717 Research Institute of China Shipbuilding Corp.)

Address before: 430223 Jiangxia Road 1, Mian Shan development area, Jiangxia District, Wuhan, Hubei

Applicant before: China Shipbuilding Intelligent Control Technology (Wuhan) Co.,Ltd.

Country or region before: China

GR01 Patent grant