CN117495741B - Distortion restoration method based on large convolution contrast learning - Google Patents

Distortion restoration method based on large convolution contrast learning

Info

Publication number: CN117495741B (application CN202311843193.2A)
Authority: CN (China)
Prior art keywords: convolution, contrast learning, large convolution, distortion, image data
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN117495741A
Inventors: 谯曦月, 王双, 李映, 陈伟, 胡佳良, 吴梦迪
Current and original assignee: Chengdu Huoan Metrology Technical Center Co., Ltd.
Application filed 2023-12-29 by Chengdu Huoan Metrology Technical Center Co., Ltd.; priority to CN202311843193.2A
Publication of CN117495741A: 2024-02-02; grant and publication of CN117495741B: 2024-04-12

Classifications

    • G06V 10/7715 — Feature extraction, e.g. by transforming the feature space (image or video recognition or understanding using machine learning)
    • G06V 10/774 — Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V 10/82 — Image or video recognition or understanding using neural networks
    • G06N 3/0464 — Convolutional networks [CNN, ConvNet]
    • G06N 3/08 — Learning methods (neural networks)
    • G06T 2207/20081 — Training; learning (indexing scheme for image analysis or image enhancement)
    • G06T 2207/20084 — Artificial neural networks [ANN]
    • Y02T 10/40 — Engine management systems (climate change mitigation tagging)


Abstract

The invention discloses a distortion restoration method based on large convolution contrast learning, in the technical field of image processing. The method comprises the following steps: acquiring initial image data, expanding the image data, and setting all the image data to a uniform size m×m; constructing a large convolution contrast learning model, wherein the model comprises a large convolution contrast learning module and a position encoding module; dividing the training data photo images into two groups, and respectively feeding the two groups into the constructed large convolution contrast learning model for alternating training iterations; and feeding the distorted image under test into the trained model, recomputing the pixel values at the image's distorted positions through a mapping layer to restore the image. Based on neural-network principles, the invention combines multiple convolutional and fully connected layers and performs contrast learning between distorted and normal images by referencing each pixel's own value and its surrounding information, so that distorted images can finally be restored accurately.

Description

Distortion restoration method based on large convolution contrast learning
Technical Field
The invention relates to the technical field of image processing, and in particular to a distortion restoration method based on large convolution contrast learning.
Background
Existing image distortion restoration techniques can only handle certain simple distortions, such as fisheye distortion. For distortions of other kinds — for example severe distortion, stretching distortion, and local distortion that varies in degree and pattern under the influence of the natural environment — the traditional approach of solving formulas based on distortion coefficients cannot resolve the problem well, and in some extreme cases the effect may be unsatisfactory.
The traditional method of restoring images through distortion coefficients depends heavily on the camera hardware and suits only a narrow range of scenes. Because lighting conditions, parameter configurations, physical fittings and shooting postures differ, there is no unique standard value for the coefficients; the results are therefore imperfect, and some macroscopically visible distortion is not restored to normal. Moreover, the traditional method's calculation steps are comparatively cumbersome, requiring function approximation via Taylor expansion and multiple coordinate transformations, so small errors arising during calculation, or during hardware manufacture and use, are easily accumulated and amplified, leaving the stability and accuracy of the restoration result unsatisfactory.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a distortion restoration method based on large convolution contrast learning, which improves the restoration of distorted images in natural business scenes.
The aim of the invention is realized by the following technical scheme:
a distortion recovery method based on large convolution contrast learning comprises the following steps:
s1: acquiring initial image data, expanding the image data, and setting all the image data to be in a unified specification size m;
s2: constructing a large convolution comparison learning model, wherein the model comprises a large convolution comparison learning module and a position coding module;
s3: dividing the training data photo images into two groups, and respectively inputting the two groups into a constructed large convolution contrast learning model for alternate training iteration;
s4: and inputting the distorted image to be detected into a trained large convolution contrast learning model, and recalculating pixel values of the image distortion position through a mapping layer to realize distortion reduction.
Further, the initial image data are normal, slightly distorted and severely distorted car image data acquired from four different, evenly distributed time periods.
Further, the large convolution contrast learning module consists of an adjustable number of convolution layers; the convolution kernel size of each layer is (m-1)×(m-1), and these convolution layers share their convolution parameters.
Further, the position encoding module first flattens the whole input picture into a one-dimensional vector, with the index i starting from 0, and then position-encodes the one-dimensional vector to obtain the image position code PosO, using the position encoding formula:

PE(pos, 2i) = sin(pos / 10000^(2i / d_model))
PE(pos, 2i+1) = cos(pos / 10000^(2i / d_model))

wherein pos represents the position in the input sequence, 0 <= pos <= L-1; PE(pos, 2i) represents the position encoding of even-indexed pixel points and PE(pos, 2i+1) that of odd-indexed pixel points; d_model represents the dimension of the output embedding space; i maps to the column index, 0 <= i < d_model/2, and each single value of i maps to one sine and one cosine function.
Further, the large convolution contrast learning model also includes a merge calculation module, in which the merged output picture pixel value NI(x, y) is obtained by adding each distorted picture's original pixel value OI(x, y), the position code PosO and the convolution result:

NI(x, y) = OI(x, y) + PosO(x, y) + (OI * K)(x, y)

wherein * denotes the convolution operation and K is a convolution layer;
the merge calculation module applies an ELU activation function to produce the merged output.
Further, the large convolution contrast learning model also comprises five fully connected layers with shared gradients and a single-node output layer; each fully connected layer contains 2000 neurons, and the single-node output layer uses a Sigmoid nonlinear activation function.
Further, the loss function of the large convolution contrast learning model uses the mean square error J_MSE, which measures the average of the squared differences between predicted and true values:

J_MSE = (1/N) * Σ_{i=1}^{N} (BI_i - NI_i)²

where N is the total number of pixel samples in the picture, i runs from 1 to N, BI_i is the pixel value of the i-th sample of the normal picture, and NI_i is the pixel value of the i-th sample of the merged output picture.
Further, the image data are expanded by enhancement processing of the images, including brightness, contrast and saturation adjustment.
Further, setting all image data to the uniform size m×m consists of converting the collected initial normal pictures into grayscale pictures, scaling them to m×m in equal proportion, and filling the remaining area with 0 values, using interpolation filling or affine transformation.
Further, the mapping layer multiplies the 0-to-1 training or test value output by the Sigmoid nonlinear activation function by the mapping value 255 and rounds the result, mapping it into the real pixel value interval 0 to 255.
The beneficial effects of the invention are as follows:
1) The method removes the dependence on the hardware level, omits cumbersome transformation steps, and focuses solely on the problem of restoring the distorted image.
2) Based on neural-network principles, multiple convolutional and fully connected layers are combined, and contrast learning between distorted and normal images is performed by referencing each pixel's own value and its surrounding information.
3) A combination of linear and nonlinear weights is used to express the result, strengthening the model's expressive power and greatly enhancing its ability to restore pixel values through mapping.
4) The large convolution contrast learning approach trains with a shared large convolution and splices the results, balancing effectiveness and performance.
5) Incorporating the Transformer's position encoding gives the model information about each pixel's current position, providing richer input and greatly improving the model's convergence speed and quality.
Drawings
FIG. 1 is a schematic overall flow chart of the present invention.
Detailed Description
The technical solutions of the present invention will be clearly and completely described below with reference to the embodiments, and it is apparent that the described embodiments are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by a person skilled in the art without any inventive effort, are intended to be within the scope of the present invention, based on the embodiments of the present invention.
Referring to fig. 1, the present invention provides a technical solution:
a distortion recovery method based on large convolution contrast learning comprises the following steps:
s1: and acquiring initial image data, expanding the image data, and setting all the image data to be in a unified specification size m x m.
In this embodiment, the initial image data are normal, slightly distorted and severely distorted car images collected from four different, evenly distributed time periods of a single day: noon (11:00:00-12:59:59), midnight (23:00:00-00:59:59), evening (17:00:00-18:59:59) and dawn (05:00:00-06:59:59). Keeping these typical time periods evenly represented in the dataset ensures the diversity and balance of the data sources.
The image data are expanded by enhancement processing of the images, including brightness, contrast and saturation adjustment. This is done to improve the fit of the later model and to adapt, as far as possible, to the color-adjustment behavior of cameras from different manufacturers, so that the model can also cope to some extent with distortion caused by camera damage. The enhancement greatly enriches data diversity and prepares the model for applicability to real business scenarios.
Setting all the image data to the uniform size m×m begins by converting the collected initial normal pictures into grayscale pictures; distortion restoration does not depend on the color of the image, so this reduces the amount of computation and saves computing resources. The pictures are then scaled to m×m in equal proportion, the remaining area is filled with 0 values, and interpolation filling or affine transformation is applied. The image data of this embodiment are scaled to 640×640.
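As an illustrative sketch of the grayscale conversion and zero-filling described above (not the patent's implementation; the helper names `to_gray` and `pad_to_square` and the BT.601 luma weights are assumptions made for illustration):

```python
def to_gray(pixel):
    """Luma-weighted grayscale value for an (r, g, b) pixel (ITU-R BT.601 weights)."""
    r, g, b = pixel
    return 0.299 * r + 0.587 * g + 0.114 * b

def pad_to_square(img, m, fill=0):
    """Place a (scaled) grayscale image, given as a list of rows, in the
    top-left corner of an m x m canvas and fill the remaining area with 0."""
    h, w = len(img), len(img[0])
    assert h <= m and w <= m
    out = [[fill] * m for _ in range(m)]
    for y in range(h):
        for x in range(w):
            out[y][x] = img[y][x]
    return out
```

In the embodiment m would be 640; any m at least as large as the scaled image works here.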
S2: and constructing a large convolution comparison learning model, wherein the model comprises a large convolution comparison learning module and a position coding module.
The large convolution contrast learning module consists of an adjustable number of convolution layers; the convolution kernel size of each layer is (m-1)×(m-1), and these convolution layers share their convolution parameters.
In this embodiment all initial image data are scaled to 640×640, so the convolution kernel size is designed to be 639×639 and the number of convolution layers is set to 20.
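The parameter sharing across the convolution stack can be sketched as follows (a toy single-channel "valid" convolution in plain Python; the 2×2 kernel stands in for the embodiment's 639×639 kernel, and the function name `conv2d_valid` is an invented illustration, not the patent's code):

```python
def conv2d_valid(img, kernel):
    """'Valid' 2-D correlation of a single-channel image with one kernel."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(img), len(img[0])
    out = []
    for y in range(h - kh + 1):
        row = []
        for x in range(w - kw + 1):
            row.append(sum(kernel[j][i] * img[y + j][x + i]
                           for j in range(kh) for i in range(kw)))
        out.append(row)
    return out

# One kernel object shared by every layer in the stack: updating it
# during training updates all 20 layers at once.
shared_kernel = [[0.25, 0.25], [0.25, 0.25]]  # toy stand-in for 639x639
layers = [shared_kernel] * 20                  # all entries are the same object
```

Sharing one kernel keeps the parameter count of the 20-layer stack equal to that of a single large kernel, which is the "effect and performance" trade-off the method aims at.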
The position encoding module first flattens the whole input picture into a one-dimensional vector, with the index i starting from 0, and then position-encodes the one-dimensional vector using the position encoding formula:

PE(pos, 2i) = sin(pos / 10000^(2i / d_model))
PE(pos, 2i+1) = cos(pos / 10000^(2i / d_model))

wherein pos represents the position in the input sequence, 0 <= pos <= L-1; PE(pos, 2i) represents the position encoding of even-indexed pixel points and PE(pos, 2i+1) that of odd-indexed pixel points; d_model represents the dimension of the output embedding space; i maps to the column index, 0 <= i < d_model/2, and each single value of i maps to one sine and one cosine function.
The position encoding module uses sin and cos alternately to represent the relative positions 2i and 2i+1, i.e. even positions use sin and odd positions use cos; the calculated position encoding values lie in the range -1 to 1.
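A minimal sketch of this sinusoidal encoding (the function name `positional_encoding` is an illustrative assumption; the formula is the standard Transformer one described above):

```python
import math

def positional_encoding(length, d_model):
    """PE(pos, 2i)   = sin(pos / 10000**(2i/d_model))
       PE(pos, 2i+1) = cos(pos / 10000**(2i/d_model))"""
    pe = []
    for pos in range(length):
        row = []
        for i in range(d_model // 2):
            angle = pos / (10000 ** (2 * i / d_model))
            row.append(math.sin(angle))  # even column 2i
            row.append(math.cos(angle))  # odd column 2i + 1
        pe.append(row)
    return pe
```

For a flattened 640×640 picture the sequence length would be 409600.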
The large convolution contrast learning model of this embodiment further includes a merge calculation module, which obtains the merged output picture pixel value NI(x, y) by adding each distorted picture's original pixel value OI(x, y), the position code PosO and the convolution result:

NI(x, y) = OI(x, y) + PosO(x, y) + (OI * K)(x, y)

wherein * denotes the convolution operation and K is a convolution layer;
the merging calculation module outputs a merging result by adopting an ELU (ele unit) activation function, namely an exponential linear unit activation function.
The large convolution contrast learning model of this embodiment further comprises five fully connected layers with shared gradients and a single-node output layer; each fully connected layer contains 2000 neurons, and the single-node output layer uses a Sigmoid nonlinear activation function.
S3: dividing the training data photo images into two groups, and respectively feeding the two groups into the constructed large convolution contrast learning model for alternating training iterations. One group contains distorted pictures paired with the corresponding normal pictures in equal numbers; the other contains normal pictures paired with identical copies of those normal pictures in equal numbers.
Training uses a batch size of 50 and more than 10000 epochs. The loss function of the large convolution contrast learning model uses the mean square error J_MSE, which measures the average of the squared differences between predicted and true values:

J_MSE = (1/N) * Σ_{i=1}^{N} (BI_i - NI_i)²

where N is the total number of pixel samples in the picture, i runs from 1 to N, BI_i is the pixel value of the i-th sample of the normal picture, and NI_i is the pixel value of the i-th sample of the merged output picture.
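The loss above is an ordinary per-pixel mean squared error; a direct sketch (the function name `mse_loss` is an illustrative assumption):

```python
def mse_loss(bi, ni):
    """J_MSE = (1/N) * sum_i (BI_i - NI_i)**2 over N pixel samples."""
    assert len(bi) == len(ni) and bi
    n = len(bi)
    return sum((b - p) ** 2 for b, p in zip(bi, ni)) / n
```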
S4: feeding the distorted image under test into the trained large convolution contrast learning model, and recomputing the pixel values at the image's distorted positions through a mapping layer to restore the image.
Further, the mapping layer multiplies the 0-to-1 training or test value output by the Sigmoid nonlinear activation function by the mapping value 255 and rounds the result, mapping it into the real pixel value interval 0 to 255.
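The mapping layer's behavior can be sketched directly (the name `map_to_pixel` is an illustrative assumption, and the clamp is an added safety guard not stated in the text):

```python
def map_to_pixel(s):
    """Map a Sigmoid output s in [0, 1] to an integer pixel value in [0, 255]
    by multiplying by the mapping value 255 and rounding."""
    v = round(s * 255)
    return min(max(v, 0), 255)  # clamp, in case s strays outside [0, 1]
```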
With the image distortion restoration method provided by the invention, distorted images can be restored accurately by the large convolution contrast learning model; compared with earlier manual comparison and traditional restoration approaches, both accuracy and timeliness are greatly improved.
The foregoing is merely a preferred embodiment of the invention. It should be understood that the invention is not limited to the form disclosed herein and is not to be construed as excluding other embodiments; it is capable of use in various other combinations, modifications and environments, and of alteration within the scope of the inventive concept through the teaching above or through the skill or knowledge of the relevant art. All modifications and variations that do not depart from the spirit and scope of the invention are intended to fall within the scope of the appended claims.

Claims (6)

1. A distortion restoration method based on large convolution contrast learning, characterized by comprising the following steps:
s1: acquiring initial image data, expanding the image data, and setting all the image data to a uniform size m×m;
s2: constructing a large convolution contrast learning model, wherein the model comprises a large convolution contrast learning module and a position encoding module;
s3: dividing the training data photo images into two groups, and respectively feeding the two groups into the constructed large convolution contrast learning model for alternating training iterations;
s4: feeding the distorted image under test into the trained large convolution contrast learning model, and recomputing the pixel values at the image's distorted positions through a mapping layer to restore the image;
the large convolution contrast learning module consists of an adjustable number of convolution layers, wherein the convolution kernel size of each convolution layer is (m-1)×(m-1), and these convolution layers share convolution parameters;
the position encoding module first flattens the whole input picture into a one-dimensional vector, with the index i starting from 0, and then position-encodes the one-dimensional vector to obtain the image position code PosO, using the position encoding formula:

PE(pos, 2i) = sin(pos / 10000^(2i / d_model))
PE(pos, 2i+1) = cos(pos / 10000^(2i / d_model))

wherein pos represents the position in the input sequence, 0 <= pos <= L-1; PE(pos, 2i) represents the position encoding of even-indexed pixel points and PE(pos, 2i+1) that of odd-indexed pixel points; d_model represents the dimension of the output embedding space; i maps to the column index, 0 <= i < d_model/2, and each single value of i maps to one sine and one cosine function;
the large convolution contrast learning model further comprises a merge calculation module, in which the merged output picture pixel value NI(x, y) is obtained by adding each distorted picture's original pixel value OI(x, y), the position code PosO and the convolution result:

NI(x, y) = OI(x, y) + PosO(x, y) + (OI * K)(x, y)

wherein * denotes the convolution operation and K is a convolution layer;
the merge calculation module applies an ELU activation function to produce the merged output;
the large convolution contrast learning model further comprises five fully connected layers with shared gradients and a single-node output layer, wherein each fully connected layer contains 2000 neurons and the single-node output layer uses a Sigmoid nonlinear activation function.
2. The distortion restoration method based on large convolution contrast learning according to claim 1, wherein: the initial image data are normal, slightly distorted and severely distorted car image data acquired from four different, evenly distributed time periods.
3. The distortion restoration method based on large convolution contrast learning according to claim 1, wherein: the loss function of the large convolution contrast learning model uses the mean square error J_MSE, which measures the average of the squared differences between predicted and true values:

J_MSE = (1/N) * Σ_{i=1}^{N} (BI_i - NI_i)²

where N is the total number of pixel samples in the picture, i runs from 1 to N, BI_i is the pixel value of the i-th sample of the normal picture, and NI_i is the pixel value of the i-th sample of the merged output picture.
4. The distortion restoration method based on large convolution contrast learning according to claim 1, wherein: the image data are expanded by enhancement processing of the images, including brightness, contrast and saturation adjustment.
5. The distortion restoration method based on large convolution contrast learning according to claim 1, wherein: setting all image data to the uniform size m×m consists of converting the collected initial normal pictures into grayscale pictures, scaling them to m×m in equal proportion, and filling the remaining area with 0 values, using interpolation filling or affine transformation.
6. The distortion restoration method based on large convolution contrast learning according to claim 1, wherein: the mapping layer multiplies the 0-to-1 training or test value output by the Sigmoid nonlinear activation function by the mapping value 255 and rounds the result, mapping it into the real pixel value interval 0 to 255.
CN202311843193.2A 2023-12-29 2023-12-29 Distortion restoration method based on large convolution contrast learning Active CN117495741B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311843193.2A CN117495741B (en) 2023-12-29 2023-12-29 Distortion restoration method based on large convolution contrast learning


Publications (2)

Publication Number Publication Date
CN117495741A CN117495741A (en) 2024-02-02
CN117495741B (en) 2024-04-12

Family

ID=89676809

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311843193.2A Active CN117495741B (en) 2023-12-29 2023-12-29 Distortion restoration method based on large convolution contrast learning

Country Status (1)

Country Link
CN (1) CN117495741B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106599939A (en) * 2016-12-30 2017-04-26 深圳市唯特视科技有限公司 Real-time target detection method based on region convolutional neural network
CN106960456A (en) * 2017-03-28 2017-07-18 长沙全度影像科技有限公司 A kind of method that fisheye camera calibration algorithm is evaluated
CN113240677A (en) * 2021-05-06 2021-08-10 浙江医院 Retina optic disc segmentation method based on deep learning
CN113658115A (en) * 2021-07-30 2021-11-16 华南理工大学 Image anomaly detection method for generating countermeasure network based on deep convolution
CN114062839A (en) * 2021-11-01 2022-02-18 山东职业学院 Railway power line fault positioning device and method thereof
CN115359457A (en) * 2022-08-24 2022-11-18 纵目科技(上海)股份有限公司 3D target detection method and system based on fisheye image
WO2022245434A1 (en) * 2021-05-21 2022-11-24 Qualcomm Incorporated Implicit image and video compression using machine learning systems
CN115424127A (en) * 2022-09-22 2022-12-02 苏州浪潮智能科技有限公司 Device installation detection method, device, computer equipment and storage medium
CN116012850A (en) * 2023-01-29 2023-04-25 河海大学 Handwritten mathematical expression recognition method based on octave convolution and encoding and decoding
CN116071292A (en) * 2022-10-08 2023-05-05 中国人民解放军国防科技大学 Ophthalmoscope retina image blood vessel identification method based on contrast generation learning
CN117237801A (en) * 2023-08-22 2023-12-15 西北工业大学 Multi-mode remote sensing image change detection method based on self-supervision learning


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A neural-network-based distorted image correction method (一种基于神经网络的畸变图像校正方法); 王珂娜, 邹北骥, 黄文梅; Journal of Image and Graphics (中国图象图形学报); 2005-05-25 (No. 05); pp. 603-606 *

Also Published As

Publication number Publication date
CN117495741A (en) 2024-02-02


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant