CN108665410B - Image super-resolution reconstruction method, device and system - Google Patents

Image super-resolution reconstruction method, device and system

Info

Publication number
CN108665410B
Authority
CN
China
Prior art keywords
image
images
frames
angle
acquired
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710210828.3A
Other languages
Chinese (zh)
Other versions
CN108665410A (en)
Inventor
蔡晓望
肖飞
范蒙
俞海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201710210828.3A priority Critical patent/CN108665410B/en
Publication of CN108665410A publication Critical patent/CN108665410A/en
Application granted granted Critical
Publication of CN108665410B publication Critical patent/CN108665410B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformations in the plane of the image
    • G06T 3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4053: Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformations in the plane of the image
    • G06T 3/60: Rotation of whole images or parts thereof

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

An embodiment of the invention discloses an image super-resolution reconstruction method, device and system. The method comprises: obtaining multiple frames of images to be processed, and selecting one frame from the multiple frames as a reference frame image; obtaining image transformation parameters corresponding to each of the remaining frames according to device shake information and device parameters; calibrating each remaining frame against the reference frame image according to the image transformation parameters, to obtain a target image corresponding to each remaining frame; and performing image reconstruction on the reference frame image and each target image to obtain a super-resolution image corresponding to the multiple frames. Because the images are calibrated simply by taking the reference frame image as the reference according to the image transformation parameters, the calculation process is simple, the amount of calculation is small, and the time required for image calibration is reduced.

Description

Image super-resolution reconstruction method, device and system
Technical Field
The present invention relates to the technical field of image processing, and in particular to an image super-resolution reconstruction method, device and system.
Background
A super-resolution image has high pixel density, provides rich detail, and can describe an objective scene accurately and in detail. With continuing economic and technological progress, super-resolution images are in wide demand and play important roles in fields such as video security surveillance and aerial reconnaissance. However, owing to size limits of the camera imaging system and its sensor elements, and to the influence of the imaging environment and object motion, the captured digital image inevitably suffers a certain degradation and often cannot meet the requirements of reconnaissance, surveillance and the like; obtaining a true super-resolution image is therefore very difficult.
Theoretically, the most straightforward way to increase image resolution is to increase the sensor density, that is, the number of sensing elements in the imaging system. Although effective, this approach faces hardware bottlenecks, is expensive, and is difficult to deploy widely. Consequently, improving image resolution with signal processing techniques is currently the principal method of obtaining super-resolution images. Image super-resolution reconstruction is an image processing technique that processes one frame or multiple frames of low-resolution images and reconstructs from them a single super-resolution frame.
Existing super-resolution reconstruction methods that obtain a super-resolution image from multiple low-resolution frames generally comprise two stages: image registration and image reconstruction. Image registration extracts feature points from the low-resolution frames, finds matched feature point pairs through similarity measures, estimates the sub-pixel relative displacement between the frames from the matched pairs to obtain the image space coordinate transformation parameters, and finally calibrates the frames with those parameters. Image reconstruction then fuses the registered low-resolution frames into one super-resolution frame.
As can be seen, the image registration stage is complex and tedious, computationally heavy, and time-consuming.
Disclosure of Invention
Embodiments of the invention disclose an image super-resolution reconstruction method, device and system to solve the problems of a complex process and a large amount of calculation in the existing image super-resolution reconstruction technology. The technical solution is as follows:
In a first aspect, an embodiment of the present invention provides an image super-resolution reconstruction method, the method comprising:
obtaining multiple frames of images to be processed, and selecting one frame from the multiple frames as a reference frame image;
obtaining image transformation parameters corresponding to each of the remaining frames according to device shake information and device parameters, where the image transformation parameters are: the calibration parameters required for calibrating the positional relation between image pixel points with the reference frame image as the standard;
calibrating each of the remaining frames against the reference frame image according to the image transformation parameters, to obtain target images corresponding to the remaining frames respectively;
and performing image reconstruction on the reference frame image and each target image to obtain a super-resolution image corresponding to the multiple frames.
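For orientation, the sketch below strings the four claimed steps together. The three callables and their signatures are hypothetical interfaces chosen for illustration, not part of the patent.

```python
def super_resolve(frames, ref_index, transform_params, calibrate, reconstruct):
    """Orchestrate the claimed method: pick a reference frame, derive the
    per-frame transformation parameters from device shake information,
    calibrate the remaining frames, then reconstruct one SR image."""
    ref = frames[ref_index]  # the selected reference frame image
    targets = [calibrate(img, transform_params(i), ref)
               for i, img in enumerate(frames) if i != ref_index]
    return reconstruct(ref, targets)
```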
Optionally, the device shake information includes: a pitch angle variation, a tilt angle variation and a rotation angle variation; the device parameters include: focal length, image resolution, imaging plane height and imaging plane width; and the image transformation parameters include: a vertical transformation parameter, a horizontal transformation parameter and a rotation transformation parameter;
the step of obtaining the image transformation parameters corresponding to the remaining frames according to the device shake information and the device parameters comprises:
acquiring the pitch angle, rotation angle and tilt angle at which each of the multiple frames was captured;
calculating the pitch angle variation of each remaining frame from the pitch angle when the reference frame image was captured and the pitch angle when that frame was captured, and calculating the vertical transformation parameter of that frame from the pitch angle variation, the focal length, the number of vertical pixels of the image resolution and the imaging plane height;
calculating the rotation angle variation of each remaining frame from the rotation angle when the reference frame image was captured and the rotation angle when that frame was captured, and calculating the horizontal transformation parameter of that frame from the rotation angle variation, the focal length, the number of horizontal pixels of the image resolution and the imaging plane width;
and calculating the tilt angle variation of each remaining frame from the tilt angle when the reference frame image was captured and the tilt angle when that frame was captured, and calculating the rotation transformation parameter of that frame from the tilt angle variation.
Optionally, the step of calculating the vertical transformation parameters of the remaining frames from the pitch angle variation, the focal length, the number of vertical pixels of the image resolution and the imaging plane height comprises:
using the formula:
d_ver^i = tan(∠A_i − ∠A_0) · r · H_im / H_sensor
to calculate the vertical transformation parameters of the remaining frames;
the step of calculating the horizontal transformation parameters of the remaining frames from the rotation angle variation, the focal length, the number of horizontal pixels of the image resolution and the imaging plane width comprises:
using the formula:
d_hor^i = tan(∠B_i − ∠B_0) · r · W_im / W_sensor
to calculate the horizontal transformation parameters of the remaining frames;
the step of calculating the rotation transformation parameters of the remaining frames from the tilt angle variation comprises:
using the formula α_i = ∠C_i − ∠C_0 to calculate the rotation transformation parameters of the remaining frames;
where d_ver^i is the vertical transformation parameter of image i among the remaining frames, d_hor^i is the horizontal transformation parameter of image i, α_i is the rotation transformation parameter of image i, r is the focal length, ∠A_i − ∠A_0 is the pitch angle variation when image i is captured, ∠A_0 is the pitch angle when the reference frame image is captured, ∠A_i is the pitch angle when image i is captured, i ∈ [1, N], N is the number of remaining frames, H_im is the number of vertical pixels of the image resolution, W_im is the number of horizontal pixels of the image resolution, H_sensor is the imaging plane height, W_sensor is the imaging plane width, ∠B_i − ∠B_0 is the rotation angle variation when image i is captured, ∠B_0 is the rotation angle when the reference frame image is captured, ∠B_i is the rotation angle when image i is captured, ∠C_i − ∠C_0 is the tilt angle variation when image i is captured, ∠C_0 is the tilt angle when the reference frame image is captured, and ∠C_i is the tilt angle when image i is captured.
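As a concrete reading of the formulas above, here is a minimal sketch; it assumes the reconstructed expressions (angles in radians, focal length in the same physical unit as the imaging plane dimensions) and is not taken verbatim from the patent.

```python
import math

def transform_params(angles_ref, angles_i, r, H_im, W_im, H_sensor, W_sensor):
    """Transformation parameters of image i relative to the reference frame.

    angles_* are (pitch, rotation, tilt) tuples in radians; r is the focal
    length in the same unit as the imaging plane height/width.
    """
    d_pitch = angles_i[0] - angles_ref[0]            # ∠A_i − ∠A_0
    d_rot   = angles_i[1] - angles_ref[1]            # ∠B_i − ∠B_0
    d_tilt  = angles_i[2] - angles_ref[2]            # ∠C_i − ∠C_0
    d_ver = math.tan(d_pitch) * r * H_im / H_sensor  # vertical shift, pixels
    d_hor = math.tan(d_rot)   * r * W_im / W_sensor  # horizontal shift, pixels
    alpha = d_tilt                                   # in-plane rotation, radians
    return d_ver, d_hor, alpha
```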
Optionally, the device shake information includes: a pitch angle variation, a tilt angle variation and a rotation angle variation; the device parameters include: focal length, image resolution, imaging plane height and imaging plane width; and the image transformation parameters include: a vertical transformation parameter, a horizontal transformation parameter and a rotation transformation parameter;
before the step of obtaining the image transformation parameters corresponding to the remaining frames according to the device shake information and the device parameters, the method further comprises:
calculating the acquisition time difference between the reference frame image and each remaining frame according to the time information of capturing the multiple frames;
the step of obtaining the image transformation parameters corresponding to the remaining frames according to the device shake information and the device parameters comprises:
acquiring the pitch angular velocity, rotation angular velocity and tilt angular velocity at which each of the multiple frames was captured;
calculating the pitch angle variation of each remaining frame from the pitch angular velocity when the reference frame image was captured, the pitch angular velocity when that frame was captured, and the acquisition time difference, and calculating the vertical transformation parameter of that frame from the pitch angle variation, the focal length, the number of vertical pixels of the image resolution and the imaging plane height;
calculating the rotation angle variation of each remaining frame from the rotation angular velocity when the reference frame image was captured, the rotation angular velocity when that frame was captured, and the acquisition time difference, and calculating the horizontal transformation parameter of that frame from the rotation angle variation, the focal length, the number of horizontal pixels of the image resolution and the imaging plane width;
and calculating the tilt angle variation of each remaining frame from the tilt angular velocity when the reference frame image was captured, the tilt angular velocity when that frame was captured, and the acquisition time difference, and calculating the rotation transformation parameter of that frame from the tilt angle variation.
Optionally, the step of calculating the vertical transformation parameters of the remaining frames from the pitch angle variation, the focal length, the number of vertical pixels of the image resolution and the imaging plane height comprises:
using the formula:
d_ver^i = tan((ω0_pitch + ωi_pitch)·t_i/2) · r · H_im / H_sensor
to calculate the vertical transformation parameters of the remaining frames;
the step of calculating the horizontal transformation parameters of the remaining frames from the rotation angle variation, the focal length, the number of horizontal pixels of the image resolution and the imaging plane width comprises:
using the formula:
d_hor^i = tan((ω0_yaw + ωi_yaw)·t_i/2) · r · W_im / W_sensor
to calculate the horizontal transformation parameters of the remaining frames;
the step of calculating the rotation transformation parameters of the remaining frames from the tilt angle variation comprises:
using the formula:
α_i = (ω0_roll + ωi_roll)·t_i/2
to calculate the rotation transformation parameters of the remaining frames;
where d_ver^i is the vertical transformation parameter of image i among the remaining frames, d_hor^i is the horizontal transformation parameter of image i, α_i is the rotation transformation parameter of image i, r is the focal length, (ω0_pitch + ωi_pitch)·t_i/2 is the pitch angle variation when image i is captured, ω0_pitch is the pitch angular velocity when the reference frame image is captured, ωi_pitch is the pitch angular velocity when image i is captured, i ∈ [1, N], N is the number of remaining frames, t_i is the acquisition time difference between image i and the reference frame image, H_im is the number of vertical pixels of the image resolution, W_im is the number of horizontal pixels of the image resolution, H_sensor is the imaging plane height, W_sensor is the imaging plane width, (ω0_yaw + ωi_yaw)·t_i/2 is the rotation angle variation when image i is captured, ω0_yaw is the rotation angular velocity when the reference frame image is captured, ωi_yaw is the rotation angular velocity when image i is captured, (ω0_roll + ωi_roll)·t_i/2 is the tilt angle variation when image i is captured, ω0_roll is the tilt angular velocity when the reference frame image is captured, and ωi_roll is the tilt angular velocity when image i is captured.
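A matching sketch for the angular-velocity variant. Approximating each angle change by the mean of the two angular velocities times the acquisition time difference follows the variable definitions above, but the exact integration rule is an assumption here.

```python
import math

def transform_params_from_rates(rates_ref, rates_i, t_i, r,
                                H_im, W_im, H_sensor, W_sensor):
    """Same outputs as the angle-based version, but the angle variations are
    integrated from gyroscope angular velocities (radians per second)."""
    d_pitch = 0.5 * (rates_ref[0] + rates_i[0]) * t_i  # (ω0_pitch+ωi_pitch)·t_i/2
    d_rot   = 0.5 * (rates_ref[1] + rates_i[1]) * t_i  # (ω0_yaw+ωi_yaw)·t_i/2
    d_tilt  = 0.5 * (rates_ref[2] + rates_i[2]) * t_i  # (ω0_roll+ωi_roll)·t_i/2
    d_ver = math.tan(d_pitch) * r * H_im / H_sensor
    d_hor = math.tan(d_rot)   * r * W_im / W_sensor
    return d_ver, d_hor, d_tilt
```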
Optionally, the step of calibrating the remaining frames against the reference frame image according to the image transformation parameters, to obtain the target images corresponding to the remaining frames respectively, comprises:
performing alignment transformation on the pixel coordinates of the remaining frames using the image transformation parameters, to obtain the pixel coordinates of the target images corresponding to the remaining frames.
Optionally, the step of performing alignment transformation on the pixel coordinates of the remaining frames using the image transformation parameters, to obtain the pixel coordinates of the corresponding target images, comprises:
using the formulas:
x_i = w_i·cos(α_i) − z_i·sin(α_i) + d_hor^i
y_i = w_i·sin(α_i) + z_i·cos(α_i) + d_ver^i
to calculate the pixel coordinates of the target images corresponding to the remaining frames;
where (x_i, y_i) are the pixel coordinates of the target image corresponding to image i, and (w_i, z_i) are the pixel coordinates of image i.
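The alignment formula above is a rigid rotate-then-translate mapping, reconstructed from the surviving variable definitions. A vectorized sketch:

```python
import numpy as np

def align_coords(w, z, d_hor, d_ver, alpha):
    """Map pixel coordinates (w, z) of image i onto the reference frame:
    rotate by alpha about the image origin, then shift by (d_hor, d_ver)."""
    x = w * np.cos(alpha) - z * np.sin(alpha) + d_hor
    y = w * np.sin(alpha) + z * np.cos(alpha) + d_ver
    return x, y

# Example: align every pixel of a 1080p frame in one call.
zz, ww = np.mgrid[0:1080, 0:1920]
x, y = align_coords(ww, zz, d_hor=3.2, d_ver=-1.5, alpha=0.01)
```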
Optionally, the step of performing image reconstruction on the reference frame image and each target image to obtain the super-resolution image corresponding to the multiple frames comprises:
magnifying the reference frame image and each target image;
and reconstructing the magnified images through a pre-trained target convolutional neural network to obtain the super-resolution image corresponding to the multiple frames, where the target convolutional neural network is a convolutional neural network trained in advance for reconstructing magnified images.
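One way to read this step, as a PyTorch sketch; the bicubic magnification and the stacking of frames as channels before the forward pass are assumptions, not mandated by the patent.

```python
import torch
import torch.nn.functional as F

def reconstruct(frames, net, scale=2):
    """frames: (N, 1, H, W) tensor holding the reference frame and the
    calibrated target frames; net: the pre-trained target CNN."""
    up = F.interpolate(frames, scale_factor=scale, mode="bicubic",
                       align_corners=False)            # magnify each frame
    stack = up.reshape(1, -1, up.shape[-2], up.shape[-1])  # frames as channels
    with torch.no_grad():
        sr = net(stack)                                # one super-resolution frame
    return sr
```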
Optionally, the training of the target convolutional neural network includes:
constructing an initial convolutional neural network comprising a plurality of filters;
obtaining multiple groups of images, each group comprising multiple initial frames, and selecting one frame from each group as the group's true-value image sample;
calibrating the initial frames in each group against the corresponding true-value image sample, and down-sampling the calibrated initial frames to obtain down-sampled initial frames;
magnifying the down-sampled initial frames to obtain target image samples, where the target image samples and the true-value image samples are the image training samples for training the convolutional neural network;
inputting the image training samples into the initial convolutional neural network for training;
and ending training when the average difference between the outputs corresponding to the target image samples and the corresponding true-value image samples is smaller than a preset value, to obtain the target convolutional neural network.
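A compact sketch of that stopping rule. Reading "average difference value" as an L1 loss and using the Adam optimizer are assumptions on the editor's part.

```python
import torch

def train(net, loader, lr=1e-4, preset=1e-3):
    """loader yields (stack, truth): magnified target image samples stacked
    as channels, and the group's true-value image sample."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = torch.nn.L1Loss()
    while True:
        total, batches = 0.0, 0
        for stack, truth in loader:
            out = net(stack)
            loss = loss_fn(out, truth)
            opt.zero_grad()
            loss.backward()
            opt.step()
            total, batches = total + loss.item(), batches + 1
        if total / batches < preset:  # average difference below preset value
            return net
```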
Optionally, the multiple frames of images are images in Bayer format.
In a second aspect, an embodiment of the present invention further provides an image super-resolution reconstruction apparatus, where the apparatus includes:
an image acquisition module, used for obtaining multiple frames of images to be processed and selecting one frame from the multiple frames as a reference frame image;
an image transformation parameter calculation module, used for obtaining image transformation parameters corresponding to each of the remaining frames according to device shake information and device parameters, where the image transformation parameters are: the calibration parameters required for calibrating the positional relation between image pixel points with the reference frame image as the standard;
an image calibration module, used for calibrating the remaining frames against the reference frame image according to the image transformation parameters, to obtain target images corresponding to the remaining frames respectively;
and an image reconstruction module, used for performing image reconstruction on the reference frame image and each target image to obtain a super-resolution image corresponding to the multiple frames.
Optionally, the device shake information includes: a pitch angle variation, a tilt angle variation and a rotation angle variation; the device parameters include: focal length, image resolution, imaging plane height and imaging plane width; and the image transformation parameters include: a vertical transformation parameter, a horizontal transformation parameter and a rotation transformation parameter;
the image transformation parameter calculation module includes:
a first acquisition unit, used for acquiring the pitch angle, rotation angle and tilt angle at which each of the multiple frames was captured;
a first vertical transformation parameter calculation unit, used for calculating the pitch angle variation of each remaining frame from the pitch angle when the reference frame image was captured and the pitch angle when that frame was captured, and calculating the vertical transformation parameter of that frame from the pitch angle variation, the focal length, the number of vertical pixels of the image resolution and the imaging plane height;
a first horizontal transformation parameter calculation unit, used for calculating the rotation angle variation of each remaining frame from the rotation angle when the reference frame image was captured and the rotation angle when that frame was captured, and calculating the horizontal transformation parameter of that frame from the rotation angle variation, the focal length, the number of horizontal pixels of the image resolution and the imaging plane width;
and a first rotation transformation parameter calculation unit, used for calculating the tilt angle variation of each remaining frame from the tilt angle when the reference frame image was captured and the tilt angle when that frame was captured, and calculating the rotation transformation parameter of that frame from the tilt angle variation.
Optionally, the first vertical transformation parameter calculation unit includes:
a first vertical transformation parameter calculation subunit, used for calculating the vertical transformation parameters of the remaining frames using the formula:
d_ver^i = tan(∠A_i − ∠A_0) · r · H_im / H_sensor;
the first horizontal transformation parameter calculation unit includes:
a first horizontal transformation parameter calculation subunit, used for calculating the horizontal transformation parameters of the remaining frames using the formula:
d_hor^i = tan(∠B_i − ∠B_0) · r · W_im / W_sensor;
the first rotation transformation parameter calculation unit includes:
a first rotation transformation parameter calculation subunit, used for calculating the rotation transformation parameters of the remaining frames using the formula α_i = ∠C_i − ∠C_0;
where d_ver^i is the vertical transformation parameter of image i among the remaining frames, d_hor^i is the horizontal transformation parameter of image i, α_i is the rotation transformation parameter of image i, r is the focal length, ∠A_i − ∠A_0 is the pitch angle variation when image i is captured, ∠A_0 is the pitch angle when the reference frame image is captured, ∠A_i is the pitch angle when image i is captured, i ∈ [1, N], N is the number of remaining frames, H_im is the number of vertical pixels of the image resolution, W_im is the number of horizontal pixels of the image resolution, H_sensor is the imaging plane height, W_sensor is the imaging plane width, ∠B_i − ∠B_0 is the rotation angle variation when image i is captured, ∠B_0 is the rotation angle when the reference frame image is captured, ∠B_i is the rotation angle when image i is captured, ∠C_i − ∠C_0 is the tilt angle variation when image i is captured, ∠C_0 is the tilt angle when the reference frame image is captured, and ∠C_i is the tilt angle when image i is captured.
Optionally, the device shake information includes: a pitch angle variation, a tilt angle variation and a rotation angle variation; the device parameters include: focal length, image resolution, imaging plane height and imaging plane width; and the image transformation parameters include: a vertical transformation parameter, a horizontal transformation parameter and a rotation transformation parameter;
the device further comprises: a time difference calculation module, used for calculating, before the image transformation parameters corresponding to the remaining frames are obtained according to the device shake information and the device parameters, the acquisition time difference between the reference frame image and each remaining frame according to the time information of capturing the multiple frames;
the image transformation parameter calculation module includes:
a second acquisition unit, used for acquiring the pitch angular velocity, rotation angular velocity and tilt angular velocity at which each of the multiple frames was captured;
a second vertical transformation parameter calculation unit, used for calculating the pitch angle variation of each remaining frame from the pitch angular velocity when the reference frame image was captured, the pitch angular velocity when that frame was captured, and the acquisition time difference, and calculating the vertical transformation parameter of that frame from the pitch angle variation, the focal length, the number of vertical pixels of the image resolution and the imaging plane height;
a second horizontal transformation parameter calculation unit, used for calculating the rotation angle variation of each remaining frame from the rotation angular velocity when the reference frame image was captured, the rotation angular velocity when that frame was captured, and the acquisition time difference, and calculating the horizontal transformation parameter of that frame from the rotation angle variation, the focal length, the number of horizontal pixels of the image resolution and the imaging plane width;
and a second rotation transformation parameter calculation unit, used for calculating the tilt angle variation of each remaining frame from the tilt angular velocity when the reference frame image was captured, the tilt angular velocity when that frame was captured, and the acquisition time difference, and calculating the rotation transformation parameter of that frame from the tilt angle variation.
Optionally, the second vertical transformation parameter calculation unit includes:
a second vertical transformation parameter calculation subunit, used for calculating the vertical transformation parameters of the remaining frames using the formula:
d_ver^i = tan((ω0_pitch + ωi_pitch)·t_i/2) · r · H_im / H_sensor;
the second horizontal transformation parameter calculation unit includes:
a second horizontal transformation parameter calculation subunit, used for calculating the horizontal transformation parameters of the remaining frames using the formula:
d_hor^i = tan((ω0_yaw + ωi_yaw)·t_i/2) · r · W_im / W_sensor;
the second rotation transformation parameter calculation unit includes:
a second rotation transformation parameter calculation subunit, used for calculating the rotation transformation parameters of the remaining frames using the formula:
α_i = (ω0_roll + ωi_roll)·t_i/2;
where d_ver^i is the vertical transformation parameter of image i among the remaining frames, d_hor^i is the horizontal transformation parameter of image i, α_i is the rotation transformation parameter of image i, r is the focal length, (ω0_pitch + ωi_pitch)·t_i/2 is the pitch angle variation when image i is captured, ω0_pitch is the pitch angular velocity when the reference frame image is captured, ωi_pitch is the pitch angular velocity when image i is captured, i ∈ [1, N], N is the number of remaining frames, t_i is the acquisition time difference between image i and the reference frame image, H_im is the number of vertical pixels of the image resolution, W_im is the number of horizontal pixels of the image resolution, H_sensor is the imaging plane height, W_sensor is the imaging plane width, (ω0_yaw + ωi_yaw)·t_i/2 is the rotation angle variation when image i is captured, ω0_yaw is the rotation angular velocity when the reference frame image is captured, ωi_yaw is the rotation angular velocity when image i is captured, (ω0_roll + ωi_roll)·t_i/2 is the tilt angle variation when image i is captured, ω0_roll is the tilt angular velocity when the reference frame image is captured, and ωi_roll is the tilt angular velocity when image i is captured.
Optionally, the image calibration module includes:
an alignment transformation unit, used for performing alignment transformation on the pixel coordinates of the remaining frames using the image transformation parameters, to obtain the pixel coordinates of the target images corresponding to the remaining frames.
Optionally, the alignment transformation unit includes:
a pixel coordinate calculation subunit, used for calculating the pixel coordinates of the target images corresponding to the remaining frames using the formulas:
x_i = w_i·cos(α_i) − z_i·sin(α_i) + d_hor^i
y_i = w_i·sin(α_i) + z_i·cos(α_i) + d_ver^i;
where (x_i, y_i) are the pixel coordinates of the target image corresponding to image i, and (w_i, z_i) are the pixel coordinates of image i.
Optionally, the image reconstruction module includes:
a magnification processing unit, used for magnifying the reference frame image and each target image;
and an image reconstruction unit, used for reconstructing the magnified images through a target convolutional neural network trained in advance by the model construction module, to obtain the super-resolution image corresponding to the multiple frames, where the target convolutional neural network is a convolutional neural network trained in advance for reconstructing magnified images.
Optionally, the model construction module includes:
an initial convolutional neural network construction unit, used for constructing an initial convolutional neural network comprising a plurality of filters;
an initial image acquisition unit, used for obtaining multiple groups of images, each group comprising multiple initial frames, and selecting one frame from the multiple initial frames as the group's true-value image sample;
an initial image calibration unit, used for calibrating the initial frames in each group against the corresponding true-value image sample, and down-sampling the calibrated initial frames to obtain down-sampled initial frames;
an initial image magnification unit, used for magnifying the down-sampled initial frames to obtain target image samples, where the target image samples and the true-value image samples are the image training samples for training the convolutional neural network;
a sample training unit, used for inputting the image training samples into the initial convolutional neural network for training;
and a target convolutional neural network obtaining unit, used for ending training when the average difference between the outputs corresponding to the target image samples and the corresponding true-value image samples is smaller than a preset value, to obtain the target convolutional neural network.
Optionally, the multiple frames of images are images in Bayer format.
In a third aspect, an embodiment of the present invention further provides an image super-resolution reconstruction system, the system comprising an image acquisition device and an image reconstruction device, wherein:
the image acquisition device is used for capturing multiple frames of images to be processed and sending them to the image reconstruction device;
the image reconstruction device is used for receiving the multiple frames of images to be processed sent by the image acquisition device, and selecting one frame from the multiple frames as a reference frame image; obtaining image transformation parameters corresponding to each of the remaining frames according to device shake information and device parameters; calibrating each of the remaining frames against the reference frame image according to the image transformation parameters, to obtain target images corresponding to the remaining frames respectively; and performing image reconstruction on the reference frame image and each target image to obtain a super-resolution image corresponding to the multiple frames, where the image transformation parameters are: the calibration parameters required for calibrating the positional relation between image pixel points with the reference frame image as the standard.
Optionally, the device shake information includes: a pitch angle variation, a tilt angle variation and a rotation angle variation; the device parameters include: focal length, image resolution, imaging plane height and imaging plane width; and the image transformation parameters include: a vertical transformation parameter, a horizontal transformation parameter and a rotation transformation parameter;
the image reconstruction device is specifically used for acquiring the pitch angle, rotation angle and tilt angle at which each of the multiple frames was captured; calculating the pitch angle variation of each remaining frame from the pitch angle when the reference frame image was captured and the pitch angle when that frame was captured, and calculating the vertical transformation parameter of that frame from the pitch angle variation, the focal length, the number of vertical pixels of the image resolution and the imaging plane height; calculating the rotation angle variation of each remaining frame from the rotation angle when the reference frame image was captured and the rotation angle when that frame was captured, and calculating the horizontal transformation parameter of that frame from the rotation angle variation, the focal length, the number of horizontal pixels of the image resolution and the imaging plane width; and calculating the tilt angle variation of each remaining frame from the tilt angle when the reference frame image was captured and the tilt angle when that frame was captured, and calculating the rotation transformation parameter of that frame from the tilt angle variation.
Optionally, the image reconstruction device is specifically used for calculating the vertical transformation parameters of the remaining frames using the formula:
d_ver^i = tan(∠A_i − ∠A_0) · r · H_im / H_sensor;
calculating the horizontal transformation parameters of the remaining frames using the formula:
d_hor^i = tan(∠B_i − ∠B_0) · r · W_im / W_sensor;
and calculating the rotation transformation parameters of the remaining frames using the formula α_i = ∠C_i − ∠C_0;
where d_ver^i is the vertical transformation parameter of image i among the remaining frames, d_hor^i is the horizontal transformation parameter of image i, α_i is the rotation transformation parameter of image i, r is the focal length, ∠A_i − ∠A_0 is the pitch angle variation when image i is captured, ∠A_0 is the pitch angle when the reference frame image is captured, ∠A_i is the pitch angle when image i is captured, i ∈ [1, N], N is the number of remaining frames, H_im is the number of vertical pixels of the image resolution, W_im is the number of horizontal pixels of the image resolution, H_sensor is the imaging plane height, W_sensor is the imaging plane width, ∠B_i − ∠B_0 is the rotation angle variation when image i is captured, ∠B_0 is the rotation angle when the reference frame image is captured, ∠B_i is the rotation angle when image i is captured, ∠C_i − ∠C_0 is the tilt angle variation when image i is captured, ∠C_0 is the tilt angle when the reference frame image is captured, and ∠C_i is the tilt angle when image i is captured.
Optionally, the device shake information includes: a pitch angle variation, a tilt angle variation and a rotation angle variation; the device parameters include: focal length, image resolution, imaging plane height and imaging plane width; and the image transformation parameters include: a vertical transformation parameter, a horizontal transformation parameter and a rotation transformation parameter;
the image reconstruction device is specifically used for calculating the acquisition time difference between the reference frame image and each remaining frame according to the time information of capturing the multiple frames; acquiring the pitch angular velocity, rotation angular velocity and tilt angular velocity at which each of the multiple frames was captured; calculating the pitch angle variation of each remaining frame from the pitch angular velocity when the reference frame image was captured, the pitch angular velocity when that frame was captured, and the acquisition time difference, and calculating the vertical transformation parameter of that frame from the pitch angle variation, the focal length, the number of vertical pixels of the image resolution and the imaging plane height; calculating the rotation angle variation of each remaining frame from the rotation angular velocity when the reference frame image was captured, the rotation angular velocity when that frame was captured, and the acquisition time difference, and calculating the horizontal transformation parameter of that frame from the rotation angle variation, the focal length, the number of horizontal pixels of the image resolution and the imaging plane width; and calculating the tilt angle variation of each remaining frame from the tilt angular velocity when the reference frame image was captured, the tilt angular velocity when that frame was captured, and the acquisition time difference, and calculating the rotation transformation parameter of that frame from the tilt angle variation.
Optionally, the image reconstruction device is specifically used for calculating the vertical transformation parameters of the remaining frames using the formula:
d_ver^i = tan((ω0_pitch + ωi_pitch)·t_i/2) · r · H_im / H_sensor;
calculating the horizontal transformation parameters of the remaining frames using the formula:
d_hor^i = tan((ω0_yaw + ωi_yaw)·t_i/2) · r · W_im / W_sensor;
and calculating the rotation transformation parameters of the remaining frames using the formula:
α_i = (ω0_roll + ωi_roll)·t_i/2;
where d_ver^i is the vertical transformation parameter of image i among the remaining frames, d_hor^i is the horizontal transformation parameter of image i, α_i is the rotation transformation parameter of image i, r is the focal length, (ω0_pitch + ωi_pitch)·t_i/2 is the pitch angle variation when image i is captured, ω0_pitch is the pitch angular velocity when the reference frame image is captured, ωi_pitch is the pitch angular velocity when image i is captured, i ∈ [1, N], N is the number of remaining frames, t_i is the acquisition time difference between image i and the reference frame image, H_im is the number of vertical pixels of the image resolution, W_im is the number of horizontal pixels of the image resolution, H_sensor is the imaging plane height, W_sensor is the imaging plane width, (ω0_yaw + ωi_yaw)·t_i/2 is the rotation angle variation when image i is captured, ω0_yaw is the rotation angular velocity when the reference frame image is captured, ωi_yaw is the rotation angular velocity when image i is captured, (ω0_roll + ωi_roll)·t_i/2 is the tilt angle variation when image i is captured, ω0_roll is the tilt angular velocity when the reference frame image is captured, and ωi_roll is the tilt angular velocity when image i is captured.
Optionally, the image reconstruction device is specifically used for performing alignment transformation on the pixel coordinates of the remaining frames using the image transformation parameters, to obtain the pixel coordinates of the target images corresponding to the remaining frames.
Optionally, the image reconstruction device is specifically used for calculating the pixel coordinates of the target images corresponding to the remaining frames using the formulas:
x_i = w_i·cos(α_i) − z_i·sin(α_i) + d_hor^i
y_i = w_i·sin(α_i) + z_i·cos(α_i) + d_ver^i;
where (x_i, y_i) are the pixel coordinates of the target image corresponding to image i, and (w_i, z_i) are the pixel coordinates of image i.
Optionally, the image reconstruction device is specifically used for magnifying the reference frame image and each target image, and reconstructing the magnified images through a pre-trained target convolutional neural network to obtain the super-resolution image corresponding to the multiple frames, where the target convolutional neural network is a convolutional neural network trained in advance for reconstructing magnified images.
Optionally, the image reconstruction device is specifically used for constructing an initial convolutional neural network comprising a plurality of filters; obtaining multiple groups of images, each group comprising multiple initial frames, and selecting one frame from the multiple initial frames as the group's true-value image sample; calibrating the initial frames in each group against the corresponding true-value image sample, and down-sampling the calibrated initial frames to obtain down-sampled initial frames; magnifying the down-sampled initial frames to obtain target image samples; inputting the image training samples into the initial convolutional neural network for training; and ending training when the average difference between the outputs corresponding to the target image samples and the corresponding true-value image samples is smaller than a preset value, to obtain the target convolutional neural network, where the target image samples and the true-value image samples are the image training samples for training the convolutional neural network.
Optionally, the multiple frames of images are images in JPG format.
According to the solutions provided by the embodiments of the invention, multiple frames of images to be processed are first obtained, and one frame is selected from them as the reference frame image; image transformation parameters corresponding to the remaining frames are obtained according to the device shake information and the device parameters; the remaining frames are then calibrated against the reference frame image according to the image transformation parameters, yielding the target images corresponding to the remaining frames; and finally the reference frame image and each target image are reconstructed into the super-resolution image corresponding to the multiple frames, completing the image super-resolution reconstruction. These solutions require no complex image registration steps such as feature point extraction: the image transformation parameters of the remaining frames are obtained from the device shake information, the device parameters and, where applicable, the acquisition time difference, and the images are calibrated directly with those parameters, so the calculation process is simple, the amount of calculation is small, and the time required for image calibration is reduced.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of an image super-resolution reconstruction method according to an embodiment of the present invention;
fig. 2 is a schematic three-dimensional view of an image capturing device according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a target convolutional neural network according to an embodiment of the present invention;
fig. 4(a) is a schematic diagram of photographing when an image capturing device provided by an embodiment of the present invention captures a reference frame image;
fig. 4(b) is a schematic shooting diagram of the image acquisition apparatus according to the embodiment of the present invention when acquiring an image i;
fig. 5(a) is a schematic top-view shooting diagram of an image i acquired by an image acquisition device according to an embodiment of the present invention;
fig. 5(b) is a schematic top view shooting diagram of an image capturing apparatus according to an embodiment of the present invention when capturing a reference frame image;
fig. 6(a) is a schematic rear view shooting diagram of an image i collected by the image collecting apparatus according to the embodiment of the present invention;
fig. 6(b) is a schematic rear view photographing diagram of an image capturing device according to an embodiment of the present invention when capturing a reference frame image;
fig. 7 is a schematic diagram of an image splitting process according to an embodiment of the present invention;
FIG. 8 is a flowchart of a target convolutional neural network training method according to an embodiment of the present invention;
FIG. 9 is a schematic structural diagram of an image super-resolution reconstruction apparatus according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of an image super-resolution reconstruction system according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings of those embodiments. Obviously, the described embodiments are only a part, not all, of the embodiments of the present invention. All other embodiments derived by those skilled in the art from the embodiments given herein without creative effort fall within the protection scope of the present invention.
To reduce the complexity of the image super-resolution reconstruction process, reduce the amount of calculation and improve reconstruction efficiency, embodiments of the present invention provide an image super-resolution reconstruction method, device and system.
First, the image super-resolution reconstruction method provided by an embodiment of the present invention is described.
As shown in Fig. 1, the image super-resolution reconstruction method includes the following steps:
First, it should be noted that the image super-resolution reconstruction method provided by the embodiments of the present invention can be applied to an image acquisition device. For scenarios such as surveillance and detection that need super-resolution images in real time, the image acquisition device can obtain images in real time and perform super-resolution reconstruction on them; for scenarios that do not require real-time processing, the device may store the captured images locally and perform super-resolution reconstruction when needed, or fetch the corresponding images when reconstruction is required, either of which is reasonable. Of course, the method of the embodiments of the present invention may also be applied to a server or other electronic device with data interaction and processing capability, which is not limited herein.
S101, obtaining a multi-frame image to be processed, and selecting one frame of image from the multi-frame image as a reference frame image;
When a super-resolution image is needed, the image acquisition device may perform super-resolution reconstruction, for which it first obtains multiple frames of images to be processed. It can be understood that these are the frames used for reconstructing the super-resolution image, generally consecutive frames captured by the device in the same scene, typically 3, 5 or 7 frames, which is not specifically limited herein.
The image acquisition device may obtain the multiple frames in any existing manner. For example, in one implementation it may obtain them in real time; in another, it may store the captured frames locally and retrieve them when the super-resolution image needs to be reconstructed.
After obtaining the multiple frames, the image acquisition device selects one frame from them as the reference frame image. It may select any frame, such as the first, the last or an intermediate frame. Alternatively, a frame may be selected according to a preset rule, which may be, for example: select the intermediate frame as the reference frame image, or select the second frame, and so on, which is not specifically limited herein.
It should be noted that, in the embodiments of the present invention, the multiple frames of images to be processed obtained by the image acquisition device may be, but are not limited to, images in Bayer format. As those skilled in the art will understand, a Bayer-format image is the raw image data captured by the device and contains the most complete image information, so a better super-resolution image can be reconstructed from it without any data format conversion.
S102, obtaining image transformation parameters corresponding to the remaining frames according to the device shake information and the device parameters;
It should be noted that obtaining the image transformation parameters corresponding to the remaining frames according to the device shake information and the device parameters means: acquiring the device shake information and the device parameters, and then computing from them the image transformation parameters corresponding to the remaining frames.
An image acquisition device generally shakes while capturing the frames; in fields such as drone reconnaissance in particular, the drone is affected by air currents and other factors during flight, so shake appears during capture. Therefore, to facilitate the subsequent calibration of the captured frames, in one embodiment the image acquisition device also acquires, while obtaining the frames, the device shake information and the device parameters in effect when each frame was captured. In another embodiment, the device may acquire its shake information and parameters in real time, thereby obtaining the device shake information and device parameters for each of the multiple frames.
The device shake information may be acquired, for example, by mounting a gyroscope on the image acquisition device. The device parameters are inherent attributes of the image acquisition device and are known once its model is determined.
By way of example, the device shake information may include a pitch angle variation, a tilt angle variation and a rotation angle variation, and the device parameters may include the focal length, image resolution, imaging plane height and imaging plane width of the image acquisition device. Fig. 2 is a three-dimensional schematic view of an image acquisition device; for the convenience of the subsequent calibration process, the following definitions may be adopted: when the multiple frames are captured, the angle produced by rotation of the device about the Y axis is the pitch angle; the angle produced by rotation about the X axis is the tilt angle; and the angle produced by rotation about the Z axis is the rotation angle.
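To make the axis convention concrete, a small record type such as the following could hold one gyroscope reading per frame; the type and field names are illustrative, not from the patent.

```python
from dataclasses import dataclass

@dataclass
class ShakeSample:
    """One gyroscope reading, following the axis convention of Fig. 2."""
    pitch: float     # rotation about the Y axis, radians
    tilt: float      # rotation about the X axis, radians
    rotation: float  # rotation about the Z axis, radians
```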
After the device shake information and the device parameters are obtained, the image acquisition device can obtain the image transformation parameters corresponding to the remaining frames of images from them. The image transformation parameters are the calibration parameters required for calibrating the positional relationship between image pixel points with the reference frame image as the standard. For the sake of clear layout and a clear scheme, a specific implementation manner of calculating the image transformation parameters corresponding to the remaining frames of images is described below.
S103, according to the image transformation parameters and with the reference frame image as a reference, performing calibration processing on the remaining frames of images to obtain target images corresponding to the remaining frames of images respectively;
after obtaining the image transformation parameters, the image acquisition device can use them to perform calibration processing on the remaining frames of images with the reference frame image as the reference, obtaining the target images corresponding to the remaining frames of images. The purpose is positional registration: the obtained target images are aligned with the reference frame image, which facilitates the subsequent steps.
In an embodiment, the image acquisition device may perform alignment transformation processing on the pixel coordinates of the other frames of images by using the image transformation parameters to obtain the pixel coordinates of the target images corresponding to the other frames of images, so as to obtain the target images corresponding to the other frames of images. For the sake of clear layout and clear scheme, a specific implementation manner of calculating the target images corresponding to the other frames of images is described in the following.
And S104, carrying out image reconstruction on the reference frame image and each frame of target image to obtain a super-resolution image corresponding to the multi-frame image.
After the target images corresponding to the remaining frames of images are obtained, the image acquisition device can perform image reconstruction processing on the reference frame image and each frame of target image to obtain a super-resolution image, completing the image super-resolution reconstruction.
As an implementation manner of the embodiment of the present invention, a manner of performing image reconstruction processing on the reference frame image and each frame target image by the image acquisition device may include:
amplifying the reference frame image and each target image; and carrying out image reconstruction on the amplified image through a target convolutional neural network obtained by pre-training to obtain a super-resolution image corresponding to the multi-frame image.
The target convolutional neural network may be a convolutional neural network trained in advance for reconstructing the amplified images. It may be composed of a plurality of filter banks; one possible structure is shown in fig. 3, where the target convolutional neural network is a three-layer convolutional neural network composed of three layers of filters. The amplified reference frame image and each target image are input into the target convolutional neural network, which reconstructs them and outputs one frame of super-resolution image.
Taking the processing of the first layer target convolutional neural network on the image as an example, the specific processing principle is as follows:
$F_k(a, b) = \mathrm{conv}(I_{loc}(a, b),\ \mathrm{filter}_k)$
the formula of the convolution operation is as follows:
$\mathrm{conv}(I_{loc}(a, b),\ \mathrm{filter}_k) = \sum_{m=-c}^{c}\sum_{n=-c}^{c} I(a+m,\ b+n)\cdot \mathrm{filter}_k(m,\ n)$
wherein, Fk(a, B) represents the gray value of the pixel point coordinate (a, B) in the k-th characteristic image, conv (A, B) represents the convolution operation of A and B, Iloc(a, b) represents a local area of the input image centered on the coordinates (a, b), the size of the local area being the same as the size of each filter, filterkRepresenting the k-th filter kernel. The parameter c in the formula of the convolution operation is calculated by the parameter d in the size dxd of the filter, specifically, the parameter c is
Figure BDA0001260923250000192
And rounding down to obtain.
If the first layer filter bank has 1 × 64 filters, the number of output layers of the first layer target convolutional neural network is 64, the number of input image layers is 1, and the size of the filter is 9 × 9, the formula of the corresponding convolution operation can be as follows:
$F_k(a, b) = \sum_{m=-4}^{4}\sum_{n=-4}^{4} I(a+m,\ b+n)\cdot \mathrm{filter}_k(m,\ n),\quad k \in [1, 64]$
wherein, the parameter 4 in the above formula is obtained by rounding down 9/2.
The 64 feature images output by the first layer of the target convolutional neural network can thus be obtained through the above two formulas. These 64 feature images then serve as the input of the second layer of the target convolutional neural network, and the image reconstruction processing continues.
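As a concrete illustration of the two formulas above, the following is a minimal numpy sketch of the first-layer feature extraction; zero-padding at the image border is an assumption, since boundary handling is not specified here:

```python
# A minimal sketch of F_k(a, b) = conv(I_loc(a, b), filter_k) with
# c = floor(d / 2). Zero-padding at the image border is an assumption.
import numpy as np

def first_layer_features(image, filters):
    """image: 2-D input array; filters: array of shape (K, d, d), d odd."""
    k_count, d, _ = filters.shape
    c = d // 2  # rounding down, e.g. 9 // 2 == 4
    padded = np.pad(image, c, mode="constant")
    h, w = image.shape
    features = np.empty((k_count, h, w))
    for k in range(k_count):
        for a in range(h):
            for b in range(w):
                local = padded[a:a + d, b:b + d]  # I_loc(a, b)
                features[k, a, b] = np.sum(local * filters[k])
    return features
```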
Of course, the image reconstruction processing may also be performed by using other types of convolutional neural networks, as long as the purpose of reconstructing the reference frame image and each target image to obtain the super-resolution image can be achieved, which is not specifically limited herein.
The above-mentioned amplification processing is an image processing method commonly used in the art; a person skilled in the art can choose it according to factors such as the image format, and it is not specifically limited herein. It may be, for example, bicubic amplification, bilinear interpolation amplification, spline interpolation amplification or wavelet-based amplification.
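For instance, a minimal sketch of such an amplification step using OpenCV's bicubic interpolation might look as follows (the 2x factor is an illustrative assumption):

```python
# A minimal sketch of the amplification step using bicubic interpolation;
# any of the other methods named above could be substituted.
import cv2

def amplify(image, factor=2):
    return cv2.resize(image, None, fx=factor, fy=factor,
                      interpolation=cv2.INTER_CUBIC)
```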
As can be seen, in the scheme provided in the embodiment of the present invention, the image acquisition device first obtains multiple frames of images to be processed and selects one frame from them as the reference frame image. It then obtains the image transformation parameters corresponding to the remaining frames of images according to the device shake information and the device parameters, performs calibration processing on the remaining frames of images with the reference frame image as the reference according to the image transformation parameters to obtain the target images corresponding to the remaining frames of images, and finally performs image reconstruction on the reference frame image and each frame of target image to obtain the super-resolution image corresponding to the multiple frames of images, completing the image super-resolution reconstruction. In this scheme, complex image registration processes such as feature point extraction are not needed: the image transformation parameters are obtained only from the device shake information, the device parameters and the acquisition time difference, and the images are then calibrated against the reference frame image according to these parameters. The calculation process is simple, the amount of calculation is small, and the time required for image calibration is reduced.
As an implementation manner of the embodiment of the present invention, for the case where the device shake information includes a pitch angle variation, a tilt angle variation and a rotation angle variation, the device parameters include the focal length, the image resolution, the imaging plane height and the imaging plane width, and the image transformation parameters include a vertical transformation parameter, a horizontal transformation parameter and a rotation transformation parameter, the step of obtaining the image transformation parameters corresponding to the remaining frames of images according to the device shake information and the device parameters may include:
acquiring a pitch angle, a rotation angle and an inclination angle when the multi-frame image is acquired;
calculating the pitch angle variation when the other frames of images are acquired according to the pitch angle when the reference frame image is acquired and the pitch angles when the other frames of images are acquired, and calculating the vertical transformation parameters of the other frames of images according to the pitch angle variation, the focal length, the vertical pixel number of the image resolution and the imaging surface height;
calculating a rotation angle variation amount when the other frames of images are acquired according to a rotation angle when the reference frame image is acquired and rotation angles when the other frames of images are acquired, and calculating a horizontal conversion parameter of the other frames of images according to the rotation angle variation amount, the focal length, the horizontal pixel number of the image resolution and the imaging plane width;
and calculating the inclination angle variation quantity when the rest of the frame images are acquired according to the inclination angle when the reference frame image is acquired and the inclination angles when the rest of the frame images are acquired, and calculating the rotation transformation parameters of the rest of the frame images according to the inclination angle variation quantity.
In one embodiment, the vertical transformation parameter, the horizontal transformation parameter, and the rotation transformation parameter corresponding to each of the remaining frames of images may be calculated by using the following formulas:
$d_{ver,i} = \dfrac{r \cdot \tan(\angle A_i - \angle A_0) \cdot H_{im}}{H_{sensor}}$

$d_{hor,i} = \dfrac{r \cdot \tan(\angle B_i - \angle B_0) \cdot W_{im}}{W_{sensor}}$

$\alpha_i = \angle C_i - \angle C_0$

wherein $d_{ver,i}$ is the vertical transformation parameter of image i among the remaining frames of images, $d_{hor,i}$ is the horizontal transformation parameter of image i, $\alpha_i$ is the rotation transformation parameter of image i, r is the focal length, $\angle A_i - \angle A_0$ is the pitch angle variation when image i is acquired, $\angle A_0$ is the pitch angle when the reference frame image is acquired, $\angle A_i$ is the pitch angle when image i is acquired, $i \in [1, N]$, N is the number of remaining frames, $H_{im}$ is the number of vertical pixels of the image resolution, $W_{im}$ is the number of horizontal pixels of the image resolution, $H_{sensor}$ is the imaging plane height, $W_{sensor}$ is the imaging plane width, $\angle B_i - \angle B_0$ is the rotation angle variation when image i is acquired, $\angle B_0$ is the rotation angle when the reference frame image is acquired, $\angle B_i$ is the rotation angle when image i is acquired, $\angle C_i - \angle C_0$ is the tilt angle variation when image i is acquired, $\angle C_0$ is the tilt angle when the reference frame image is acquired, and $\angle C_i$ is the tilt angle when image i is acquired.
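A minimal Python sketch of the three formulas above (argument names are illustrative assumptions; angles are in radians):

```python
# A minimal sketch of the vertical, horizontal and rotation transformation
# parameter formulas above; argument names are illustrative assumptions.
import math

def transform_params(angles_ref, angles_i, r, h_im, w_im, h_sensor, w_sensor):
    """angles_ref / angles_i: (pitch, rotation, tilt) angles, in radians,
    when the reference frame image and image i were acquired."""
    a0, b0, c0 = angles_ref
    ai, bi, ci = angles_i
    d_ver = r * math.tan(ai - a0) * h_im / h_sensor  # vertical parameter
    d_hor = r * math.tan(bi - b0) * w_im / w_sensor  # horizontal parameter
    alpha = ci - c0                                  # rotation parameter
    return d_ver, d_hor, alpha
```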
Taking the calculation of the vertical transformation parameter $d_{ver,i}$ of image i among the remaining frames of images as an example, the manner of calculating the image transformation parameters is described below.

It should be noted that the vertical transformation parameter $d_{ver,i}$ denotes the relative offset of image i in the vertical direction, i.e. in the direction of the vertical pixel number of the image resolution of image i, relative to the moment the reference frame image was acquired.
As shown in fig. 4(a) and 4(b), when the image acquisition device 410 shakes, it is offset in the vertical direction relative to the reference frame image, and the length of PM in fig. 4(b) corresponds to the actually acquired image i. The vertical transformation parameter $d_{ver,i}$ is then:

$d_{ver,i} = \dfrac{PM \cdot H_{im}}{H_{sensor}}$

where $PM = OM \cdot \tan(\angle POM)$ and OM is the focal length r of the image acquisition device 410, which gives:

$d_{ver,i} = \dfrac{r \cdot \tan(\angle POM) \cdot H_{im}}{H_{sensor}}$
It can be understood that the angular change of the image acquisition device 410 rotating about the Y axis shown in fig. 2 is $\angle POM$, i.e. the pitch angle variation, and the angle between the optical axis of the image acquisition device 410 and the direction of gravity at any moment, i.e. the pitch angle, can be obtained through a sensor. As shown in fig. 4(b), when image i and the reference frame image are acquired, the optical axis of the image acquisition device 410 is ON and OM respectively, and the angles between the optical axis and the direction of gravity are $\angle GNO$ and $\angle GMO$ respectively, so that $\angle POM = \angle GNO - \angle GMO$.
In summary, the vertical transformation parameter $d_{ver,i}$ can also be calculated by the following formula:

$d_{ver,i} = \dfrac{r \cdot \tan(\angle GNO - \angle GMO) \cdot H_{im}}{H_{sensor}}$
To simplify the expression of the formula, $\angle A_i$ may represent the pitch angle when image i is acquired and $\angle A_0$ the pitch angle when the reference frame image is acquired, so that $\angle A_i - \angle A_0$ is the pitch angle variation of image i relative to the reference frame image. Then:

$d_{ver,i} = \dfrac{r \cdot \tan(\angle A_i - \angle A_0) \cdot H_{im}}{H_{sensor}}$
horizontal transformation parameter d for image ihori, which refers to the relative offset in the horizontal direction, i.e., the direction of the number of horizontal pixels of the image resolution of the image i, when the image acquisition device acquires the image i. Horizontal transformation parameter d of image i in each of the remaining frame imageshori calculation mode and vertical transformation parameter dveri is calculated in a similar manner.
The angular change of the image acquisition device 410 rotating about the Z axis shown in fig. 2, i.e. the rotation angle variation, can be calculated by obtaining, through a sensor, the angle between the optical axis of the image acquisition device 410 and a specific direction at any moment; the specific direction may be the geomagnetic field direction (from south to north). Fig. 5 is a top view of the image acquisition device 410. When image i and the reference frame image are acquired, the angles between the optical axis of the image acquisition device 410 and the geomagnetic field direction are $\angle B_i$ and $\angle B_0$ respectively, i.e. the rotation angles at those two moments, so the rotation angle variation of image i relative to the reference frame image is $\angle B_i - \angle B_0$.

One then obtains:

$d_{hor,i} = \dfrac{r \cdot \tan(\angle B_i - \angle B_0) \cdot W_{im}}{W_{sensor}}$
rotation transformation parameter alpha for image iiIt is indicated that when the image acquisition device acquires the image i, the relative rotation angle of the imaging plane in the three-dimensional space, that is, the angle change around the X-axis shown in fig. 2, that is, the inclination angle change amount. Then, the optical axis of the image acquisition device and a specific plane at any moment can be obtained through the sensorThe angle is calculated and the particular direction may be a horizontal plane. FIG. 6 shows a rear view of the image capturing device, wherein when capturing an image i and a reference frame image, the included angles between the optical axis of the image capturing device 410 and the horizontal plane are respectively ≈ CiAnd C0Then the tilt angle variation of the image capturing device 410 is: is less than Ci-∠C0
It will be appreciated that tilting of the image acquisition device will cause a rotation of the image, i.e. a rotation of the transformation parameter αi=∠Ci-∠C0
As another implementation manner of the embodiment of the present invention, likewise for the case where the device shake information includes a pitch angle variation, a tilt angle variation and a rotation angle variation, the device parameters include the focal length, the image resolution, the imaging plane height and the imaging plane width, and the image transformation parameters include a vertical transformation parameter, a horizontal transformation parameter and a rotation transformation parameter, before the step of obtaining the image transformation parameters corresponding to the remaining frames of images according to the device shake information and the device parameters, the method may further include:
calculating the acquisition time difference between the reference frame image and each of the other frames of images according to the time information for acquiring the multiple frames of images;
After the reference frame image is determined, the image acquisition device can calculate the acquisition time difference between the reference frame image and each of the remaining frames of images according to the time information of acquiring the multiple frames of images. In one implementation, the time information may be the time interval at which images are acquired, and the image acquisition device calculates the acquisition time difference from this interval. For example, if the image acquisition device acquires three frames of images in total at an interval of 1.5 milliseconds and the second frame is determined as the reference frame image, then the acquisition time difference between the reference frame image and each of the other two images is 1.5 milliseconds.
Alternatively, the time information may be the acquisition time of each frame of image, from which the image acquisition device calculates the acquisition time difference between the reference frame image and each of the remaining frames of images. For example, if the image acquisition device obtains five frames of images, the first frame is determined as the reference frame image, and the reference frame image and the second, third, fourth and fifth frame images are acquired at 9:10:22.033, 9:10:22.034, 9:10:22.035, 9:10:22.036 and 9:10:22.037 respectively, then the acquisition time differences between the reference frame image and the remaining four images are obviously 1 ms, 2 ms, 3 ms and 4 ms respectively.
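A minimal sketch of the second example above, with the capture times expressed in milliseconds:

```python
# A minimal sketch of the acquisition-time-difference calculation; the
# timestamps are the millisecond parts from the example above.
ref_index = 0                          # the first frame is the reference
timestamps_ms = [33, 34, 35, 36, 37]   # capture times of the five frames
time_diffs = [t - timestamps_ms[ref_index]
              for k, t in enumerate(timestamps_ms) if k != ref_index]
print(time_diffs)  # -> [1, 2, 3, 4], matching the example
```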
Correspondingly, the step of obtaining and obtaining the image transformation parameters corresponding to the other frames of images according to the device shaking information and the device parameters may include:
acquiring a pitch angle angular velocity, a rotation angle angular velocity and an inclination angle angular velocity when the multi-frame image is acquired;
calculating the pitch angle variable quantity when the other frames of images are acquired according to the pitch angle angular speed when the reference frame image is acquired, the pitch angle angular speed when the other frames of images are acquired and the acquisition time difference, and calculating the vertical transformation parameters of the other frames of images according to the pitch angle variable quantity, the focal length, the number of vertical pixels of image resolution, the imaging plane height and the acquisition time difference between the other frames of images and the reference frame image;
calculating a rotation angle variation amount when the remaining frames of images are acquired, based on a rotation angle angular velocity when the reference frame of images are acquired, a rotation angle angular velocity when the remaining frames of images are acquired, and the acquisition time difference, and calculating a horizontal conversion parameter of the remaining frames of images, based on the rotation angle variation amount, the focal length, the number of horizontal pixels of image resolution, the imaging surface width, and the acquisition time difference between the remaining frames of images and the reference frame of images;
and calculating the inclination angle variation quantity when the other frames of images are acquired according to the inclination angle angular speed when the reference frame image is acquired, the inclination angle angular speed when the other frames of images are acquired and the acquisition time difference, and calculating the rotation transformation parameters of the other frames of images according to the inclination angle variation quantity.
In one embodiment, the vertical transformation parameter, the horizontal transformation parameter, and the rotation transformation parameter corresponding to each of the remaining frames of images may be calculated by using the following formulas:
$d_{ver,i} = \dfrac{r \cdot \tan\!\left(\dfrac{(\omega^{0}_{pitch} + \omega^{i}_{pitch})\, t_i}{2}\right) \cdot H_{im}}{H_{sensor}}$

$d_{hor,i} = \dfrac{r \cdot \tan\!\left(\dfrac{(\omega^{0}_{yaw} + \omega^{i}_{yaw})\, t_i}{2}\right) \cdot W_{im}}{W_{sensor}}$

$\alpha_i = \dfrac{(\omega^{0}_{roll} + \omega^{i}_{roll})\, t_i}{2}$
wherein $d_{ver,i}$ is the vertical transformation parameter of image i among the remaining frames of images, $d_{hor,i}$ is the horizontal transformation parameter of image i, $\alpha_i$ is the rotation transformation parameter of image i, r is the focal length, $\frac{(\omega^{0}_{pitch} + \omega^{i}_{pitch})\, t_i}{2}$ is the pitch angle variation when image i is acquired, $\omega^{0}_{pitch}$ is the pitch angular velocity when the reference frame image is acquired, $\omega^{i}_{pitch}$ is the pitch angular velocity when image i is acquired, $i \in [1, N]$, N is the number of remaining frames, $t_i$ is the acquisition time difference between image i and the reference frame image, $H_{im}$ is the number of vertical pixels of the image resolution, $W_{im}$ is the number of horizontal pixels of the image resolution, $H_{sensor}$ is the imaging plane height, $W_{sensor}$ is the imaging plane width, $\frac{(\omega^{0}_{yaw} + \omega^{i}_{yaw})\, t_i}{2}$ is the rotation angle variation when image i is acquired, $\omega^{0}_{yaw}$ is the rotation angular velocity when the reference frame image is acquired, $\omega^{i}_{yaw}$ is the rotation angular velocity when image i is acquired, $\frac{(\omega^{0}_{roll} + \omega^{i}_{roll})\, t_i}{2}$ is the tilt angle variation when image i is acquired, $\omega^{0}_{roll}$ is the tilt angular velocity when the reference frame image is acquired, and $\omega^{i}_{roll}$ is the tilt angular velocity when image i is acquired.
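A minimal Python sketch of these angular-velocity formulas, under the same uniform-acceleration assumption (argument names are illustrative):

```python
# A minimal sketch: each angle change is the mean of the two angular
# velocities multiplied by the acquisition time difference t_i, per the
# uniform-acceleration assumption above.
import math

def transform_params_from_rates(w_ref, w_i, t_i, r,
                                h_im, w_im, h_sensor, w_sensor):
    """w_ref / w_i: (pitch, yaw, roll) angular velocities in rad/s when
    the reference frame image and image i were acquired."""
    pitch = (w_ref[0] + w_i[0]) * t_i / 2  # pitch angle variation
    yaw = (w_ref[1] + w_i[1]) * t_i / 2    # rotation angle variation
    roll = (w_ref[2] + w_i[2]) * t_i / 2   # tilt angle variation
    d_ver = r * math.tan(pitch) * h_im / h_sensor
    d_hor = r * math.tan(yaw) * w_im / w_sensor
    return d_ver, d_hor, roll  # roll is the rotation transformation parameter
```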
Taking the calculation of the vertical transformation parameter $d_{ver,i}$ of image i among the remaining frames of images as an example, the manner of calculating the image transformation parameters is described below.
As shown in fig. 4(a) and 4(b), assuming that the angular velocity during the shake of the image acquisition device 410 changes as a uniform acceleration process, the pitch angle variation $\angle POM$ can be calculated by the following formula:

$\angle POM = \dfrac{(\omega^{0}_{pitch} + \omega^{i}_{pitch})\, t_i}{2}$
Combining this with the formula derived above, the vertical transformation parameter $d_{ver,i}$ can be calculated by the following formula:

$d_{ver,i} = \dfrac{r \cdot \tan\!\left(\dfrac{(\omega^{0}_{pitch} + \omega^{i}_{pitch})\, t_i}{2}\right) \cdot H_{im}}{H_{sensor}}$
The horizontal transformation parameter $d_{hor,i}$ of image i denotes the relative offset in the horizontal direction, i.e. in the direction of the horizontal pixel number of the image resolution of image i, when the image acquisition device acquires image i. Its calculation is similar to that of the vertical transformation parameter $d_{ver,i}$, and the relevant points can be found in the description of the calculation of $d_{ver,i}$ above, which is not repeated herein.
For the rotation transformation parameter $\alpha_i$ of image i, still assuming that the angular velocity during the shake of the image acquisition device changes as a uniform acceleration process, the tilt angle variation is:

$\dfrac{(\omega^{0}_{roll} + \omega^{i}_{roll})\, t_i}{2}$

The rotation transformation parameter $\alpha_i$ can then be calculated by the following formula:

$\alpha_i = \dfrac{(\omega^{0}_{roll} + \omega^{i}_{roll})\, t_i}{2}$
As an implementation manner of the embodiment of the present invention, after the image transformation parameters, i.e. the vertical transformation parameter, the horizontal transformation parameter and the rotation transformation parameter, corresponding to the remaining frames of images have been obtained through calculation, the image transformation parameters are used to perform alignment transformation processing on the pixel coordinates of the remaining frames of images, and the formulas used to obtain the pixel coordinates of the target images corresponding to the remaining frames of images may be:
$x_i = w_i \cos\alpha_i - z_i \sin\alpha_i + d_{hor,i}$

$y_i = w_i \sin\alpha_i + z_i \cos\alpha_i + d_{ver,i}$

wherein $(x_i, y_i)$ are the pixel coordinates of the target image corresponding to image i, and $(w_i, z_i)$ are the pixel coordinates of image i.
Through the above formulas, the image acquisition device can calculate, according to the image transformation parameters, the aligned pixel coordinates $(x_i, y_i)$ corresponding to each pixel coordinate $(w_i, z_i)$ in image i, and thereby obtain the target images corresponding to the remaining frames of images.
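A minimal sketch of this alignment transformation as reconstructed above, applied to arrays of pixel coordinates (the rotation-then-translation composition mirrors the formulas and is an assumption):

```python
# A minimal sketch: rotate each pixel coordinate (w, z) of image i by
# alpha_i and shift it by (d_hor, d_ver) to get the aligned (x, y).
import numpy as np

def align_coords(w, z, d_hor, d_ver, alpha):
    """w, z: arrays of pixel coordinates of image i."""
    x = w * np.cos(alpha) - z * np.sin(alpha) + d_hor
    y = w * np.sin(alpha) + z * np.cos(alpha) + d_ver
    return x, y
```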
As an implementation manner of the embodiment of the present invention, in order to avoid color interference between pixel points of different colors, which would affect the image super-resolution reconstruction effect, before the step of performing the amplification processing on the reference frame image and each target image, the method may further include:
and splitting the reference frame image and each target image into images of a plurality of color channels according to the color formats of the reference frame image and each target image.
Since common color formats for images include RGGB, GRBG, GBRG, BGGR and the like, in one embodiment the image acquisition device can split the reference frame image and each target image into images of four color channels.
For example, as shown in fig. 7, if the image 710 is an image in RGGB format, the image 710 can be split into images of four color channels, i.e., image 7101, image 7102, image 7103, and image 7104, according to the color format, i.e., the color arrangement, of the image 710. It can be seen that the image 7101 is composed of R (red) pixels, the images 7102 and 7103 are composed of G (green) pixels, and the image 7104 is composed of B (blue) pixels, so that color interference due to different pixel colors is avoided during subsequent amplification processing and image reconstruction processing, and the image super-resolution reconstruction effect is better.
Correspondingly, the step of performing the amplification processing on the reference frame image and each target image may include:
and respectively amplifying the split images of the plurality of color channels.
It can be understood that, if each frame of image is split into images of four color channels in the image splitting process, the image acquisition device may perform amplification processing on the images of four color channels obtained by splitting each frame of image. The specific manner of the amplification processing is the same as the manner of the amplification processing for the reference frame image which is not split and each target image, and is not described herein again.
In another embodiment, the amplification may be performed directly according to the color formats of the reference frame image and each target image. For example, when the image 710 is amplified, amplifying every other pixel point in the height and width directions can achieve the same effect as splitting the image 710 into four color-channel images and then amplifying them, so that no separate splitting step is required before the amplification processing.
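A minimal numpy sketch of the every-other-pixel splitting described above, for an image in RGGB layout:

```python
# A minimal sketch of splitting an RGGB Bayer image into its four color
# channels by sampling every other pixel in height and width.
import numpy as np

def split_rggb(bayer):
    """bayer: 2-D array in RGGB layout; returns the four channel planes."""
    r = bayer[0::2, 0::2]   # red pixels
    g1 = bayer[0::2, 1::2]  # first green channel
    g2 = bayer[1::2, 0::2]  # second green channel
    b = bayer[1::2, 1::2]   # blue pixels
    return r, g1, g2, b
```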
As an implementation manner of the embodiment of the present invention, as shown in fig. 8, the training method for the target convolutional neural network may include the following steps:
s801, constructing an initial convolutional neural network comprising a plurality of filters;
it can be understood that the image acquisition device first needs to construct an initial convolutional neural network including a plurality of filters, and then trains the initial convolutional neural network to obtain a target convolutional neural network.
In one embodiment, the initial convolutional neural network may be constructed using a caffe tool. For example, the structural parameters of the initial convolutional neural network can be as shown in the following table:
Filter bank name           Layer 1   Layer 2   Layer 3
Number of output layers    64        32        1
Filter size                9×9       5×5       5×5
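The patent constructs the network with the caffe tool; purely for illustration, an equivalent three-layer structure matching the table could be sketched in PyTorch as follows (the activation functions and padding values are assumptions not given in the table):

```python
# A minimal sketch of a three-layer network matching the table above:
# 64 outputs with 9x9 filters, 32 outputs with 5x5 filters, 1 output
# with a 5x5 filter. ReLU activations and size-preserving padding are
# illustrative assumptions.
import torch.nn as nn

initial_network = nn.Sequential(
    nn.Conv2d(1, 64, kernel_size=9, padding=4),   # layer 1
    nn.ReLU(),
    nn.Conv2d(64, 32, kernel_size=5, padding=2),  # layer 2
    nn.ReLU(),
    nn.Conv2d(32, 1, kernel_size=5, padding=2),   # layer 3
)
```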
S802, obtaining a plurality of groups of images, wherein each group of images comprises a plurality of frames of initial images, and selecting one frame from the plurality of frames of initial images as a true value image sample of the group of images;
The multiple frames of initial images are images acquired by an image acquisition device for training the convolutional neural network. It should be noted that they are generally acquired by the same device, and in the same scene, as the multiple frames of images to be processed.
For example, if the multi-frame image is an image acquired by an image acquisition device installed on the unmanned aerial vehicle when the unmanned aerial vehicle flies, the multi-frame initial image may also be an image acquired by the image acquisition device installed on the unmanned aerial vehicle when the unmanned aerial vehicle flies, so that it can be ensured that the target convolutional neural network obtained by training the multi-frame initial image can accurately perform image reconstruction on the reference frame image and each frame of target image, and a super-resolution image with a better effect is obtained.
When one frame is selected from the multiple initial images in each group of images as a true-value image sample of the group of images, one frame of image may be randomly selected from the multiple initial images, or may be selected according to a preset rule, for example, an intermediate frame image or a first frame of image may be selected as the true-value image sample, which is not limited specifically herein.
S803, taking the corresponding true value image sample as a reference, calibrating the multi-frame initial image in each group of images, and performing down-sampling on the calibrated multi-frame initial image to obtain a down-sampled multi-frame initial image;
in order to obtain a super-resolution image with a better effect, the calibration processing may be performed on the multi-frame initial image in each group of images by using the corresponding true-value image sample as a reference, in a manner the same as that described above for the reference frame image and the calibration processing performed on the remaining frames of images, and for relevant points, reference may be made to the description of the calibration processing performed on the remaining frames of images by using the reference frame image as a reference, which is not described herein again. By adopting the processing mode, the trained target convolutional neural network can be more suitable for processing the reference frame image and each frame of target image.
Then, the image acquisition device may perform downsampling on the calibrated multi-frame initial image to obtain a downsampled multi-frame initial image. The downsampling processing is a common image processing method in the art, and a person skilled in the art can operate the downsampling processing according to factors such as a format of an image, and is not specifically limited and described herein.
S804, amplifying the multi-frame initial image subjected to the down-sampling processing to obtain a multi-frame target image sample;
after the multi-frame initial image after the down-sampling processing is obtained, the image acquisition device can amplify the multi-frame initial image after the down-sampling processing to obtain a multi-frame target image sample. It is understood that the frames of target image samples and the above-mentioned true value image samples are image training samples for training the convolutional neural network.
Similarly, in order to make the trained target convolutional neural network more suitable for processing the reference frame image and each frame of target image, the multiple frames of initial images after downsampling processing may be processed by using the above-mentioned amplification processing method for the reference frame image and each frame of target image, which is not described herein again.
In the case of splitting the reference frame image and each target image into images of a plurality of color channels, after obtaining a plurality of frames of initial images after down-sampling processing, each frame of initial image after down-sampling processing may be split into image samples of a plurality of color channels according to the color format of the image training sample, and then the split image samples may be amplified. The specific splitting manner may adopt the above splitting manner for the reference frame image and each frame target image, and is not described herein again.
S805, inputting the image training sample into the initial convolutional neural network for training;
it can be understood that, when training each image training sample, the image acquisition device may continuously adjust parameters of the initial convolutional neural network according to the average difference value between the output result corresponding to each set of image training samples and the corresponding true value image sample, so that the image may be better processed, and a super-resolution image may be obtained.
The average difference value between the output result corresponding to the image training sample and the corresponding true-value image sample is mainly used to measure the difference between the output result corresponding to the input image training sample and the corresponding true-value image sample, and can be generally calculated by means of "sum of squares", "sum of absolute values", and the like of the values of the pixel points of the two, where the "sum of squares" calculation formula is as follows:
$\text{average difference} = \dfrac{1}{N}\sum_{j=1}^{N}\left(x_{1j} - x_{2j}\right)^{2}$

wherein $x_{1j}$ represents the value of the j-th pixel point in the output result corresponding to the image training sample, $x_{2j}$ represents the value of the j-th pixel point in the true-value image sample, and N represents the total number of pixel points in the true-value image sample. It can be understood that the total number of pixel points in the multiple frames of target image samples included in the image training sample, and in the output result corresponding to the image training sample, is the same as the total number of pixel points in the true-value image sample.
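A minimal numpy sketch of this "sum of squares" average difference between a network output and its true-value image sample:

```python
# A minimal sketch of the average difference value above: the mean of
# the squared per-pixel differences between output and true-value sample.
import numpy as np

def average_difference(output, truth):
    """output, truth: arrays with the same total number of pixels N."""
    diff = output.astype(np.float64) - truth.astype(np.float64)
    return np.mean(diff ** 2)
```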
And S806, when the average difference value between the output result corresponding to the multiple frames of target image samples and the corresponding true value image sample is smaller than a preset value, finishing training to obtain the target convolutional neural network.
When the average difference value between the output result corresponding to the multi-frame target image samples included in each image training sample and the corresponding true value image sample is smaller than the preset value, it indicates that the convolutional neural network at this time meets the required reconstruction requirement, i.e. a super-resolution image with a better effect can be obtained, then the training process can be ended, and the convolutional neural network obtained at this time is used as a target convolutional neural network for reconstructing the reference frame image and each target image.
It should be noted that the preset value can be set by a person skilled in the art according to factors such as the type of the image and the requirement of the obtained super-resolution image, and is not limited in detail herein.
Corresponding to the above method embodiment, an image super-resolution reconstruction apparatus is further provided in the embodiments of the present invention, and the following describes an image super-resolution reconstruction apparatus provided in the embodiments of the present invention.
As shown in fig. 9, an image super-resolution reconstruction apparatus, the apparatus comprising:
an image obtaining module 910, configured to obtain multiple frames of images to be processed, and select one frame of image from the multiple frames of images as a reference frame of image;
an image transformation parameter calculating module 920, configured to obtain and obtain image transformation parameters corresponding to the other frames of images according to the device shaking information and the device parameters, where the image transformation parameters are: calibrating a calibration parameter required when the position relation between image pixel points is calibrated by taking the reference frame image as a standard;
an image calibration module 930, configured to perform calibration processing on the remaining frames of images by using the reference frame of image as a reference according to the image transformation parameter, so as to obtain target images corresponding to the remaining frames of images;
and an image reconstruction module 940, configured to perform image reconstruction on the reference frame image and each frame of target image to obtain a super-resolution image corresponding to the multiple frames of images.
As can be seen, in the scheme provided in the embodiment of the present invention, the image acquisition device first obtains multiple frames of images to be processed and selects one frame from them as the reference frame image. It then obtains the image transformation parameters corresponding to the remaining frames of images according to the device shake information and the device parameters, performs calibration processing on the remaining frames of images with the reference frame image as the reference according to the image transformation parameters to obtain the target images corresponding to the remaining frames of images, and finally performs image reconstruction on the reference frame image and each frame of target image to obtain the super-resolution image corresponding to the multiple frames of images, completing the image super-resolution reconstruction. In this scheme, complex image registration processes such as feature point extraction are not needed: the image transformation parameters are obtained only from the device shake information, the device parameters and the acquisition time difference, and the images are then calibrated against the reference frame image according to these parameters. The calculation process is simple, the amount of calculation is small, and the time required for image calibration is reduced.
As an implementation manner of the embodiment of the present invention, the device jitter information may include: pitch angle change, angle of inclination change and rotation angle change, the equipment parameter includes: focal length, image resolution, imaging plane height and imaging plane width; the image transformation parameters include: vertical transformation parameters, horizontal transformation parameters and rotation transformation parameters;
the image transformation parameter calculation module 920 may include:
a first acquiring unit (not shown in fig. 9) for acquiring a pitch angle, a rotation angle, and a tilt angle at the time of acquiring the plurality of frame images;
a first vertical transformation parameter calculating unit (not shown in fig. 9) configured to calculate a pitch angle variation amount when the other frames of images are acquired according to the pitch angle when the reference frame image is acquired and the pitch angle when the other frames of images are acquired, and calculate vertical transformation parameters of the other frames of images according to the pitch angle variation amount, the focal length, the number of vertical pixels of image resolution, and the imaging plane height;
a first horizontal conversion parameter calculation unit (not shown in fig. 9) for calculating a rotation angle variation amount at the time of acquiring the remaining frame images from a rotation angle at the time of acquiring the reference frame image and a rotation angle at the time of acquiring the remaining frame images, and calculating a horizontal conversion parameter of the remaining frame images from the rotation angle variation amount, the focal length, a horizontal pixel number of image resolution, and an imaging plane width;
a first rotation transformation parameter calculating unit (not shown in fig. 9) configured to calculate a variation of the tilt angle when the other frame images are acquired according to the tilt angle when the reference frame image is acquired and the tilt angles when the other frame images are acquired, and calculate a rotation transformation parameter of the other frame images according to the variation of the tilt angle.
As an implementation manner of the embodiment of the present invention, the first vertical conversion parameter calculation unit may include:
a first vertical transformation parameter calculation subunit (not shown in fig. 9), configured to calculate the vertical transformation parameters of the remaining frames of images using the formula:

$d_{ver,i} = \dfrac{r \cdot \tan(\angle A_i - \angle A_0) \cdot H_{im}}{H_{sensor}}$
the first level shift parameter calculation unit may include:
a first horizontal transformation parameter calculation subunit (not shown in fig. 9), configured to calculate the horizontal transformation parameters of the remaining frames of images using the formula:

$d_{hor,i} = \dfrac{r \cdot \tan(\angle B_i - \angle B_0) \cdot W_{im}}{W_{sensor}}$
the first rotation transformation parameter calculation unit may include:
a first rotation transformation parameter calculation subunit (not shown in fig. 9), configured to calculate the rotation transformation parameters of the remaining frames of images using the formula: $\alpha_i = \angle C_i - \angle C_0$;
wherein $d_{ver,i}$ is the vertical transformation parameter of image i among the remaining frames of images, $d_{hor,i}$ is the horizontal transformation parameter of image i, $\alpha_i$ is the rotation transformation parameter of image i, r is the focal length, $\angle A_i - \angle A_0$ is the pitch angle variation when image i is acquired, $\angle A_0$ is the pitch angle when the reference frame image is acquired, $\angle A_i$ is the pitch angle when image i is acquired, $i \in [1, N]$, N is the number of remaining frames, $H_{im}$ is the number of vertical pixels of the image resolution, $W_{im}$ is the number of horizontal pixels of the image resolution, $H_{sensor}$ is the imaging plane height, $W_{sensor}$ is the imaging plane width, $\angle B_i - \angle B_0$ is the rotation angle variation when image i is acquired, $\angle B_0$ is the rotation angle when the reference frame image is acquired, $\angle B_i$ is the rotation angle when image i is acquired, $\angle C_i - \angle C_0$ is the tilt angle variation when image i is acquired, $\angle C_0$ is the tilt angle when the reference frame image is acquired, and $\angle C_i$ is the tilt angle when image i is acquired.
As an implementation manner of the embodiment of the present invention, the device jitter information may include: pitch angle variation, inclination angle variation and rotation angle variation; the device parameters may include: focal length, image resolution, imaging plane height and imaging plane width; the image transformation parameters may include: vertical transformation parameters, horizontal transformation parameters and rotation transformation parameters;
the apparatus may further include: a time difference calculation module (not shown in fig. 9) configured to calculate, before the step of obtaining and obtaining image transformation parameters corresponding to the other frames of images according to the device shaking information and the device parameters, an acquisition time difference between the reference frame image and each of the other frames of images according to the time information of acquiring the multiple frames of images;
the image transformation parameter calculation module 920 may include:
a second acquisition unit (not shown in fig. 9) configured to acquire a pitch angular velocity, a rotation angular velocity, and a tilt angular velocity at the time of acquiring the plurality of frame images;
a second vertical conversion parameter calculation unit (not shown in fig. 9) configured to calculate a pitch angle variation amount when the remaining frames of images are acquired according to the pitch angle angular velocity when the reference frame of images is acquired, the pitch angle angular velocity when the remaining frames of images are acquired, and the acquisition time difference, and calculate vertical conversion parameters of the remaining frames of images according to the pitch angle variation amount, the focal length, the number of vertical pixels of image resolution, the imaging plane height, and the acquisition time difference between the remaining frames of images and the reference frame of images;
a second horizontal conversion parameter calculation unit (not shown in fig. 9) for calculating a rotation angle variation amount at the time of acquiring the remaining frame images from a rotation angle angular velocity at the time of acquiring the reference frame image, a rotation angle angular velocity at the time of acquiring the remaining frame images, and the acquisition time difference, and calculating horizontal conversion parameters of the remaining frame images from the rotation angle variation amount, the focal length, the horizontal pixel number of image resolution, the imaging surface width, and the acquisition time difference between the remaining frame images and the reference frame image;
a second rotation transformation parameter calculating unit (not shown in fig. 9) configured to calculate a variation of the tilt angle when the other frame images are acquired according to the tilt angle angular velocity when the reference frame image is acquired, the tilt angle angular velocity when the other frame images are acquired, and the acquisition time difference, and calculate a rotation transformation parameter of the other frame images according to the variation of the tilt angle.
As an implementation manner of the embodiment of the present invention, the second vertical transformation parameter calculation unit may include:
a second vertical transformation parameter calculation subunit (not shown in fig. 9), configured to calculate the vertical transformation parameters of the remaining frames of images using the formula:

$d_{ver,i} = \dfrac{r \cdot \tan\!\left(\dfrac{(\omega^{0}_{pitch} + \omega^{i}_{pitch})\, t_i}{2}\right) \cdot H_{im}}{H_{sensor}}$
the second horizontal transformation parameter calculation unit may include:

a second horizontal transformation parameter calculation subunit (not shown in fig. 9), configured to calculate the horizontal transformation parameters of the remaining frames of images using the formula:

$d_{hor,i} = \dfrac{r \cdot \tan\!\left(\dfrac{(\omega^{0}_{yaw} + \omega^{i}_{yaw})\, t_i}{2}\right) \cdot W_{im}}{W_{sensor}}$
the second rotation transformation parameter calculation unit may include:

a second rotation transformation parameter calculation subunit (not shown in fig. 9), configured to calculate the rotation transformation parameters of the remaining frames of images using the formula:

$\alpha_i = \dfrac{(\omega^{0}_{roll} + \omega^{i}_{roll})\, t_i}{2}$
wherein $d_{ver,i}$ is the vertical transformation parameter of image i among the remaining frames of images, $d_{hor,i}$ is the horizontal transformation parameter of image i, $\alpha_i$ is the rotation transformation parameter of image i, r is the focal length, $\frac{(\omega^{0}_{pitch} + \omega^{i}_{pitch})\, t_i}{2}$ is the pitch angle variation when image i is acquired, $\omega^{0}_{pitch}$ is the pitch angular velocity when the reference frame image is acquired, $\omega^{i}_{pitch}$ is the pitch angular velocity when image i is acquired, $i \in [1, N]$, N is the number of remaining frames, $t_i$ is the acquisition time difference between image i and the reference frame image, $H_{im}$ is the number of vertical pixels of the image resolution, $W_{im}$ is the number of horizontal pixels of the image resolution, $H_{sensor}$ is the imaging plane height, $W_{sensor}$ is the imaging plane width, $\frac{(\omega^{0}_{yaw} + \omega^{i}_{yaw})\, t_i}{2}$ is the rotation angle variation when image i is acquired, $\omega^{0}_{yaw}$ is the rotation angular velocity when the reference frame image is acquired, $\omega^{i}_{yaw}$ is the rotation angular velocity when image i is acquired, $\frac{(\omega^{0}_{roll} + \omega^{i}_{roll})\, t_i}{2}$ is the tilt angle variation when image i is acquired, $\omega^{0}_{roll}$ is the tilt angular velocity when the reference frame image is acquired, and $\omega^{i}_{roll}$ is the tilt angular velocity when image i is acquired.
As an implementation manner of the embodiment of the present invention, the image calibration module 930 may include:
and an alignment transformation unit (not shown in fig. 9) configured to perform alignment transformation processing on the pixel coordinates of the other frames of images by using the image transformation parameters, so as to obtain pixel coordinates of the target image corresponding to each of the other frames of images.
As an implementation manner of the embodiment of the present invention, the alignment transformation unit includes:
a pixel point coordinate calculation subunit (not shown in fig. 9), configured to calculate the pixel coordinates of the target images corresponding to the remaining frames of images using the formulas:

$x_i = w_i \cos\alpha_i - z_i \sin\alpha_i + d_{hor,i}$

$y_i = w_i \sin\alpha_i + z_i \cos\alpha_i + d_{ver,i}$

wherein $(x_i, y_i)$ are the pixel coordinates of the target image corresponding to image i, and $(w_i, z_i)$ are the pixel coordinates of image i.
As an implementation manner of the embodiment of the present invention, the image reconstructing module 940 may include:
an enlargement processing unit (not shown in fig. 9) for performing enlargement processing on the reference frame image and each target image;
and an image reconstruction unit (not shown in fig. 9) configured to perform image reconstruction on the amplified image through a target convolutional neural network trained by the model building module in advance, so as to obtain a super-resolution image corresponding to the multi-frame image, where the target convolutional neural network is a convolutional neural network trained in advance and used for reconstructing the amplified image.
As an implementation manner of the embodiment of the present invention, the model building module (not shown in fig. 9) may include:
an initial convolutional neural network constructing unit (not shown in fig. 9) for constructing an initial convolutional neural network including a plurality of filters;
an initial image obtaining unit (not shown in fig. 9) configured to obtain a plurality of sets of images, each set of images including a plurality of initial images, and select one frame from the plurality of initial images as a true value image sample of the set of images;
an initial image calibration unit (not shown in fig. 9) configured to perform calibration processing on the multiple frames of initial images in each group of images by using the corresponding true value image samples as references, and perform downsampling processing on the multiple frames of initial images after the calibration processing to obtain multiple frames of initial images after the downsampling processing;
an initial image amplifying unit (not shown in fig. 9), configured to amplify the multiple frames of initial images after the downsampling processing to obtain multiple frames of target image samples, where the multiple frames of target image samples and the true value image samples are image training samples used for training a convolutional neural network;
a sample training unit (not shown in fig. 9) for inputting the image training samples into the initial convolutional neural network for training;
and a target convolutional neural network obtaining unit (not shown in fig. 9), configured to complete training when an average difference value between an output result corresponding to the multiple frames of target image samples and a corresponding true value image sample is smaller than a preset value, so as to obtain the target convolutional neural network.
As an implementation manner of the embodiment of the present invention, the multi-frame image may be an image in a Bayer format.
The embodiment of the invention also provides an image super-resolution reconstruction system, and the image super-resolution reconstruction system provided by the embodiment of the invention is introduced below.
As shown in fig. 10, a system for super-resolution reconstruction of images, the system comprising: an image acquisition device 1010 and an image reconstruction device 1020, wherein,
the image acquisition device 1010 is configured to obtain a multi-frame image to be processed, and send the multi-frame image to be processed to the image reconstruction device;
the image reconstruction device 1020 is configured to receive the to-be-processed multi-frame image sent by the image acquisition device, select one frame of image from the multi-frame image as a reference frame image, and obtain image transformation parameters corresponding to the other frames of images according to the device shaking information and the device parameters; according to the image transformation parameters, taking the reference frame image as a reference, and carrying out calibration processing on the rest of the frame images to obtain target images corresponding to the rest of the frame images respectively; performing image reconstruction on the reference frame image and each frame of target image to obtain a super-resolution image corresponding to the plurality of frames of images, wherein the image transformation parameters are as follows: and calibrating the calibration parameters required by the position relation between the image pixel points by taking the reference frame image as a standard.
It can be seen that in the scheme provided by the embodiment of the present invention, the image acquisition device obtains multiple frames of images to be processed and sends them to the image reconstruction device. The image reconstruction device selects one frame from the multiple frames of images as the reference frame image, obtains the image transformation parameters corresponding to the remaining frames of images according to the device shake information and the device parameters, performs calibration processing on the remaining frames of images with the reference frame image as the reference according to the image transformation parameters to obtain the target images corresponding to the remaining frames of images, and finally performs image reconstruction on the reference frame image and each frame of target image to obtain the super-resolution image corresponding to the multiple frames of images, completing the image super-resolution reconstruction. In this scheme, complex image registration processes such as feature point extraction are not needed: the image transformation parameters are obtained only from the device shake information, the device parameters and the acquisition time difference, and the images are then calibrated against the reference frame image according to these parameters. The calculation process is simple, the amount of calculation is small, and the time required for image calibration is reduced.
Wherein the image reconstruction device exists in a variety of forms, including but not limited to:
(1) Ultra mobile personal computer devices: these belong to the category of personal computers, have computing and processing functions, and generally have mobile internet access. Such terminals include PDA, MID and UMPC devices, such as an iPad.
(2) A server: the device for providing the computing service comprises a processor, a hard disk, a memory, a system bus and the like, and the server is similar to a general computer architecture, but has higher requirements on processing capacity, stability, reliability, safety, expandability, manageability and the like because of the need of providing high-reliability service.
(3) Other electronic devices with data interaction and processing functions.
It should be noted that, for scenarios such as monitoring and detection that need a super-resolution image in real time, the image acquisition device may obtain the multi-frame image to be processed in real time and send it to the image reconstruction device, so that the image reconstruction device can perform super-resolution reconstruction processing and obtain a super-resolution image.
For scenarios that do not need real-time processing, the image acquisition device may store the acquired multi-frame image to be processed locally and send it to the image reconstruction device when needed, so that the image reconstruction device can then perform super-resolution reconstruction processing; alternatively, when super-resolution reconstruction is needed, the image acquisition device may acquire a new multi-frame image to be processed and send it to the image reconstruction device. Either arrangement is reasonable.
In one embodiment, the image acquisition device may obtain the device shaking information and device parameters that apply when each frame of image is captured at the same time as it obtains the multi-frame image. In another embodiment, the image acquisition device may obtain its device shaking information and device parameters in real time, and thereby determine the device shaking information and device parameters for each frame of the multi-frame image. In both embodiments, the image acquisition device may send the acquired device shaking information and device parameters to the image reconstruction device in real time, or it may store them locally and send the stored information after receiving an acquisition instruction from the image reconstruction device; either is reasonable.
As an implementation manner of the embodiment of the present invention, the device jitter information may include: pitch angle variation, tilt angle variation and rotation angle variation; the device parameters may include: focal length, image resolution, imaging plane height and imaging plane width; and the image transformation parameters may include: vertical transformation parameters, horizontal transformation parameters and rotation transformation parameters;
the image reconstruction device may be specifically configured to acquire a pitch angle, a rotation angle, and an inclination angle at the time of acquiring the multi-frame image; calculating the pitch angle variation when the other frames of images are acquired according to the pitch angle when the reference frame image is acquired and the pitch angles when the other frames of images are acquired, and calculating the vertical transformation parameters of the other frames of images according to the pitch angle variation, the focal length, the vertical pixel number of the image resolution and the imaging surface height; calculating a rotation angle variation amount when the other frames of images are acquired according to a rotation angle when the reference frame image is acquired and rotation angles when the other frames of images are acquired, and calculating a horizontal conversion parameter of the other frames of images according to the rotation angle variation amount, the focal length, the horizontal pixel number of the image resolution and the imaging plane width; and calculating the inclination angle variation quantity when the rest of the frame images are collected according to the inclination angle when the reference frame image is collected and the inclination angle when the rest of the frame images are collected, and calculating the rotation transformation parameters of the rest of the frame images according to the inclination angle variation quantity.
As an implementation manner of the embodiment of the present invention, the image reconstruction apparatus may be specifically configured to use the formula:

d_ver^i = r · tan(∠A_i − ∠A_0) · H_im / H_sensor

to calculate the vertical transformation parameters of the other frames of images; use the formula:

d_hor^i = r · tan(∠B_i − ∠B_0) · W_im / W_sensor

to calculate the horizontal transformation parameters of the other frames of images; and use the formula α_i = ∠C_i − ∠C_0 to calculate the rotation transformation parameters of the other frames of images;

wherein d_ver^i is the vertical transformation parameter of image i among the other frames of images, d_hor^i is the horizontal transformation parameter of image i, α_i is the rotation transformation parameter of image i, r is the focal length, ∠A_i − ∠A_0 is the pitch angle variation when image i is acquired, ∠A_0 is the pitch angle when the reference frame image is acquired, ∠A_i is the pitch angle when image i is acquired, i ∈ [1, N], N is the number of the other frames, H_im is the number of vertical pixels of the image resolution, W_im is the number of horizontal pixels of the image resolution, H_sensor is the imaging plane height, W_sensor is the imaging plane width, ∠B_i − ∠B_0 is the rotation angle variation when image i is acquired, ∠B_0 is the rotation angle when the reference frame image is acquired, ∠B_i is the rotation angle when image i is acquired, ∠C_i − ∠C_0 is the tilt angle variation when image i is acquired, ∠C_0 is the tilt angle when the reference frame image is acquired, and ∠C_i is the tilt angle when image i is acquired.
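To make the angle-based calculation concrete, the following is a minimal Python sketch of how the three transformation parameters could be derived under the formulas above; the function and parameter names are illustrative and not part of the patent.

```python
import math

def transform_params_from_angles(angles_ref, angles_i, r,
                                 h_im, w_im, h_sensor, w_sensor):
    """Map the angle changes between the reference frame and frame i to
    pixel-space transformation parameters (a sketch, assuming the
    tan-projection formulas given above).

    angles_* are (pitch, rotation, tilt) in radians; r is the focal length
    in the same physical units as h_sensor / w_sensor.
    """
    d_pitch = angles_i[0] - angles_ref[0]  # ∠A_i − ∠A_0
    d_rot = angles_i[1] - angles_ref[1]    # ∠B_i − ∠B_0
    d_tilt = angles_i[2] - angles_ref[2]   # ∠C_i − ∠C_0

    # r · tan(Δangle) is the displacement on the imaging plane in physical
    # units; multiplying by the pixels-per-unit ratio converts it to pixels.
    d_ver = r * math.tan(d_pitch) * h_im / h_sensor
    d_hor = r * math.tan(d_rot) * w_im / w_sensor
    alpha = d_tilt                         # rotation transformation parameter α_i
    return d_ver, d_hor, alpha
```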
As an implementation manner of the embodiment of the present invention, the device jitter information may include: pitch angle variation, inclination angle variation and rotation angle variation; the device parameters may include: focal length, image resolution, imaging plane height and imaging plane width; the image transformation parameters may include: vertical transformation parameters, horizontal transformation parameters and rotation transformation parameters;
the image reconstruction device may be specifically configured to calculate, according to time information for acquiring the multiple frames of images, an acquisition time difference between the reference frame image and each of the other frames of images; acquiring a pitch angle angular velocity, a rotation angle angular velocity and an inclination angle angular velocity when the multi-frame image is acquired; calculating the pitch angle variable quantity when the other frames of images are acquired according to the pitch angle angular speed when the reference frame image is acquired, the pitch angle angular speed when the other frames of images are acquired and the acquisition time difference, and calculating the vertical transformation parameters of the other frames of images according to the pitch angle variable quantity, the focal length, the number of vertical pixels of image resolution, the imaging plane height and the acquisition time difference between the other frames of images and the reference frame image; calculating a rotation angle variation amount when the remaining frames of images are acquired, based on a rotation angle angular velocity when the reference frame of images are acquired, a rotation angle angular velocity when the remaining frames of images are acquired, and the acquisition time difference, and calculating a horizontal conversion parameter of the remaining frames of images, based on the rotation angle variation amount, the focal length, the number of horizontal pixels of image resolution, the imaging surface width, and the acquisition time difference between the remaining frames of images and the reference frame of images; and calculating the inclination angle variation quantity when the other frames of images are acquired according to the inclination angle angular speed when the reference frame image is acquired, the inclination angle angular speed when the other frames of images are acquired and the acquisition time difference, and calculating the rotation transformation parameters of the other frames of images according to the inclination angle variation quantity.
The time information for acquiring the multiple frames of images may be sent by the image acquisition device to the image reconstruction device, and the image reconstruction device may calculate the acquisition time difference between the reference frame image and each of the other frames of images according to the time information for acquiring the multiple frames of images after receiving the time information.
As an implementation manner of the embodiment of the present invention, the image reconstruction apparatus may be specifically configured to use the formula:

d_ver^i = r · tan((ω0_pitch + ωi_pitch) · t_i / 2) · H_im / H_sensor

to calculate the vertical transformation parameters of the other frames of images; use the formula:

d_hor^i = r · tan((ω0_yaw + ωi_yaw) · t_i / 2) · W_im / W_sensor

to calculate the horizontal transformation parameters of the other frames of images; and use the formula:

α_i = (ω0_roll + ωi_roll) · t_i / 2

to calculate the rotation transformation parameters of the other frames of images;

wherein d_ver^i is the vertical transformation parameter of image i among the other frames of images, d_hor^i is the horizontal transformation parameter of image i, α_i is the rotation transformation parameter of image i, r is the focal length, (ω0_pitch + ωi_pitch) · t_i / 2 is the pitch angle variation when image i is acquired, ω0_pitch is the pitch angular velocity when the reference frame image is acquired, ωi_pitch is the pitch angular velocity when image i is acquired, i ∈ [1, N], N is the number of the other frames, t_i is the acquisition time difference between image i and the reference frame image, H_im is the number of vertical pixels of the image resolution, W_im is the number of horizontal pixels of the image resolution, H_sensor is the imaging plane height, W_sensor is the imaging plane width, (ω0_yaw + ωi_yaw) · t_i / 2 is the rotation angle variation when image i is acquired, ω0_yaw is the rotation angular velocity when the reference frame image is acquired, ωi_yaw is the rotation angular velocity when image i is acquired, (ω0_roll + ωi_roll) · t_i / 2 is the tilt angle variation when image i is acquired, ω0_roll is the tilt angular velocity when the reference frame image is acquired, and ωi_roll is the tilt angular velocity when image i is acquired.
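A minimal sketch of the angular-velocity variant follows. Approximating each angle change as the average of the two sampled angular velocities times the acquisition time difference t_i is an assumption made for illustration; the description only states that both velocities and t_i enter the calculation.

```python
import math

def transform_params_from_rates(rates_ref, rates_i, t_i, r,
                                h_im, w_im, h_sensor, w_sensor):
    """Derive the transformation parameters of frame i from gyroscope
    angular velocities (a sketch; names are illustrative).

    rates_* are (pitch, yaw, roll) angular velocities in rad/s sampled at
    the reference frame and at frame i; t_i is their acquisition time
    difference in seconds.
    """
    # Approximate each angle change by the average angular velocity
    # multiplied by the elapsed time (a trapezoidal assumption).
    d_pitch = 0.5 * (rates_ref[0] + rates_i[0]) * t_i
    d_yaw = 0.5 * (rates_ref[1] + rates_i[1]) * t_i
    d_roll = 0.5 * (rates_ref[2] + rates_i[2]) * t_i

    d_ver = r * math.tan(d_pitch) * h_im / h_sensor
    d_hor = r * math.tan(d_yaw) * w_im / w_sensor
    return d_ver, d_hor, d_roll  # d_roll plays the role of α_i
```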
As an implementation manner of the embodiment of the present invention, the image reconstruction device may be specifically configured to perform alignment transformation processing on the pixel coordinates of the other frames of images by using the image transformation parameter, so as to obtain pixel coordinates of target images corresponding to the other frames of images.
As an implementation manner of the embodiment of the present invention, the image reconstruction apparatus may be specifically configured to use the formula:

x_i = w_i · cos α_i − z_i · sin α_i + d_hor^i
y_i = w_i · sin α_i + z_i · cos α_i + d_ver^i

to calculate the pixel point coordinates of the target images corresponding to the other frames of images;

wherein (x_i, y_i) are the pixel point coordinates of the target image corresponding to image i, and (w_i, z_i) are the pixel point coordinates of image i.
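As a sketch, this alignment transformation can be applied to all pixel coordinates of a frame at once; the rotate-then-translate order below matches the formula above and is an assumption.

```python
import numpy as np

def align_coordinates(coords, alpha, d_hor, d_ver):
    """Rotate the (w, z) pixel coordinates of frame i by α_i and translate
    them by (d_hor^i, d_ver^i) so they line up with the reference frame.

    coords: (N, 2) array of (w_i, z_i) pixel coordinates.
    Returns an (N, 2) array of aligned (x_i, y_i) coordinates.
    """
    c, s = np.cos(alpha), np.sin(alpha)
    rotation = np.array([[c, -s],
                         [s, c]])
    return coords @ rotation.T + np.array([d_hor, d_ver])
```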
As an implementation manner of the embodiment of the present invention, the image reconstruction device may be specifically configured to perform an amplification process on the reference frame image and each target image; and carrying out image reconstruction on the amplified image through a target convolutional neural network obtained through pre-training to obtain a super-resolution image corresponding to the multi-frame image, wherein the target convolutional neural network is a convolutional neural network which is obtained through pre-training and is used for reconstructing the amplified image.
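The reconstruction step could look like the following sketch; the network architecture, the bicubic magnification and all names are illustrative stand-ins, since the patent does not fix the structure of the target convolutional neural network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TargetCNN(nn.Module):
    """Toy stand-in for the pre-trained target convolutional neural
    network: it fuses the magnified reference frame and target frames,
    stacked on the channel axis, into one super-resolution image."""
    def __init__(self, n_frames):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(n_frames, 64, 9, padding=4), nn.ReLU(),
            nn.Conv2d(64, 32, 5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 1, 5, padding=2),
        )

    def forward(self, frames):              # frames: (B, n_frames, H, W)
        return self.body(frames)

def reconstruct(reference, targets, net, scale=2):
    """Magnify every frame first (bicubic here, as one reasonable choice),
    then let the pre-trained network reconstruct the super-resolution image.

    reference: (H, W) tensor; targets: list of (H, W) tensors.
    """
    stack = torch.stack([reference, *targets]).unsqueeze(0)  # (1, n, H, W)
    magnified = F.interpolate(stack, scale_factor=scale,
                              mode="bicubic", align_corners=False)
    return net(magnified)
```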
As an implementation manner of the embodiment of the present invention, the image reconstruction apparatus may be specifically configured to construct an initial convolutional neural network including a plurality of filters; obtaining a plurality of groups of images, wherein each group of images comprises a plurality of initial images, and selecting one frame from the plurality of initial images as a true value image sample of the group of images; taking the corresponding true value image sample as a reference, calibrating the multi-frame initial image in each group of images, and performing down-sampling on the calibrated multi-frame initial image to obtain a down-sampled multi-frame initial image; amplifying the multi-frame initial image subjected to the down-sampling processing to obtain a multi-frame target image sample; inputting the image training sample into the initial convolutional neural network for training; and when the average difference value between the output result corresponding to the multiple frames of target image samples and the corresponding true value image sample is smaller than a preset value, finishing training to obtain the target convolutional neural network, wherein the multiple frames of target image samples and the true value image samples are image training samples for training the convolutional neural network.
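The training procedure described above could be sketched as follows; the L1 criterion and the Adam optimizer are assumptions, since the patent only speaks of an "average difference value" falling below a preset value.

```python
import torch
import torch.nn.functional as F

def train_target_cnn(net, samples, truths, preset_value, lr=1e-4):
    """Train the initial network on (magnified target image sample,
    true-value image sample) pairs until the average difference between
    the outputs and the true-value samples drops below the preset value."""
    optimizer = torch.optim.Adam(net.parameters(), lr=lr)
    average_diff = float("inf")
    while average_diff >= preset_value:
        total = 0.0
        for x, y in zip(samples, truths):  # x: (1, n_frames, H, W)
            output = net(x)
            loss = F.l1_loss(output, y)    # mean absolute difference
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            total += loss.item()
        average_diff = total / len(samples)
    return net
```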
As an implementation manner of the embodiment of the present invention, the multi-frame image may be, but is not limited to, an image in JPG format. That is, after the image acquisition device acquires the multi-frame image to be processed, it may apply conventional encoding to obtain multi-frame images in JPG or a similar format, and then send these images together with the device jitter information and device parameters to the image reconstruction device; the image reconstruction device can then perform image super-resolution reconstruction on them in the manner described above to obtain a super-resolution image.
It should be noted that, since the above system embodiments are basically similar to the method embodiments, the description is relatively simple, and reference may be made to part of the description of the method embodiments for relevant points.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the apparatus embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (30)

1. An image super-resolution reconstruction method is characterized by comprising the following steps:
obtaining a multi-frame image to be processed, and selecting one frame of image from the multi-frame image as a reference frame image;
obtaining image transformation parameters corresponding to the other frames of images according to the device jitter information and the device parameters, wherein the image transformation parameters are: the calibration parameters required when the positional relationship between image pixel points is calibrated with the reference frame image as a standard;
according to the image transformation parameters, taking the reference frame image as a reference, and carrying out calibration processing on the rest of the frame images to obtain target images corresponding to the rest of the frame images respectively;
carrying out image reconstruction on the reference frame image and each frame of target image to obtain a super-resolution image corresponding to the plurality of frames of images;
wherein the device jitter information comprises: a rotation angle variation amount, the apparatus parameter including: focal length, image resolution and imaging surface width; the image transformation parameters include: a horizontal transformation parameter;
the step of obtaining and obtaining image transformation parameters corresponding to the rest frames of images according to the equipment jitter information and the equipment parameters comprises:
calculating the horizontal transformation parameters of the other frames of images according to the rotation angle variation, the focal length, the number of horizontal pixels of the image resolution and the imaging plane width.
2. The method of claim 1, wherein the device jitter information further comprises: pitch angle variation, inclination angle variation, and the equipment parameters further include imaging plane height; the image transformation parameters further include: vertical transformation parameters and rotation transformation parameters;
the step of obtaining and obtaining image transformation parameters corresponding to the rest frames of images according to the equipment jitter information and the equipment parameters further includes:
acquiring a pitch angle, a rotation angle and an inclination angle when the multi-frame image is acquired;
calculating the pitch angle variation when the other frames of images are acquired according to the pitch angle when the reference frame image is acquired and the pitch angles when the other frames of images are acquired, and calculating the vertical transformation parameters of the other frames of images according to the pitch angle variation, the focal length, the vertical pixel number of the image resolution and the imaging surface height;
calculating the variation of the rotation angle when the other frames of images are collected according to the rotation angle when the reference frame image is collected and the rotation angle when the other frames of images are collected;
and calculating the inclination angle variation quantity when the rest of the frame images are acquired according to the inclination angle when the reference frame image is acquired and the inclination angles when the rest of the frame images are acquired, and calculating the rotation transformation parameters of the rest of the frame images according to the inclination angle variation quantity.
3. The method as claimed in claim 2, wherein the step of calculating vertical transformation parameters of the remaining frames of images according to the pitch angle variation, the focal length, the number of vertical pixels of image resolution and the imaging plane height comprises:
using the formula:

d_ver^i = r · tan(∠A_i − ∠A_0) · H_im / H_sensor

calculating vertical transformation parameters of the other frames of images;
the step of calculating the horizontal transformation parameters of the rest frames of images according to the rotation angle variation, the focal length, the horizontal pixel number of the image resolution and the imaging surface width comprises the following steps:
using the formula:

d_hor^i = r · tan(∠B_i − ∠B_0) · W_im / W_sensor

calculating horizontal transformation parameters of the other frames of images;
the step of calculating the rotation transformation parameters of the rest of the frame images according to the inclination angle variation includes:
using the formula: α_i = ∠C_i − ∠C_0, calculating the rotation transformation parameters of the other frames of images;
wherein d_ver^i is the vertical transformation parameter of image i among the other frames of images, d_hor^i is the horizontal transformation parameter of image i, α_i is the rotation transformation parameter of image i, r is the focal length, ∠A_i − ∠A_0 is the pitch angle variation when image i is acquired, ∠A_0 is the pitch angle when the reference frame image is acquired, ∠A_i is the pitch angle when image i is acquired, i ∈ [1, N], N is the number of the other frames, H_im is the number of vertical pixels of the image resolution, W_im is the number of horizontal pixels of the image resolution, H_sensor is the imaging plane height, W_sensor is the imaging plane width, ∠B_i − ∠B_0 is the rotation angle variation when image i is acquired, ∠B_0 is the rotation angle when the reference frame image is acquired, ∠B_i is the rotation angle when image i is acquired, ∠C_i − ∠C_0 is the tilt angle variation when image i is acquired, ∠C_0 is the tilt angle when the reference frame image is acquired, and ∠C_i is the tilt angle when image i is acquired.
4. The method of claim 1, wherein the device jitter information further comprises: pitch angle variation and tilt angle variation; the device parameters further include: imaging plane height; the image transformation parameters further include: vertical transformation parameters and rotation transformation parameters;
before the step of obtaining and obtaining the image transformation parameters corresponding to the other frames of images according to the equipment jitter information and the equipment parameters, the method further comprises:
calculating the acquisition time difference between the reference frame image and each of the other frames of images according to the time information for acquiring the multiple frames of images;
the step of obtaining and obtaining image transformation parameters corresponding to the rest frames of images according to the equipment jitter information and the equipment parameters further includes:
acquiring a pitch angle angular velocity, a rotation angle angular velocity and an inclination angle angular velocity when the multi-frame image is acquired;
calculating the pitch angle variable quantity when the other frames of images are acquired according to the pitch angle angular speed when the reference frame image is acquired, the pitch angle angular speed when the other frames of images are acquired and the acquisition time difference, and calculating the vertical conversion parameters of the other frames of images according to the pitch angle variable quantity, the focal length, the number of vertical pixels of image resolution and the height of an imaging plane;
calculating the variation of the rotation angle when the images of the other frames are acquired according to the rotation angle angular velocity when the images of the reference frame are acquired, the rotation angle angular velocity when the images of the other frames are acquired and the acquisition time difference;
and calculating the inclination angle variation quantity when the other frames of images are acquired according to the inclination angle angular speed when the reference frame image is acquired, the inclination angle angular speed when the other frames of images are acquired and the acquisition time difference, and calculating the rotation transformation parameters of the other frames of images according to the inclination angle variation quantity.
5. The method as claimed in claim 4, wherein the step of calculating the vertical transformation parameters of the remaining frames of images according to the pitch angle variation, the focal length, the number of vertical pixels of image resolution and the imaging plane height comprises:
using the formula:

d_ver^i = r · tan((ω0_pitch + ωi_pitch) · t_i / 2) · H_im / H_sensor

calculating vertical transformation parameters of the other frames of images;
the step of calculating the horizontal transformation parameters of the rest frames of images according to the rotation angle variation, the focal length, the horizontal pixel number of the image resolution and the imaging surface width comprises the following steps:
using the formula:

d_hor^i = r · tan((ω0_yaw + ωi_yaw) · t_i / 2) · W_im / W_sensor

calculating horizontal transformation parameters of the other frames of images;
the step of calculating the rotation transformation parameters of the rest of the frame images according to the inclination angle variation includes:
using the formula:

α_i = (ω0_roll + ωi_roll) · t_i / 2

calculating the rotation transformation parameters of the other frames of images;
wherein d_ver^i is the vertical transformation parameter of image i among the other frames of images, d_hor^i is the horizontal transformation parameter of image i, α_i is the rotation transformation parameter of image i, r is the focal length, (ω0_pitch + ωi_pitch) · t_i / 2 is the pitch angle variation when image i is acquired, ω0_pitch is the pitch angular velocity when the reference frame image is acquired, ωi_pitch is the pitch angular velocity when image i is acquired, i ∈ [1, N], N is the number of the other frames, t_i is the acquisition time difference between image i and the reference frame image, H_im is the number of vertical pixels of the image resolution, W_im is the number of horizontal pixels of the image resolution, H_sensor is the imaging plane height, W_sensor is the imaging plane width, (ω0_yaw + ωi_yaw) · t_i / 2 is the rotation angle variation when image i is acquired, ω0_yaw is the rotation angular velocity when the reference frame image is acquired, ωi_yaw is the rotation angular velocity when image i is acquired, (ω0_roll + ωi_roll) · t_i / 2 is the tilt angle variation when image i is acquired, ω0_roll is the tilt angular velocity when the reference frame image is acquired, and ωi_roll is the tilt angular velocity when image i is acquired.
6. The method according to claim 1, wherein the step of performing calibration processing on the remaining frame images based on the reference frame image according to the image transformation parameters to obtain target images corresponding to the remaining frame images comprises:
and carrying out alignment transformation processing on the pixel point coordinates of the rest of the frames of images by using the image transformation parameters to obtain the pixel point coordinates of the target images corresponding to the rest of the frames of images.
7. The method according to claim 6, wherein the step of performing the alignment transformation processing on the pixel coordinates of the remaining frames of images by using the image transformation parameters to obtain the pixel coordinates of the target image corresponding to each of the remaining frames of images comprises:
using the formula:

x_i = w_i · cos α_i − z_i · sin α_i + d_hor^i
y_i = w_i · sin α_i + z_i · cos α_i + d_ver^i

calculating pixel point coordinates of the target images corresponding to the other frames of images respectively;

wherein (x_i, y_i) are the pixel point coordinates of the target image corresponding to image i, (w_i, z_i) are the pixel point coordinates of image i, α_i is the rotation transformation parameter of image i, calculated according to the tilt angle variation when image i is acquired; d_hor^i is the horizontal transformation parameter of image i; and d_ver^i is the vertical transformation parameter of image i, calculated according to the pitch angle variation when image i is acquired, the focal length, the number of vertical pixels of the image resolution and the imaging plane height.
8. The method of claim 1, wherein the step of performing image reconstruction on the reference frame image and each target image to obtain a super-resolution image corresponding to the multi-frame image comprises:
amplifying the reference frame image and each target image;
and carrying out image reconstruction on the amplified image through a target convolutional neural network obtained through pre-training to obtain a super-resolution image corresponding to the multi-frame image, wherein the target convolutional neural network is a convolutional neural network which is obtained through pre-training and is used for reconstructing the amplified image.
9. The method of claim 8, wherein the training mode of the target convolutional neural network comprises:
constructing an initial convolutional neural network comprising a plurality of filters;
obtaining a plurality of groups of images, wherein each group of images comprises a plurality of initial images, and selecting one frame from the plurality of initial images as a true value image sample of the group of images;
taking the corresponding true value image sample as a reference, calibrating the multi-frame initial image in each group of images, and performing down-sampling on the calibrated multi-frame initial image to obtain a down-sampled multi-frame initial image;
amplifying the multi-frame initial image subjected to the down-sampling processing to obtain a multi-frame target image sample, wherein the multi-frame target image sample and the true value image sample are image training samples for training a convolutional neural network;
inputting the image training sample into the initial convolutional neural network for training;
and when the average difference value between the output result corresponding to the multi-frame target image sample and the corresponding true value image sample is smaller than a preset value, finishing training to obtain the target convolutional neural network.
10. The method of any one of claims 1-9, wherein the multi-frame image is a Bayer-format image.
11. An image super-resolution reconstruction apparatus, characterized in that the apparatus comprises:
the image acquisition module is used for acquiring a plurality of frames of images to be processed and selecting one frame of image from the plurality of frames of images as a reference frame image;
the image transformation parameter calculation module is used for obtaining and obtaining image transformation parameters corresponding to the rest frames of images according to the equipment jitter information and the equipment parameters, wherein the image transformation parameters are as follows: calibrating a calibration parameter required when the position relation between image pixel points is calibrated by taking the reference frame image as a standard;
the image calibration module is used for carrying out calibration processing on the rest of frame images by taking the reference frame image as a reference according to the image transformation parameters to obtain target images corresponding to the rest of frame images respectively;
the image reconstruction module is used for carrying out image reconstruction on the reference frame image and each frame of target image to obtain a super-resolution image corresponding to the multi-frame image;
wherein the device jitter information comprises: a rotation angle variation amount, the apparatus parameter including: focal length, image resolution and imaging surface width; the image transformation parameters include: a horizontal transformation parameter;
the image transformation parameter calculation module comprises a first horizontal transformation parameter calculation unit, and is used for calculating horizontal transformation parameters of the rest frames of images according to the rotation angle variation, the focal length, the horizontal pixel number of the image resolution and the imaging surface width.
12. The apparatus of claim 11, wherein the device jitter information further comprises: pitch angle variation, inclination angle variation, and the equipment parameters further include imaging plane height; the image transformation parameters further include: vertical transformation parameters and rotation transformation parameters;
the image transformation parameter calculation module further comprises:
the first acquisition unit is used for acquiring a pitch angle, a rotation angle and an inclination angle when the multi-frame images are acquired;
the first vertical transformation parameter calculation unit is used for calculating the pitch angle variation quantity when the other frames of images are acquired according to the pitch angle when the reference frame image is acquired and the pitch angle when the other frames of images are acquired, and calculating the vertical transformation parameters of the other frames of images according to the pitch angle variation quantity, the focal length, the number of vertical pixels of image resolution and the height of an imaging plane;
the first horizontal transformation parameter calculation unit is further used for calculating the variation of the rotation angle when the other frames of images are acquired according to the rotation angle when the reference frame image is acquired and the rotation angles when the other frames of images are acquired;
and the first rotation transformation parameter calculation unit is used for calculating the inclination angle variation when the other frames of images are acquired according to the inclination angle when the reference frame image is acquired and the inclination angles when the other frames of images are acquired, and calculating the rotation transformation parameters of the other frames of images according to the inclination angle variation.
13. The apparatus of claim 12, wherein the first vertical transformation parameter calculation unit comprises:
a first vertical transformation parameter calculation subunit, configured to calculate the vertical transformation parameters of the other frames of images using the formula: d_ver^i = r · tan(∠A_i − ∠A_0) · H_im / H_sensor;
the first horizontal conversion parameter calculation unit includes:
a first horizontal transformation parameter calculation subunit, configured to calculate the horizontal transformation parameters of the other frames of images using the formula: d_hor^i = r · tan(∠B_i − ∠B_0) · W_im / W_sensor;
the first rotation transformation parameter calculation unit includes:
a first rotation transformation parameter calculation subunit, configured to calculate the rotation transformation parameters of the other frames of images using the formula: α_i = ∠C_i − ∠C_0;
wherein d_ver^i is the vertical transformation parameter of image i among the other frames of images, d_hor^i is the horizontal transformation parameter of image i, α_i is the rotation transformation parameter of image i, r is the focal length, ∠A_i − ∠A_0 is the pitch angle variation when image i is acquired, ∠A_0 is the pitch angle when the reference frame image is acquired, ∠A_i is the pitch angle when image i is acquired, i ∈ [1, N], N is the number of the other frames, H_im is the number of vertical pixels of the image resolution, W_im is the number of horizontal pixels of the image resolution, H_sensor is the imaging plane height, W_sensor is the imaging plane width, ∠B_i − ∠B_0 is the rotation angle variation when image i is acquired, ∠B_0 is the rotation angle when the reference frame image is acquired, ∠B_i is the rotation angle when image i is acquired, ∠C_i − ∠C_0 is the tilt angle variation when image i is acquired, ∠C_0 is the tilt angle when the reference frame image is acquired, and ∠C_i is the tilt angle when image i is acquired.
14. The apparatus of claim 11, wherein the device jitter information further comprises: pitch angle variation and tilt angle variation; the device parameters further include: imaging plane height; the image transformation parameters further include: vertical transformation parameters and rotation transformation parameters;
the device further comprises: a time difference calculation module, configured to calculate, before the step of obtaining and obtaining image transformation parameters corresponding to the other frames of images according to the device shaking information and the device parameters, an acquisition time difference between the reference frame image and each of the other frames of images according to time information for acquiring the multiple frames of images;
the image transformation parameter calculation module includes:
the second acquisition unit is used for acquiring a pitch angle angular velocity, a rotation angle angular velocity and a tilt angle angular velocity when the multi-frame images are acquired;
a second vertical transformation parameter calculation unit, configured to calculate a pitch angle variation amount when the other frames of images are acquired according to the pitch angle angular velocity when the reference frame image is acquired, the pitch angle angular velocity when the other frames of images are acquired, and the acquisition time difference, and calculate vertical transformation parameters of the other frames of images according to the pitch angle variation amount, the focal length, the number of vertical pixels of image resolution, and the height of an imaging plane;
a second horizontal transformation parameter calculation unit configured to calculate a rotation angle variation amount when the remaining frames of images are acquired, based on a rotation angle angular velocity when the reference frame of images is acquired, a rotation angle angular velocity when the remaining frames of images are acquired, and the acquisition time difference;
and the second rotation transformation parameter calculation unit is used for calculating the inclination angle variation quantity when the other frames of images are collected according to the inclination angle angular velocity when the reference frame image is collected, the inclination angle angular velocity when the other frames of images are collected and the collection time difference, and calculating the rotation transformation parameters of the other frames of images according to the inclination angle variation quantity.
15. The apparatus of claim 14, wherein the second vertical transformation parameter calculation unit comprises:
a second vertical transformation parameter calculation subunit, configured to calculate the vertical transformation parameters of the other frames of images using the formula: d_ver^i = r · tan((ω0_pitch + ωi_pitch) · t_i / 2) · H_im / H_sensor;
the second horizontal conversion parameter calculation unit includes:
a second horizontal transformation parameter calculation subunit, configured to calculate the horizontal transformation parameters of the other frames of images using the formula: d_hor^i = r · tan((ω0_yaw + ωi_yaw) · t_i / 2) · W_im / W_sensor;
the second rotation transformation parameter calculation unit includes:
a second rotation transformation parameter calculation subunit, configured to calculate the rotation transformation parameters of the other frames of images using the formula: α_i = (ω0_roll + ωi_roll) · t_i / 2;
wherein d_ver^i is the vertical transformation parameter of image i among the other frames of images, d_hor^i is the horizontal transformation parameter of image i, α_i is the rotation transformation parameter of image i, r is the focal length, (ω0_pitch + ωi_pitch) · t_i / 2 is the pitch angle variation when image i is acquired, ω0_pitch is the pitch angular velocity when the reference frame image is acquired, ωi_pitch is the pitch angular velocity when image i is acquired, i ∈ [1, N], N is the number of the other frames, t_i is the acquisition time difference between image i and the reference frame image, H_im is the number of vertical pixels of the image resolution, W_im is the number of horizontal pixels of the image resolution, H_sensor is the imaging plane height, W_sensor is the imaging plane width, (ω0_yaw + ωi_yaw) · t_i / 2 is the rotation angle variation when image i is acquired, ω0_yaw is the rotation angular velocity when the reference frame image is acquired, ωi_yaw is the rotation angular velocity when image i is acquired, (ω0_roll + ωi_roll) · t_i / 2 is the tilt angle variation when image i is acquired, ω0_roll is the tilt angular velocity when the reference frame image is acquired, and ωi_roll is the tilt angular velocity when image i is acquired.
16. The apparatus of claim 11, wherein the image calibration module comprises:
and the alignment transformation unit is used for carrying out alignment transformation processing on the pixel point coordinates of the rest of the frames of images by using the image transformation parameters to obtain the pixel point coordinates of the target images corresponding to the rest of the frames of images.
17. The apparatus of claim 16, wherein the alignment transform unit comprises:
a pixel point coordinate calculation subunit, configured to calculate the pixel point coordinates of the target images corresponding to the other frames of images using the formula:

x_i = w_i · cos α_i − z_i · sin α_i + d_hor^i
y_i = w_i · sin α_i + z_i · cos α_i + d_ver^i

wherein (x_i, y_i) are the pixel point coordinates of the target image corresponding to image i, (w_i, z_i) are the pixel point coordinates of image i, α_i is the rotation transformation parameter of image i, calculated according to the tilt angle variation when image i is acquired; d_hor^i is the horizontal transformation parameter of image i; and d_ver^i is the vertical transformation parameter of image i, calculated according to the pitch angle variation when image i is acquired, the focal length, the number of vertical pixels of the image resolution and the imaging plane height.
18. The apparatus of claim 11, wherein the image reconstruction module comprises:
the amplifying processing unit is used for amplifying the reference frame image and each target image;
and the image reconstruction unit is used for carrying out image reconstruction on the amplified image through a target convolutional neural network obtained by training in advance through the model construction module to obtain a super-resolution image corresponding to the multi-frame image, wherein the target convolutional neural network is a convolutional neural network which is trained in advance and used for reconstructing the amplified image.
19. The apparatus of claim 18, wherein the model building module comprises:
an initial convolutional neural network construction unit for constructing an initial convolutional neural network including a plurality of filters;
the device comprises an initial image acquisition unit, a real-time image acquisition unit and a real-time image acquisition unit, wherein the initial image acquisition unit is used for acquiring a plurality of groups of images, each group of images comprises a plurality of frames of initial images, and one frame is selected from the plurality of frames of initial images to be used as a true value image sample of the group of images;
the initial image calibration unit is used for calibrating the multi-frame initial images in each group of images by taking the corresponding true value image samples as a reference, and performing down-sampling on the calibrated multi-frame initial images to obtain the down-sampled multi-frame initial images;
the initial image amplification unit is used for amplifying the multi-frame initial image subjected to the downsampling processing to obtain a multi-frame target image sample, wherein the multi-frame target image sample and the true value image sample are image training samples used for training a convolutional neural network;
the sample training unit is used for inputting the image training sample into the initial convolutional neural network for training;
and the target convolutional neural network obtaining unit is used for finishing training when the average difference value between the output result corresponding to the multiple frames of target image samples and the corresponding true value image sample is smaller than a preset value, so as to obtain the target convolutional neural network.
20. The apparatus according to any one of claims 11-19, wherein the multi-frame image is a Bayer pattern image.
21. An image super-resolution reconstruction system, characterized in that the system comprises: an image acquisition apparatus and an image reconstruction apparatus, wherein,
the image acquisition equipment is used for acquiring a multi-frame image to be processed and sending the multi-frame image to be processed to the image reconstruction equipment;
the image reconstruction device is used for receiving the multi-frame image to be processed sent by the image acquisition device and selecting one frame of image from the multi-frame image as a reference frame image; acquiring and obtaining image transformation parameters corresponding to the rest frames of images according to the equipment jitter information and the equipment parameters; according to the image transformation parameters, taking the reference frame image as a reference, and carrying out calibration processing on the rest of the frame images to obtain target images corresponding to the rest of the frame images respectively; performing image reconstruction on the reference frame image and each frame of target image to obtain a super-resolution image corresponding to the plurality of frames of images, wherein the image transformation parameters are as follows: calibrating a calibration parameter required when the position relation between image pixel points is calibrated by taking the reference frame image as a standard;
wherein the device jitter information comprises: a rotation angle variation amount, the apparatus parameter including: focal length, image resolution and imaging surface width; the image transformation parameters include: a horizontal transformation parameter;
and the image reconstruction equipment is specifically used for calculating horizontal conversion parameters of the rest frames of images according to the rotation angle variable quantity, the focal length, the horizontal pixel number of the image resolution and the imaging surface width.
22. The system of claim 21, wherein the device jitter information further comprises: pitch angle variation and tilt angle variation; the device parameters further include: imaging plane height; the image transformation parameters further include: vertical transformation parameters and rotation transformation parameters;
the image reconstruction device is specifically used for acquiring a pitch angle, a rotation angle and an inclination angle when the multi-frame image is acquired; calculating the pitch angle variation when the other frames of images are acquired according to the pitch angle when the reference frame image is acquired and the pitch angles when the other frames of images are acquired, and calculating the vertical transformation parameters of the other frames of images according to the pitch angle variation, the focal length, the vertical pixel number of the image resolution and the imaging surface height; calculating the variation of the rotation angle when the other frames of images are collected according to the rotation angle when the reference frame image is collected and the rotation angle when the other frames of images are collected; and calculating the inclination angle variation quantity when the rest of the frame images are acquired according to the inclination angle when the reference frame image is acquired and the inclination angles when the rest of the frame images are acquired, and calculating the rotation transformation parameters of the rest of the frame images according to the inclination angle variation quantity.
23. The system of claim 22,
the image reconstruction device is specifically configured to use a formula:
d_ver^i = r · tan(∠A_i − ∠A_0) · H_im / H_sensor

calculating vertical transformation parameters of the other frames of images; using the formula:

d_hor^i = r · tan(∠B_i − ∠B_0) · W_im / W_sensor

calculating horizontal transformation parameters of the other frames of images; and using the formula: α_i = ∠C_i − ∠C_0, calculating the rotation transformation parameters of the other frames of images;

wherein d_ver^i is the vertical transformation parameter of image i among the other frames of images, d_hor^i is the horizontal transformation parameter of image i, α_i is the rotation transformation parameter of image i, r is the focal length, ∠A_i − ∠A_0 is the pitch angle variation when image i is acquired, ∠A_0 is the pitch angle when the reference frame image is acquired, ∠A_i is the pitch angle when image i is acquired, i ∈ [1, N], N is the number of the other frames, H_im is the number of vertical pixels of the image resolution, W_im is the number of horizontal pixels of the image resolution, H_sensor is the imaging plane height, W_sensor is the imaging plane width, ∠B_i − ∠B_0 is the rotation angle variation when image i is acquired, ∠B_0 is the rotation angle when the reference frame image is acquired, ∠B_i is the rotation angle when image i is acquired, ∠C_i − ∠C_0 is the tilt angle variation when image i is acquired, ∠C_0 is the tilt angle when the reference frame image is acquired, and ∠C_i is the tilt angle when image i is acquired.
24. The system of claim 21, wherein the device jitter information further comprises: pitch angle variation and tilt angle variation; the device parameters further include: imaging plane height; the image transformation parameters further include: vertical transformation parameters and rotation transformation parameters;
the image reconstruction device is specifically configured to calculate an acquisition time difference between the reference frame image and each of the other frames of images according to the time information for acquiring the multiple frames of images; acquiring a pitch angle angular velocity, a rotation angle angular velocity and an inclination angle angular velocity when the multi-frame image is acquired; calculating the pitch angle variable quantity when the other frames of images are acquired according to the pitch angle angular speed when the reference frame image is acquired, the pitch angle angular speed when the other frames of images are acquired and the acquisition time difference, and calculating the vertical conversion parameters of the other frames of images according to the pitch angle variable quantity, the focal length, the number of vertical pixels of image resolution and the height of an imaging plane; calculating the variation of the rotation angle when the images of the other frames are acquired according to the rotation angle angular velocity when the images of the reference frame are acquired, the rotation angle angular velocity when the images of the other frames are acquired and the acquisition time difference; and calculating the inclination angle variation quantity when the other frames of images are acquired according to the inclination angle angular speed when the reference frame image is acquired, the inclination angle angular speed when the other frames of images are acquired and the acquisition time difference, and calculating the rotation transformation parameters of the other frames of images according to the inclination angle variation quantity.
25. The system of claim 24,
the image reconstruction device is specifically configured to use a formula:
d_ver^i = r · tan((ω0_pitch + ωi_pitch) · t_i / 2) · H_im / H_sensor

calculating vertical transformation parameters of the other frames of images; using the formula:

d_hor^i = r · tan((ω0_yaw + ωi_yaw) · t_i / 2) · W_im / W_sensor

calculating horizontal transformation parameters of the other frames of images; using the formula:

α_i = (ω0_roll + ωi_roll) · t_i / 2

calculating the rotation transformation parameters of the other frames of images;

wherein d_ver^i is the vertical transformation parameter of image i among the other frames of images, d_hor^i is the horizontal transformation parameter of image i, α_i is the rotation transformation parameter of image i, r is the focal length, (ω0_pitch + ωi_pitch) · t_i / 2 is the pitch angle variation when image i is acquired, ω0_pitch is the pitch angular velocity when the reference frame image is acquired, ωi_pitch is the pitch angular velocity when image i is acquired, i ∈ [1, N], N is the number of the other frames, t_i is the acquisition time difference between image i and the reference frame image, H_im is the number of vertical pixels of the image resolution, W_im is the number of horizontal pixels of the image resolution, H_sensor is the imaging plane height, W_sensor is the imaging plane width, (ω0_yaw + ωi_yaw) · t_i / 2 is the rotation angle variation when image i is acquired, ω0_yaw is the rotation angular velocity when the reference frame image is acquired, ωi_yaw is the rotation angular velocity when image i is acquired, (ω0_roll + ωi_roll) · t_i / 2 is the tilt angle variation when image i is acquired, ω0_roll is the tilt angular velocity when the reference frame image is acquired, and ωi_roll is the tilt angular velocity when image i is acquired.
26. The system of claim 21,
the image reconstruction device is specifically configured to perform alignment transformation processing on the pixel coordinates of the other frames of images by using the image transformation parameters, so as to obtain pixel coordinates of the target image corresponding to each of the other frames of images.
27. The system of claim 26,
the image reconstruction device is specifically configured to use a formula:
x_i = w_i · cos α_i − z_i · sin α_i + d_hor^i
y_i = w_i · sin α_i + z_i · cos α_i + d_ver^i

calculating pixel point coordinates of the target images corresponding to the other frames of images respectively;

wherein (x_i, y_i) are the pixel point coordinates of the target image corresponding to image i, (w_i, z_i) are the pixel point coordinates of image i, α_i is the rotation transformation parameter of image i, calculated according to the tilt angle variation when image i is acquired; d_hor^i is the horizontal transformation parameter of image i; and d_ver^i is the vertical transformation parameter of image i, calculated according to the pitch angle variation when image i is acquired, the focal length, the number of vertical pixels of the image resolution and the imaging plane height.
28. The system of claim 21,
the image reconstruction device is specifically configured to perform amplification processing on the reference frame image and each target image; and carrying out image reconstruction on the amplified image through a target convolutional neural network obtained through pre-training to obtain a super-resolution image corresponding to the multi-frame image, wherein the target convolutional neural network is a convolutional neural network which is obtained through pre-training and is used for reconstructing the amplified image.
29. The system of claim 28,
the image reconstruction device is specifically used for constructing an initial convolutional neural network comprising a plurality of filters; obtaining a plurality of groups of images, wherein each group of images comprises a plurality of initial images, and selecting one frame from the plurality of initial images as a true value image sample of the group of images; taking the corresponding true value image sample as a reference, calibrating the multi-frame initial image in each group of images, and performing down-sampling on the calibrated multi-frame initial image to obtain a down-sampled multi-frame initial image; amplifying the multi-frame initial image subjected to the down-sampling processing to obtain a multi-frame target image sample; inputting an image training sample into the initial convolutional neural network for training; and when the average difference value between the output result corresponding to the multiple frames of target image samples and the corresponding true value image sample is smaller than a preset value, finishing training to obtain the target convolutional neural network, wherein the multiple frames of target image samples and the true value image samples are image training samples for training the convolutional neural network.
30. The system of any one of claims 21-29, wherein the plurality of frame images are images in JPG format.
CN201710210828.3A 2017-03-31 2017-03-31 Image super-resolution reconstruction method, device and system Active CN108665410B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710210828.3A CN108665410B (en) 2017-03-31 2017-03-31 Image super-resolution reconstruction method, device and system

Publications (2)

Publication Number Publication Date
CN108665410A (en) 2018-10-16
CN108665410B (en) 2021-11-26 (granted)

Family

ID=63784554

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710210828.3A Active CN108665410B (en) 2017-03-31 2017-03-31 Image super-resolution reconstruction method, device and system

Country Status (1)

Country Link
CN (1) CN108665410B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112037129B (en) * 2020-08-26 2024-04-19 广州视源电子科技股份有限公司 Image super-resolution reconstruction method, device, equipment and storage medium
CN114708144B (en) * 2022-03-16 2023-05-26 荣耀终端有限公司 Image data processing method and device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101980289A (en) * 2010-10-25 2011-02-23 上海大学 Frequency domain registration and convex set projection-based multi-frame image super-resolution reconstruction method
CN103136734A (en) * 2013-02-27 2013-06-05 北京工业大学 Restraining method on edge Halo effects during process of resetting projections onto convex sets (POCS) super-resolution image
CN104159119A (en) * 2014-07-07 2014-11-19 大连民族学院 Super-resolution reconstruction method and system for video images during real-time sharing playing
CN105306787A (en) * 2015-10-26 2016-02-03 努比亚技术有限公司 Image processing method and device
EP3013053A3 (en) * 2013-01-30 2016-07-20 Intel Corporation Content adaptive fusion filtering of prediction signals for next generation video coding
CN106204440A (en) * 2016-06-29 2016-12-07 北京互信互通信息技术有限公司 A kind of multiframe super resolution image reconstruction method and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9824486B2 (en) * 2013-12-16 2017-11-21 Futurewei Technologies, Inc. High resolution free-view interpolation of planar structure

Similar Documents

Publication Publication Date Title
US20220343598A1 (en) System and methods for improved aerial mapping with aerial vehicles
CN110874817B (en) Image stitching method and device, vehicle-mounted image processing device, equipment and medium
CN110211043B (en) Registration method based on grid optimization for panoramic image stitching
US8818101B1 (en) Apparatus and method for feature matching in distorted images
CN102148965B (en) Video monitoring system for multi-target tracking close-up shooting
US20220222776A1 (en) Multi-Stage Multi-Reference Bootstrapping for Video Super-Resolution
CN109474780B (en) Method and device for image processing
CN111355884B (en) Monitoring method, device, system, electronic equipment and storage medium
US9838572B2 (en) Method and device for determining movement between successive video images
CN111598777A (en) Sky cloud image processing method, computer device and readable storage medium
KR101324250B1 (en) optical axis error compensation method using image processing, the method of the same, and the zoom camera provided for the compensation function of the optical axis error
CN108665410B (en) Image super-resolution reconstruction method, device and system
CN111091088A (en) Video satellite information supported marine target real-time detection positioning system and method
CN106846250B (en) Super-resolution reconstruction method based on multi-scale filtering
CN110532853B (en) Remote sensing time-exceeding phase data classification method and device
CN114742866A (en) Image registration method and device, storage medium and electronic equipment
CN108961182B (en) Vertical direction vanishing point detection method and video correction method for video image
CN110599424A (en) Method and device for automatic image color-homogenizing processing, electronic equipment and storage medium
CN111461008B (en) Unmanned aerial vehicle aerial photographing target detection method combined with scene perspective information
WO2019003796A1 (en) Image compositing method, image compositing device, and recording medium
Wang et al. Design of high-resolution space imaging system on sandroid CubeSat using camera array
CN115239815B (en) Camera calibration method and device
JP7444585B2 (en) Recognition device, recognition method
TW201215123A (en) Method of determining shift between two images
US20240132211A1 (en) System and methods for improved aerial mapping with aerial vehicles

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant