CN116347248A - Image acquisition method and apparatus, electronic device, medium, and chip

Publication number: CN116347248A (granted publication: CN116347248B)
Application number: CN202211444613.5A (also the priority application)
Authority: CN (China)
Legal status: Granted; active
Prior art keywords: image, exposure, initial, sample, initial image
Inventor: 张恒
Applicant and current assignee: Shanghai Xuanjie Technology Co., Ltd.
Other languages: Chinese (zh)

Abstract

The disclosure relates to an image acquisition method and apparatus, an electronic device, a medium, and a chip. The method comprises the following steps: acquiring an input image, wherein the input image comprises a first initial image, a second initial image, and a third initial image; inputting the input image into a preset image alignment model, which aligns the first initial image and the second initial image, respectively, based on the third initial image to obtain a first aligned image and a second aligned image; and fusing the first aligned image, the second aligned image, and the third initial image to obtain a target image. Because the three initial images are aligned before fusion, a high-quality target image without alignment defects can be obtained, which improves the user experience.

Description

Image acquisition method and apparatus, electronic device, medium, and chip
Technical Field
The disclosure relates to the technical field of image processing, and in particular to an image acquisition method and apparatus, an electronic device, a medium, and a chip.
Background
As electronic devices have become a necessity, their camera functions are widely used. Existing cameras provide an HDR (High Dynamic Range) function, which obtains one HDR image by fusing multiple low-dynamic-range images captured at different exposures. However, when some regions of the input images contain motion, the HDR image may exhibit obvious alignment defects, which degrades the user experience.
Disclosure of Invention
The present disclosure provides an image acquisition method and apparatus, an electronic device, a medium, and a chip to solve the deficiencies of the related art.
According to a first aspect of an embodiment of the present disclosure, there is provided an image acquisition method including:
acquiring an input image, wherein the input image comprises a first initial image, a second initial image and a third initial image;
inputting the input image into a preset image alignment model, and respectively aligning the first initial image and the second initial image based on the third initial image by the image alignment model to obtain a first aligned image and a second aligned image;
and fusing the first alignment image, the second alignment image and the third initial image to obtain a target image.
Optionally, the image alignment model includes a noise extraction module, a re-exposure module, an optical flow estimation module, and an alignment module;
the noise extraction module is used for respectively extracting the noise of the first initial image and the noise of the second initial image to obtain a first noise image and a second noise image;
the re-exposure module is used for respectively carrying out re-exposure processing on the first initial image and the second initial image to obtain a first re-exposure image and a second re-exposure image;
The optical flow estimation module is used for carrying out optical flow estimation on the first re-exposure image and the third initial image to obtain first optical flow data; performing optical flow estimation on the second re-exposure image and the third initial image to obtain second optical flow data;
the alignment module is used for aligning the first optical flow data with the first initial image and fusing a first noise image to obtain a first aligned image; and aligning the second optical flow data with the second initial image and fusing the second noise image to obtain a second aligned image.
Optionally, the method further comprises a step of training the image alignment model, specifically comprising:
acquiring a preset number of training image samples; each group of training image samples comprises a first initial image sample, a second initial image sample, a third initial image sample, a first re-exposure image sample corresponding to the first initial image sample, a second re-exposure image sample corresponding to the second initial image sample and a target image sample; the exposure values of the first initial image sample, the second initial image sample and the third initial image sample are sequentially increased;
Training the re-exposure module with the first re-exposure image sample and the first initial image sample and the second re-exposure image sample and the second initial image sample;
transplanting the model parameters of the re-exposure module to a re-exposure module in the image alignment model, and fixing the model parameters;
the image alignment model is trained using the first initial image sample, the second initial image sample, the third initial image sample, and the target image sample.
Optionally, the image alignment model includes a noise extraction module, a re-exposure module, an optical flow estimation module, and an alignment module;
the noise extraction module is used for respectively extracting the noise of the first initial image and the noise of the second initial image to obtain a first noise image and a first filtered image, as well as a second noise image and a second filtered image;
the re-exposure module is used for respectively carrying out re-exposure processing on the first filtered image and the second filtered image to obtain a first re-exposure image and a second re-exposure image;
the optical flow estimation module is used for carrying out optical flow estimation on the first re-exposure image and the third initial image to obtain first optical flow data; performing optical flow estimation on the second re-exposure image and the third initial image to obtain second optical flow data;
the alignment module is used for aligning the first optical flow data with the first filtered image to obtain a first aligned image; and aligning the second optical flow data with the second filtered image to obtain a second aligned image.
Optionally, the method further comprises a step of training the image alignment model, specifically comprising:
acquiring a preset number of training image samples; each group of training image samples comprises a first initial image sample, a second initial image sample, a third initial image sample, a first filtering image sample corresponding to the first initial image sample, a second filtering image sample corresponding to the second initial image sample, a first re-exposure image sample corresponding to the first initial image sample, a second re-exposure image sample corresponding to the second initial image sample and a target image sample; the exposure values of the first initial image sample, the second initial image sample and the third initial image sample are sequentially increased;
training the re-exposure module with the first filtered image sample and the first re-exposure image sample and the second filtered image sample and the second re-exposure image sample;
transplanting the model parameters of the re-exposure module to a re-exposure module in the image alignment model, and fixing the model parameters;
The image alignment model is trained using the first initial image sample, the second initial image sample, the third initial image sample, and the target image sample.
Optionally, the noise extraction module includes a first neural network, where the first neural network is implemented with a UNET structure;
the re-exposure module comprises a second neural network, and the second neural network is realized by adopting a residual UNET structure;
the optical flow estimation module comprises a third neural network, and the third neural network is realized by adopting a residual UNET structure;
or alternatively,
the alignment module comprises a fourth neural network, and the fourth neural network is realized by adopting a self-coding structure.
According to a second aspect of embodiments of the present disclosure, there is provided an image acquisition apparatus including:
an input image acquisition unit configured to acquire an input image including a first initial image, a second initial image, and a third initial image;
an alignment image acquisition unit, configured to input the input image into a preset image alignment model, and align the first initial image and the second initial image based on the third initial image by the image alignment model, so as to obtain a first alignment image and a second alignment image;
and a target image acquisition unit, configured to fuse the first aligned image, the second aligned image, and the third initial image to obtain a target image.
Optionally, the image alignment model includes a noise extraction module, a re-exposure module, an optical flow estimation module, and an alignment module;
the noise extraction module is used for respectively extracting the noise of the first initial image and the noise of the second initial image to obtain a first noise image and a second noise image;
the re-exposure module is used for respectively carrying out re-exposure processing on the first initial image and the second initial image to obtain a first re-exposure image and a second re-exposure image;
the optical flow estimation module is used for carrying out optical flow estimation on the first re-exposure image and the third initial image to obtain first optical flow data; performing optical flow estimation on the second re-exposure image and the third initial image to obtain second optical flow data;
the alignment module is used for aligning the first optical flow data with the first initial image and fusing a first noise image to obtain a first aligned image; and aligning the second optical flow data with the second initial image and fusing the second noise image to obtain a second aligned image.
Optionally, the apparatus further comprises a model training unit for training the image alignment model; the model training unit includes:
the image sample acquisition subunit is used for acquiring a preset number of training image samples; each group of training image samples comprises a first initial image sample, a second initial image sample, a third initial image sample, a first re-exposure image sample corresponding to the first initial image sample, a second re-exposure image sample corresponding to the second initial image sample and a target image sample; the exposure values of the first initial image sample, the second initial image sample and the third initial image sample are sequentially increased;
a re-exposure module training subunit for training the re-exposure module with the first re-exposure image sample and the first initial image sample and the second re-exposure image sample and the second initial image sample;
a model parameter transplanting subunit, configured to transplant model parameters of the re-exposure module to a re-exposure module in the image alignment model, and fix the model parameters;
an alignment model training subunit for training the image alignment model using the first initial image sample, the second initial image sample, the third initial image sample, and the target image sample.
Optionally, the image alignment model includes a noise extraction module, a re-exposure module, an optical flow estimation module, and an alignment module;
the noise extraction module is used for respectively extracting the noise of the first initial image and the noise of the second initial image to obtain a first noise image and a first filtered image, as well as a second noise image and a second filtered image;
the re-exposure module is used for respectively carrying out re-exposure processing on the first filtered image and the second filtered image to obtain a first re-exposure image and a second re-exposure image;
the optical flow estimation module is used for carrying out optical flow estimation on the first re-exposure image and the third initial image to obtain first optical flow data; performing optical flow estimation on the second re-exposure image and the third initial image to obtain second optical flow data;
the alignment module is used for aligning the first optical flow data with the first filtered image to obtain a first aligned image; and aligning the second optical flow data with the second filtered image to obtain a second aligned image.
Optionally, the apparatus further comprises a model training unit for training the image alignment model; the model training unit includes:
The image sample acquisition subunit is used for acquiring a preset number of training image samples; each group of training image samples comprises a first initial image sample, a second initial image sample, a third initial image sample, a first filtering image sample corresponding to the first initial image sample, a second filtering image sample corresponding to the second initial image sample, a first re-exposure image sample corresponding to the first initial image sample, a second re-exposure image sample corresponding to the second initial image sample and a target image sample; the exposure values of the first initial image sample, the second initial image sample and the third initial image sample are sequentially increased;
a re-exposure module training subunit for training the re-exposure module with the first filtered image sample and the first re-exposure image sample and the second filtered image sample and the second re-exposure image sample;
a model parameter transplanting subunit, configured to transplant model parameters of the re-exposure module to a re-exposure module in the image alignment model, and fix the model parameters;
an alignment model training subunit for training the image alignment model using the first initial image sample, the second initial image sample, the third initial image sample, and the target image sample.
Optionally, the noise extraction module includes a first neural network, where the first neural network is implemented with a UNET structure;
the re-exposure module comprises a second neural network, and the second neural network is realized by adopting a residual UNET structure;
the optical flow estimation module comprises a third neural network, and the third neural network is realized by adopting a residual UNET structure;
or alternatively,
the alignment module comprises a fourth neural network, and the fourth neural network is realized by adopting a self-coding structure.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device, comprising: a memory and a processor;
the memory is used for storing a computer program executable by the processor;
the processor is configured to execute the computer program in the memory to implement the method as described above.
According to a fourth aspect of embodiments of the present disclosure, there is provided a chip for performing the image acquisition method described above.
According to a fifth aspect of embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium storing a computer program which, when executed by a processor, implements the image acquisition method described above.
The technical scheme provided by the embodiment of the disclosure can comprise the following beneficial effects:
as can be seen from the foregoing embodiments, the solution provided by the embodiments of the present disclosure may obtain an input image, where the input image includes a first initial image, a second initial image, and a third initial image; then, inputting the input image into a preset image alignment model, and respectively aligning the first initial image and the second initial image based on the third initial image by the image alignment model to obtain a first aligned image and a second aligned image; and then fusing the first alignment image, the second alignment image and the third initial image to obtain a target image. In this way, in the embodiment, the three initial images are aligned first and then fused, so that a high-quality target image without alignment defects can be obtained, and the use experience is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart illustrating an image acquisition method according to an exemplary embodiment.
FIG. 2 is a block diagram of an image alignment model, according to an example embodiment.
FIG. 3 is a flowchart illustrating a training image alignment model according to an exemplary embodiment.
FIG. 4 is a block diagram of another image alignment model, shown according to an exemplary embodiment.
FIG. 5 is a flowchart illustrating another training image alignment model according to an exemplary embodiment.
Fig. 6 is a block diagram of an image acquisition apparatus according to an exemplary embodiment.
Fig. 7 is a block diagram of an electronic device, according to an example embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The embodiments described by way of example below are not representative of all embodiments consistent with the present disclosure. Rather, they are merely examples of apparatus consistent with some aspects of the disclosure as detailed in the accompanying claims. The features of the following examples and embodiments may be combined with each other without any conflict.
As electronic devices have become a necessity, their camera functions are widely used. Existing cameras provide an HDR (High Dynamic Range) function, which obtains one HDR image by fusing multiple low-dynamic-range images captured at different exposures. However, when some regions of the input images contain motion, the HDR image may exhibit obvious alignment defects, which degrades the user experience.
In order to solve the above technical problem, an embodiment of the present disclosure provides an image acquisition method, which may be applied to an electronic device provided with a camera having an HDR function. Fig. 1 is a flowchart illustrating an image acquisition method according to an exemplary embodiment. Referring to fig. 1, the image acquisition method includes steps 11 to 13.
In step 11, an input image is acquired, the input image including a first initial image, a second initial image, and a third initial image.
In this embodiment, the electronic device may acquire an input image including a first initial image, a second initial image, and a third initial image. The first, second, and third initial images may be LDR (Low Dynamic Range) images, and their exposure values (EV) increase sequentially. The exposure value can be calculated according to formula (1):
$$EV = \log_2\frac{N^2}{t} \tag{1}$$
In formula (1), N is the aperture (i.e., the f-number), and t is the exposure time (shutter speed) in seconds.
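As a quick check of formula (1), here is a minimal Python sketch (the function name and the sample aperture/shutter values are ours, for illustration only):

```python
import math

def exposure_value(f_number: float, shutter_seconds: float) -> float:
    """Exposure value per formula (1): EV = log2(N^2 / t)."""
    return math.log2(f_number ** 2 / shutter_seconds)

# Example: f/2.0 at 1/100 s -> EV = log2(4 / 0.01) = log2(400) ≈ 8.64
print(exposure_value(2.0, 1 / 100))
```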
It may be appreciated that the input image may be acquired by a camera of the electronic device, or may be read from a designated location, where the designated location may include a local memory, an external memory, a cache, or a cloud, and a manner of acquiring the input image may be selected according to a specific scene, which is not limited herein.
In step 12, the input image is input to a preset image alignment model, and the image alignment model aligns the first initial image and the second initial image based on the third initial image, so as to obtain a first aligned image and a second aligned image.
In this embodiment, a preset image alignment model is stored in the electronic device. The image alignment model is trained in advance; its input is the three initial images, and, after the subsequent fusion step, the final output of the pipeline is the target image. After the electronic device acquires the input image, it may feed the input image into the preset image alignment model, and the image alignment model aligns the first initial image and the second initial image based on the third initial image to obtain a first aligned image and a second aligned image.
In an example, referring to fig. 2, the image alignment model described above may include a noise extraction module 21, a re-exposure module 22, an optical flow estimation module 23, and an alignment module 24. Specifically:
the noise extraction module 21 is configured to extract noise of the first initial image and noise of the second initial image, respectively, and obtain a first noise image and a second noise image. The purpose of extracting noise in this example is to extract continuous noise data, and the first noise image and the second noise image may be respectively fused onto the subsequent first alignment image and the second alignment image, so as to avoid texture noise caused by discontinuous noise, which is beneficial to improving quality of the subsequent acquired target image.
The re-exposure module 22 is configured to re-expose the first initial image and the second initial image respectively, so as to obtain a first re-exposed image and a second re-exposed image. The goal of the re-exposure in this example is to make the exposure values of the first initial image and the second initial image coincide or even equal with the exposure value of the third initial image (as the reference image). Wherein, the trend to be consistent means that the difference value of the exposure values of the initial image and the reference image is smaller than or equal to a preset difference threshold value (which can be set according to the scene). Re-exposing the initial image in this example may provide the optical flow estimation module with an input image with consistent brightness, which is beneficial to improving accuracy of estimating the optical flow.
An optical flow estimation module 23, configured to perform optical flow estimation on the first re-exposure image and the third initial image, to obtain first optical flow data; and performing optical flow estimation on the second re-exposure image and the third initial image to obtain second optical flow data. In this example, the optical flow estimation is performed on the re-exposure image and the third initial image, so as to find out a moving region containing a moving object between the two images, i.e. a region generating ghost, so as to facilitate subsequent alignment correction.
An alignment module 24 for aligning the first optical flow data with the first initial image to obtain a first aligned image; and aligning the second optical flow data with the second initial image to obtain a second aligned image. In this example, the initial image may be aligned with the moving object between the first (or second) initial image and the reference image based on the optical flow data, that is, the moving object in the first initial image is moved to the position of the moving object in the reference image, so as to avoid that the position of the moving object in the two images affects the subsequent synthesized target image.
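To make the warping step concrete, below is a small sketch that warps an image with a dense optical flow field using OpenCV. It assumes a backward-warping convention (each output pixel samples the source at its position plus the flow vector); the disclosure does not specify the convention, so this is an illustrative choice.

```python
import cv2
import numpy as np

def warp_with_flow(image: np.ndarray, flow: np.ndarray) -> np.ndarray:
    """Warp an (H, W, C) image by a dense (H, W, 2) flow field.

    Backward warping: output(y, x) = image(y + flow_y, x + flow_x).
    """
    h, w = flow.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(image, map_x, map_y, interpolation=cv2.INTER_LINEAR)

# A zero flow field leaves the image unchanged (identity warp).
img = np.random.rand(448, 448, 3).astype(np.float32)
aligned = warp_with_flow(img, np.zeros((448, 448, 2), np.float32))
```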
With continued reference to fig. 2, the working principle of the image alignment model described above is:
(1) The first initial image (EV-), the second initial image (EV+), and the third initial image (REF) are input to the image alignment model.
(2) The input of the noise extraction module 21 is an initial image, and its output is a noise image. The noise extraction module 21 extracts the noise in the first initial image (EV-) to obtain a first noise image (N-), and extracts the noise in the second initial image (EV+) to obtain a second noise image (N+).
(3) The input of the re-exposure module 22 is an initial image, and its output is a re-exposed image. The re-exposure module 22 re-exposes the first initial image (EV-) to obtain a first re-exposed image (EV-(1)), and re-exposes the second initial image (EV+) to obtain a second re-exposed image (EV+(1)).
(4) The inputs of the optical flow estimation module 23 are a re-exposed image and the reference image (i.e., the third initial image), and its output is optical flow data. The optical flow estimation module 23 performs optical flow estimation on the first re-exposed image (EV-(1)) and the reference image (REF) to obtain first optical flow data (OF-), and on the second re-exposed image (EV+(1)) and the reference image to obtain second optical flow data (OF+).
(5) The inputs of the alignment module 24 are the optical flow data, an initial image, and the reference image. The alignment module 24 warps the first initial image according to the first optical flow data to obtain a first initially aligned image, then fuses the first initially aligned image with the first noise image (N-) to obtain a first aligned image (EV-(2)). Likewise, it warps the second initial image according to the second optical flow data to obtain a second initially aligned image, then fuses it with the second noise image (N+) to obtain a second aligned image (EV+(2)).
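To illustrate how the tensors move through steps (1)-(5), here is a minimal Python sketch of the dataflow in which every learned module is replaced by a trivial stand-in (Gaussian filtering for noise extraction, a fixed gain for re-exposure, zero flow for optical flow estimation). All names and the stand-in operations are ours; the actual modules are the neural networks described below.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def extract_noise(img):
    # Stand-in for module 21: noise = image minus a smoothed copy (HWC floats).
    return img - gaussian_filter(img, sigma=(1.0, 1.0, 0.0))

def re_expose(img, gain):
    # Stand-in for module 22: a fixed gain toward the reference exposure;
    # the gains below are arbitrary, assuming images scaled to [0, 1].
    return np.clip(img * gain, 0.0, 1.0)

def estimate_flow(img_a, img_b):
    # Stand-in for module 23: a learned network would output dense flow here.
    return np.zeros(img_a.shape[:2] + (2,), np.float32)

def align(img, flow, noise):
    # Stand-in for module 24: a real module would warp img by flow
    # (zero flow = identity warp), then fuse the noise image back in.
    return img + noise

def image_alignment_model(ev_minus, ev_plus, ref):
    n_minus, n_plus = extract_noise(ev_minus), extract_noise(ev_plus)  # step (2)
    ev_minus_1 = re_expose(ev_minus, 2.0)                              # step (3)
    ev_plus_1 = re_expose(ev_plus, 0.5)
    of_minus = estimate_flow(ev_minus_1, ref)                          # step (4)
    of_plus = estimate_flow(ev_plus_1, ref)
    ev_minus_2 = align(ev_minus, of_minus, n_minus)                    # step (5)
    ev_plus_2 = align(ev_plus, of_plus, n_plus)
    return ev_minus_2, ev_plus_2

ev_minus, ev_plus, ref = (np.random.rand(64, 64, 3) for _ in range(3))
aligned_minus, aligned_plus = image_alignment_model(ev_minus, ev_plus, ref)
```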
In an embodiment, the noise extraction module may include a first neural network implemented with a UNET structure; the re-exposure module may include a second neural network implemented with a residual UNET structure; the optical flow estimation module may include a third neural network implemented with a residual UNET structure. Alternatively, the alignment module may include a fourth neural network implemented with a self-coding (autoencoder) structure. Specific implementations of the UNET, residual UNET, and self-coding structures can be found in the related art and are not repeated here.
It can be appreciated that this embodiment only illustrates one specific implementation of the image alignment model, which, being a lightweight model, is suitable for mobile devices. Of course, a skilled person may select an appropriate neural network for each module according to the specific scenario; for example, each module may use a convolutional neural network, a deep neural network, a generative adversarial network (GAN), or the like, forming various image alignment models applicable to devices such as mobile terminals, tablets, and personal computers. The corresponding structures fall within the scope of the inventive concept of the present disclosure.
In this example, the image alignment model shown in fig. 2 may be trained by the following steps 31-34, see fig. 3:
in step 31, the electronic device may acquire a preset number of training image samples; each group of training image samples comprises a first initial image sample, a second initial image sample, a third initial image sample, a first re-exposure image sample corresponding to the first initial image sample, a second re-exposure image sample corresponding to the second initial image sample and a target image sample; the exposure values of the first initial image sample, the second initial image sample and the third initial image sample are sequentially increased. The exposure values of the first re-exposure image sample and the third initial image sample tend to be identical or equal, and the exposure values of the second re-exposure image sample and the third initial image sample tend to be identical or equal.
It is understood that the above-mentioned preset number may be set according to a specific scenario, for example, 1000 to 100000, which is not limited herein.
Note that each image sample may be a RAW image or an RGB image, and its size may be 448×448 or 512×512; an image sample of a suitable size may be selected according to the specific scene, which is not limited herein.
In step 32, the electronic device may train the re-exposure module with the first re-exposure image sample and the first initial image sample, and with the second re-exposure image sample and the second initial image sample.
For example, the electronic device may input the first initial image sample to the re-exposure module, which performs re-exposure processing on it to obtain a re-exposed intermediate image. The module's output can then be evaluated by calculating the similarity between the re-exposed intermediate image and the first re-exposure image sample.
The similarity between the re-exposed intermediate image and the first re-exposure image sample can be obtained in any of the following ways:
calculating the pixel-value difference of the pixels at each position of the re-exposed intermediate image and the first re-exposure image sample, and taking the average of the pixel-value differences over all pixels as the similarity;
or calculating the pixel-value differences as above and taking their maximum or median over all pixels as the similarity;
or calculating the pixel-value differences as above and taking their mean squared error as the similarity;
or extracting feature vectors of the re-exposed intermediate image and the first re-exposure image sample separately, and taking the cosine of the angle between the two feature vectors as the similarity.
It can be appreciated that, in this example, only a few ways of obtaining the similarity are described, and a skilled person may select an appropriate way to obtain the similarity according to a specific scenario, and the corresponding scheme falls within the protection scope of the present disclosure.
Then, the similarity can be compared with a preset similarity threshold. When the similarity is greater than or equal to the similarity threshold, training can stop, i.e., the re-exposure module has completed pre-training; when the similarity is smaller than the similarity threshold, the training process is repeated.
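A sketch of these similarity measures in Python (function names are ours). Note that the first three are difference-based, so smaller values mean more similar; a practical stopping test for them would compare against a small threshold from below, while the cosine measure grows with similarity and matches the stopping rule as stated:

```python
import numpy as np

def mean_abs_diff(a, b):
    # Average of pixel-value differences over all same-position pixels.
    return float(np.mean(np.abs(a - b)))

def max_or_median_diff(a, b, use_max=True):
    d = np.abs(a - b)
    return float(d.max() if use_max else np.median(d))

def mse(a, b):
    # Mean squared error of the pixel-value differences.
    return float(np.mean((a - b) ** 2))

def cosine_similarity(a, b):
    # The disclosure leaves feature extraction open; flattening the images
    # into raw pixel vectors is our simplest stand-in.
    va, vb = a.ravel(), b.ravel()
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

def should_stop(intermediate, target, threshold=0.999):
    # Stopping rule as stated: stop once similarity >= threshold
    # (the threshold value here is our arbitrary example).
    return cosine_similarity(intermediate, target) >= threshold
```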
It should be noted that only the process of training the re-exposure module using the first initial image sample and the first re-exposure image sample is described in the above example. It is understood that the process of training the re-exposure module using the second initial image sample and the second re-exposure image sample is the same as the above process, and will not be described herein.
In this embodiment, the electronic device may train the re-exposure module with the first initial image samples until training completes, and then with the second initial image samples. Alternatively, the first initial image samples and the second initial image samples may be mixed and drawn at random to train the re-exposure module. Alternatively, the electronic device may train one re-exposure sub-module with the first initial image samples and another re-exposure sub-module with the second initial image samples, the two sub-modules together constituting the re-exposure module; in this case the re-exposure quality of the re-exposure module is higher. The skilled person can select a suitable way of training the re-exposure module according to the specific scene, and the corresponding schemes fall within the protection scope of the disclosure.
In step 33, the electronic device may transplant the model parameters of the re-exposure module to the re-exposure module in the image alignment model and fix those parameters. Transplanting and fixing the parameters in this step guarantees the re-exposure quality of the re-exposure module, so that the re-exposed image and the reference image have exposure values that approach or equal each other, providing accurate input data for the subsequent optical flow estimation.
In step 34, the electronic device may train the image alignment model using the first initial image sample, the second initial image sample, the third initial image sample, and the target image sample. For example, the electronic device may take the intermediate aligned image corresponding to the first initial image sample and the intermediate aligned image corresponding to the second initial image sample output by the image alignment model, fuse the two intermediate aligned images with the third initial image sample to obtain an intermediate target image, then calculate the similarity between the intermediate target image and the target image sample and decide whether to stop training according to that similarity. For how to calculate the similarity and use it to decide when to stop, refer to the training of the re-exposure module in step 32, which is not repeated here.
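Steps 33 and 34 can be sketched in PyTorch style (the module attributes, the model's call signature, the L1 loss, and the optimizer settings are our assumptions; the disclosure only specifies transplanting and fixing the re-exposure parameters, then training against the target image sample):

```python
import torch

def transplant_and_freeze(pretrained_re_exposure, alignment_model):
    # Step 33: copy the pre-trained re-exposure weights in, then fix them.
    alignment_model.re_exposure.load_state_dict(pretrained_re_exposure.state_dict())
    for p in alignment_model.re_exposure.parameters():
        p.requires_grad = False

def train_alignment(alignment_model, loader, epochs=10):
    # Step 34: optimize the remaining modules; the frozen re-exposure
    # parameters are excluded automatically via requires_grad.
    params = [p for p in alignment_model.parameters() if p.requires_grad]
    opt = torch.optim.Adam(params, lr=1e-4)
    loss_fn = torch.nn.L1Loss()  # one reasonable choice; the disclosure
                                 # instead stops on a similarity threshold
    for _ in range(epochs):
        for ev_minus, ev_plus, ref, target in loader:
            aligned_minus, aligned_plus = alignment_model(ev_minus, ev_plus, ref)
            fused = aligned_minus + aligned_plus + ref  # per-pixel sum (step 13)
            loss = loss_fn(fused, target)
            opt.zero_grad()
            loss.backward()
            opt.step()
```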
In this way, in the embodiment, the training mode of combining the pre-training of the re-exposure module and the training of the image alignment model can improve the image quality and the robustness of the output target image of the image alignment model.
In another example, referring to fig. 4, the image alignment model described above may include a noise extraction module 41, a re-exposure module 42, an optical flow estimation module 43, and an alignment module 44.
The noise extraction module 41 is configured to extract the noise of the first initial image, obtaining a first noise image and a first filtered image, and to extract the noise of the second initial image, obtaining a second noise image and a second filtered image. The first filtered image is the first initial image with its noise removed, and the second filtered image is the second initial image with its noise removed. As before, the purpose of extracting noise is to obtain continuous noise data: the first noise image and the second noise image can later be fused back onto the first aligned image and the second aligned image, respectively, avoiding texture artifacts caused by discontinuous noise and helping improve the quality of the target image. Further, in this example, the first filtered image and the second filtered image are used in place of the first initial image and the second initial image in the subsequent processing; since the filtered images are of higher quality than the initial images, the quality of the target image obtained later is further improved.
The re-exposure module 42 is configured to re-expose the first filtered image and the second filtered image, obtaining a first re-exposed image and a second re-exposed image. The goal of re-exposure is to make the exposure values of the first filtered image and the second filtered image approach, or even equal, the exposure value of the third initial image (the reference image); "approach" again means that the difference between the exposure values of the filtered image and the reference image is less than or equal to a preset difference threshold (settable per scene). This provides the optical flow estimation module 43 with input images of consistent brightness, which improves the accuracy of the optical flow estimation.
The optical flow estimation module 43 is configured to perform optical flow estimation on the first re-exposed image and the third initial image to obtain first optical flow data, and on the second re-exposed image and the third initial image to obtain second optical flow data. As before, this locates the motion region containing a moving object between the two images, i.e., the region that would produce ghosting, to facilitate subsequent alignment correction.
The alignment module 44 is configured to warp the first filtered image according to the first optical flow data to obtain a first aligned image, and to warp the second filtered image according to the second optical flow data to obtain a second aligned image. That is, the moving object in the first (or second) filtered image is moved to its position in the reference image, so that the differing object positions in the two images do not corrupt the subsequently synthesized target image.
With continued reference to fig. 4, the working principle of the image alignment model described above is:
(1) The first initial image (EV-), the second initial image (EV+), and the third initial image (REF) are input to the image alignment model.
(2) The input of the noise extraction module 41 is an initial image, and its outputs are a noise image and a filtered image. The noise extraction module 41 extracts the noise in the first initial image (EV-) to obtain a first noise image (N-) and a first filtered image (F-), and extracts the noise in the second initial image (EV+) to obtain a second noise image (N+) and a second filtered image (F+).
(3) The input of the re-exposure module 42 is a filtered image, and its output is a re-exposed image. The re-exposure module 42 re-exposes the first filtered image (F-) to obtain a first re-exposed image (EV-(1)), and re-exposes the second filtered image (F+) to obtain a second re-exposed image (EV+(1)).
(4) The inputs of the optical flow estimation module 43 are a re-exposed image and the reference image (i.e., the third initial image), and its output is optical flow data. The optical flow estimation module 43 performs optical flow estimation on the first re-exposed image (EV-(1)) and the reference image to obtain first optical flow data (OF-), and on the second re-exposed image (EV+(1)) and the reference image to obtain second optical flow data (OF+).
(5) The inputs of the alignment module 44 are the optical flow data, a filtered image, and the reference image. The alignment module 44 warps the first filtered image according to the first optical flow data to obtain a first initially aligned image, then fuses it with the first noise image (N-) to obtain a first aligned image (EV-(2)). Likewise, it warps the second filtered image according to the second optical flow data to obtain a second initially aligned image, then fuses it with the second noise image (N+) to obtain a second aligned image (EV+(2)).
In this example, the neural networks adopted by the modules of the image alignment model illustrated in fig. 4 may be the same as those of the image alignment model illustrated in fig. 2; for details, refer to the example of fig. 2, which is not repeated here.
In this example, the image alignment model shown in fig. 4 may be trained through steps 51-54, referring to fig. 5:
in step 51, the electronic device may acquire a preset number of sets of training image samples; each group of training image samples comprises a first initial image sample, a second initial image sample, a third initial image sample, a first filtering image sample corresponding to the first initial image sample, a second filtering image sample corresponding to the second initial image sample, a first re-exposure image sample corresponding to the first initial image sample, a second re-exposure image sample corresponding to the second initial image sample and a target image sample; the exposure values of the first initial image sample, the second initial image sample and the third initial image sample are sequentially increased. The description of the common image samples in step 51 and step 31 may be referred to in step 31, and will not be repeated here. Step 51 differs from step 31 in that a first filtered image sample and a second filtered image sample are used. The first filtered image sample can be obtained by calculating the average value of the pixel values of a plurality of initial images, and the denoising effect can be achieved by calculating the average value of the pixel values. The second filtered image sample may be obtained by using the first filtered image sample, which is not described herein.
In step 52, the electronic device may train the re-exposure module with the first re-exposure image sample and the first filtered image sample, and with the second re-exposure image sample and the second filtered image sample. The manner of training the re-exposure module in step 52 parallels that of step 32; refer to the content of step 32, which is not repeated here.
In step 53, the electronic device may migrate the model parameters of the re-exposure module to the re-exposure module in the image alignment model and fix the model parameters. The mode of transplanting the model parameters in step 53 is the same as that of transplanting the model parameters in step 33, and the specific content is referred to step 33, and will not be described here again.
In step 54, the electronic device may train the image alignment model using the first initial image sample, the second initial image sample, the third initial image sample, and the target image sample. The manner of training the image alignment model in step 54 may refer to the manner of training the image alignment model in step 34, specifically refer to the content of step 34, which is not described herein. In this way, in the embodiment, the training mode of combining the pre-training of the re-exposure module and the training of the image alignment model can improve the image quality and the robustness of the output target image of the image alignment model. In addition, in the embodiment, the quality of the output target image can be further improved by taking the filtered image corresponding to the initial image as the input of the re-exposure module.
In step 13, the first aligned image, the second aligned image and the third initial image are fused to obtain a target image.
In this embodiment, the electronic device may take, for each position, the sum of the pixel values of the pixels at that position in the first aligned image, the second aligned image, and the third initial image, and use the sum as the pixel value of the corresponding pixel in the target image. The target image is an HDR image.
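Taken literally, the fusion in step 13 is a per-pixel sum; below is a minimal sketch under that reading (float accumulation is our assumption, and any normalization or tone mapping that may follow is outside the step as described):

```python
import numpy as np

def fuse(aligned_minus, aligned_plus, ref):
    # Per-pixel sum of the two aligned images and the third initial image,
    # accumulated in float to avoid integer overflow.
    return (aligned_minus.astype(np.float32)
            + aligned_plus.astype(np.float32)
            + ref.astype(np.float32))
```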
The solution provided by the embodiments of the present disclosure can acquire an input image comprising a first initial image, a second initial image, and a third initial image; input the input image into a preset image alignment model, which aligns the first initial image and the second initial image, respectively, based on the third initial image to obtain a first aligned image and a second aligned image; and fuse the first aligned image, the second aligned image, and the third initial image to obtain a target image. In this way, the three initial images are aligned first and then fused, yielding a high-quality target image without alignment defects and improving the user experience.
On the basis of the image acquisition method provided by the embodiment of the present disclosure, the embodiment of the present disclosure further provides an image acquisition apparatus, referring to fig. 6, the apparatus includes:
an input image acquisition unit 61 for acquiring an input image including a first initial image, a second initial image, and a third initial image;
an alignment image acquisition unit 62 configured to input the input image to a preset image alignment model, and align the first initial image and the second initial image based on the third initial image by the image alignment model, respectively, to obtain a first alignment image and a second alignment image;
a target image obtaining unit 63, configured to fuse the first aligned image, the second aligned image, and the third initial image to obtain a target image.
In an embodiment, the image alignment model includes a noise extraction module, a re-exposure module, an optical flow estimation module, and an alignment module;
the noise extraction module is used for respectively extracting the noise of the first initial image and the noise of the second initial image to obtain a first noise image and a second noise image;
the re-exposure module is used for respectively carrying out re-exposure processing on the first initial image and the second initial image to obtain a first re-exposure image and a second re-exposure image;
The optical flow estimation module is used for carrying out optical flow estimation on the first re-exposure image and the third initial image to obtain first optical flow data; performing optical flow estimation on the second re-exposure image and the third initial image to obtain second optical flow data;
the alignment module is used for aligning the first optical flow data with the first initial image and fusing a first noise image to obtain a first aligned image; and aligning the second optical flow data with the second initial image and fusing the second noise image to obtain a second aligned image.
In an embodiment, the apparatus further comprises a model training unit for training the image alignment model; the model training unit includes:
the image sample acquisition subunit is used for acquiring a preset number of training image samples; each group of training image samples comprises a first initial image sample, a second initial image sample, a third initial image sample, a first re-exposure image sample corresponding to the first initial image sample, a second re-exposure image sample corresponding to the second initial image sample and a target image sample; the exposure values of the first initial image sample, the second initial image sample and the third initial image sample are sequentially increased;
A re-exposure module training subunit for training the re-exposure module with the first re-exposure image sample and the first initial image sample and the second re-exposure image sample and the second initial image sample;
a model parameter transplanting subunit, configured to transplant model parameters of the re-exposure module to a re-exposure module in the image alignment model, and fix the model parameters;
an alignment model training subunit for training the image alignment model using the first initial image sample, the second initial image sample, the third initial image sample, and the target image sample.
In an embodiment, the image alignment model includes a noise extraction module, a re-exposure module, an optical flow estimation module, and an alignment module;
the noise extraction module is used for respectively extracting the noise of the first initial image and the noise of the second initial image to obtain a first noise image and a first filtered image, as well as a second noise image and a second filtered image;
the re-exposure module is used for respectively carrying out re-exposure processing on the first filtered image and the second filtered image to obtain a first re-exposure image and a second re-exposure image;
the optical flow estimation module is used for carrying out optical flow estimation on the first re-exposure image and the third initial image to obtain first optical flow data; performing optical flow estimation on the second re-exposure image and the third initial image to obtain second optical flow data;
the alignment module is used for aligning the first optical flow data with the first filtered image to obtain a first aligned image; and aligning the second optical flow data with the second filtered image to obtain a second aligned image.
In an embodiment, the apparatus further comprises a model training unit for training the image alignment model; the model training unit includes:
the image sample acquisition subunit is used for acquiring a preset number of training image samples; each group of training image samples comprises a first initial image sample, a second initial image sample, a third initial image sample, a first filtering image sample corresponding to the first initial image sample, a second filtering image sample corresponding to the second initial image sample, a first re-exposure image sample corresponding to the first initial image sample, a second re-exposure image sample corresponding to the second initial image sample and a target image sample; the exposure values of the first initial image sample, the second initial image sample and the third initial image sample are sequentially increased;
A re-exposure module training subunit for training the re-exposure module with the first filtered image sample and the first re-exposure image sample and the second filtered image sample and the second re-exposure image sample;
a model parameter transplanting subunit, configured to transplant model parameters of the re-exposure module to a re-exposure module in the image alignment model, and fix the model parameters;
an alignment model training subunit for training the image alignment model using the first initial image sample, the second initial image sample, the third initial image sample, and the target image sample.
In an embodiment, the noise extraction module includes a first neural network, where the first neural network is implemented with a UNET structure;
the re-exposure module comprises a second neural network, and the second neural network is realized by adopting a residual UNET structure;
the optical flow estimation module comprises a third neural network, and the third neural network is realized by adopting a residual UNET structure;
or alternatively,
the alignment module comprises a fourth neural network, and the fourth neural network is realized by adopting a self-coding structure.
It should be noted that, the device shown in this embodiment matches the content of the method embodiment, and reference may be made to the content of the method embodiment, which is not described herein.
Fig. 7 is a block diagram of an electronic device, according to an example embodiment. For example, the electronic device 700 may be a smart phone, a computer, a digital broadcast terminal, a tablet device, a medical device, an exercise device, a personal digital assistant, or the like.
Referring to fig. 7, an electronic device 700 may include one or more of the following components: a processing component 702, a memory 704, a power component 706, a multimedia component 708, an audio component 710, an input/output (I/O) interface 712, a sensor component 714, a communication component 716, and an image acquisition component 718.
The processing component 702 generally controls overall operation of the electronic device 700, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 702 may include one or more processors 720 to execute computer programs. Further, the processing component 702 can include one or more modules that facilitate interaction between the processing component 702 and other components. For example, the processing component 702 may include a multimedia module to facilitate interaction between the multimedia component 708 and the processing component 702.
The memory 704 is configured to store various types of data to support operations at the electronic device 700. Examples of such data include computer programs, contact data, phonebook data, messages, pictures, videos, etc. for any application or method operating on the electronic device 700. The memory 704 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power component 706 provides power to the various components of the electronic device 700. The power component 706 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 700. The power component 706 may also include a power chip, and a controller may communicate with the power chip to control the power chip to turn a switching device on or off, so that the battery does or does not supply power to the motherboard circuit.
The multimedia component 708 includes a screen that provides an output interface between the electronic device 700 and the target object. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input information from a target object. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or sliding action, but also the duration and pressure associated with the touch or sliding operation.
The audio component 710 is configured to output and/or input audio signals. For example, the audio component 710 includes a microphone (MIC) configured to receive external audio signals when the electronic device 700 is in an operational mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signals may be further stored in the memory 704 or transmitted via the communication component 716. In some embodiments, the audio component 710 further includes a speaker for outputting audio signals.
The I/O interface 712 provides an interface between the processing component 702 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc.
The sensor assembly 714 includes one or more sensors for providing status assessments of various aspects of the electronic device 700. For example, the sensor assembly 714 may detect the on/off state of the electronic device 700 and the relative positioning of components (such as the display and keypad of the electronic device 700), and may also detect a change in position of the electronic device 700 or one of its components, the presence or absence of contact between a target object and the electronic device 700, the orientation or acceleration/deceleration of the electronic device 700, and a change in temperature of the electronic device 700. In this example, the sensor assembly 714 may include a magnetic force sensor, a gyroscope, and a magnetic field sensor, wherein the magnetic field sensor includes at least one of: a Hall sensor, a thin-film magnetoresistive sensor, or a magnetic liquid acceleration sensor.
The communication component 716 is configured to facilitate wired or wireless communication between the electronic device 700 and other devices. The electronic device 700 may access a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G, 5G, or a combination thereof. In one exemplary embodiment, the communication component 716 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 716 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 700 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements.
In an exemplary embodiment, a chip for performing the above-described image acquisition method is also provided. The chip may be a CPU (Central Processing Unit) chip, a GPU (Graphics Processing Unit) chip, an SoC (System on Chip) chip, an ISP (Image Signal Processor) chip, or the like, or may be an acceleration chip dedicated to artificial intelligence technology, such as an AI (Artificial Intelligence) accelerator.
In an exemplary embodiment, a non-transitory computer-readable storage medium is also provided, such as a memory including an executable computer program that can be executed by a processor to implement the above-described image acquisition method. The readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (15)

1. An image acquisition method, comprising:
acquiring an input image, wherein the input image comprises a first initial image, a second initial image and a third initial image;
inputting the input image into a preset image alignment model, and respectively aligning the first initial image and the second initial image based on the third initial image by the image alignment model to obtain a first aligned image and a second aligned image;
and fusing the first aligned image, the second aligned image, and the third initial image to obtain a target image.
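By way of illustration, the flow of claim 1 can be sketched in Python as follows. The `align` stand-in and the exposure-weighted fusion are assumptions made only to keep the sketch runnable; the claim does not prescribe either.

```python
import numpy as np

def align(reference: np.ndarray, image: np.ndarray) -> np.ndarray:
    # Stand-in for the preset image alignment model: a real implementation
    # would warp `image` toward `reference`; here the image is returned
    # unchanged so the sketch stays self-contained.
    return image

def acquire_target_image(first, second, third):
    # Align the first and second initial images to the third initial image,
    # then fuse all three into the target image (claim 1).
    aligned_first = align(third, first)
    aligned_second = align(third, second)
    # Hypothetical fusion: a fixed weighted average over the three frames.
    stack = np.stack([aligned_first, aligned_second, third]).astype(np.float32)
    weights = np.array([0.25, 0.35, 0.4], dtype=np.float32)[:, None, None, None]
    return (stack * weights).sum(axis=0).astype(np.uint8)
```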
2. The method of claim 1, wherein the image alignment model comprises a noise extraction module, a re-exposure module, an optical flow estimation module, and an alignment module;
the noise extraction module is used for respectively extracting the noise of the first initial image and the noise of the second initial image to obtain a first noise image and a second noise image;
the re-exposure module is used for respectively carrying out re-exposure processing on the first initial image and the second initial image to obtain a first re-exposure image and a second re-exposure image;
the optical flow estimation module is used for carrying out optical flow estimation on the first re-exposure image and the third initial image to obtain first optical flow data; performing optical flow estimation on the second re-exposure image and the third initial image to obtain second optical flow data;
the alignment module is used for aligning the first initial image according to the first optical flow data and fusing the first noise image to obtain a first aligned image; and aligning the second initial image according to the second optical flow data and fusing the second noise image to obtain a second aligned image.
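The data flow of claim 2 can be illustrated with the following sketch, in which the four modules are passed in as callables; their internals, and all function names, are assumptions for illustration rather than details taken from the patent.

```python
def alignment_model_forward(first, second, third,
                            noise_net, reexpose_net, flow_net, align_net):
    noise_1 = noise_net(first)            # noise extraction module
    noise_2 = noise_net(second)
    reexp_1 = reexpose_net(first)         # re-exposure module
    reexp_2 = reexpose_net(second)
    flow_1 = flow_net(reexp_1, third)     # optical flow estimation module
    flow_2 = flow_net(reexp_2, third)
    # Alignment module: align each initial image by its optical flow data
    # and fuse the corresponding noise image back in.
    aligned_1 = align_net(flow_1, first, noise_1)
    aligned_2 = align_net(flow_2, second, noise_2)
    return aligned_1, aligned_2
```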
3. The method according to claim 2, further comprising a step of training the image alignment model, the step comprising:
acquiring a preset number of training image samples; each group of training image samples comprises a first initial image sample, a second initial image sample, a third initial image sample, a first re-exposure image sample corresponding to the first initial image sample, a second re-exposure image sample corresponding to the second initial image sample and a target image sample; the exposure values of the first initial image sample, the second initial image sample and the third initial image sample are sequentially increased;
training the re-exposure module with the first initial image sample and the first re-exposure image sample, and with the second initial image sample and the second re-exposure image sample;
transferring the model parameters of the trained re-exposure module to the re-exposure module in the image alignment model, and fixing the model parameters; and
training the image alignment model using the first initial image sample, the second initial image sample, the third initial image sample, and the target image sample.
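A condensed PyTorch sketch of this two-stage training is given below. The attribute name `full_model.reexpose` and the L1 objectives are assumptions for illustration only; the claim specifies just which samples feed each stage and that the transferred parameters are fixed.

```python
import torch
import torch.nn.functional as F

def train_two_stage(reexpose_net, full_model, stage1_data, stage2_data,
                    epochs=10, lr=1e-4):
    # Stage 1: train the re-exposure module on (initial image, re-exposure
    # image) sample pairs drawn from the training image samples.
    opt = torch.optim.Adam(reexpose_net.parameters(), lr=lr)
    for _ in range(epochs):
        for initial, reexposed_gt in stage1_data:
            loss = F.l1_loss(reexpose_net(initial), reexposed_gt)
            opt.zero_grad()
            loss.backward()
            opt.step()

    # Stage 2: copy the trained parameters into the alignment model's
    # re-exposure module, freeze them, and train the remaining modules with
    # the three initial image samples and the target image sample.
    full_model.reexpose.load_state_dict(reexpose_net.state_dict())
    full_model.reexpose.requires_grad_(False)
    trainable = [p for p in full_model.parameters() if p.requires_grad]
    opt = torch.optim.Adam(trainable, lr=lr)
    for _ in range(epochs):
        for (first, second, third), target in stage2_data:
            aligned_1, aligned_2 = full_model(first, second, third)
            # Hypothetical objective: the patent does not disclose the loss.
            loss = F.l1_loss((aligned_1 + aligned_2 + third) / 3, target)
            opt.zero_grad()
            loss.backward()
            opt.step()
```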
4. The method of claim 1, wherein the image alignment model comprises a noise extraction module, a re-exposure module, an optical flow estimation module, and an alignment module;
the noise extraction module is used for respectively extracting the noise of the first initial image and the noise of the second initial image to obtain a first noise image and a first filtered image, and a second noise image and a second filtered image;
the re-exposure module is used for respectively performing re-exposure processing on the first filtered image and the second filtered image to obtain a first re-exposure image and a second re-exposure image;
the optical flow estimation module is used for carrying out optical flow estimation on the first re-exposure image and the third initial image to obtain first optical flow data; performing optical flow estimation on the second re-exposure image and the third initial image to obtain second optical flow data;
the alignment module is used for aligning the first filtered image according to the first optical flow data to obtain a first aligned image; and aligning the second filtered image according to the second optical flow data to obtain a second aligned image.
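Relative to the sketch given after claim 2, the claim 4 variant changes only where the noise extraction output goes, as sketched below; the module callables remain stand-ins, as before.

```python
def alignment_model_forward_v2(first, second, third,
                               noise_net, reexpose_net, flow_net, align_net):
    # Noise extraction now yields both a noise image and a filtered image.
    noise_1, filtered_1 = noise_net(first)
    noise_2, filtered_2 = noise_net(second)
    # Re-exposure and optical flow operate on the filtered images.
    flow_1 = flow_net(reexpose_net(filtered_1), third)
    flow_2 = flow_net(reexpose_net(filtered_2), third)
    # Alignment uses the filtered images; the extracted noise is not fused
    # back in this variant.
    return align_net(flow_1, filtered_1), align_net(flow_2, filtered_2)
```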
5. The method according to claim 4, further comprising a step of training the image alignment model, the step comprising:
acquiring a preset number of training image samples; each group of training image samples comprises a first initial image sample, a second initial image sample, a third initial image sample, a first filtered image sample corresponding to the first initial image sample, a second filtered image sample corresponding to the second initial image sample, a first re-exposure image sample corresponding to the first initial image sample, a second re-exposure image sample corresponding to the second initial image sample, and a target image sample; the exposure values of the first initial image sample, the second initial image sample, and the third initial image sample are sequentially increased;
training the re-exposure module with the first filtered image sample and the first re-exposure image sample, and with the second filtered image sample and the second re-exposure image sample;
transferring the model parameters of the trained re-exposure module to the re-exposure module in the image alignment model, and fixing the model parameters; and
training the image alignment model using the first initial image sample, the second initial image sample, the third initial image sample, and the target image sample.
6. The method according to claim 2 or 4, wherein:
the noise extraction module comprises a first neural network implemented using a UNET structure;
the re-exposure module comprises a second neural network implemented using a residual UNET structure;
the optical flow estimation module comprises a third neural network implemented using a residual UNET structure;
or,
the alignment module comprises a fourth neural network implemented using an autoencoder structure.
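To make the named structures concrete, below is a minimal UNET-style module in PyTorch with a single skip connection and an optional residual output. Depth and channel counts are arbitrary choices for the sketch, not values disclosed in the patent; the residual path assumes even spatial dimensions and matching input/output channels.

```python
import torch
from torch import nn

class TinyUNet(nn.Module):
    """Minimal UNET-style encoder/decoder with a single skip connection."""
    def __init__(self, in_ch=3, out_ch=3, residual=False):
        super().__init__()
        self.residual = residual  # True approximates the "residual UNET" variant
        self.enc = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU())
        self.down = nn.Conv2d(16, 32, 3, stride=2, padding=1)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = nn.Conv2d(32, out_ch, 3, padding=1)  # 16 skip + 16 upsampled

    def forward(self, x):
        e = self.enc(x)                           # encoder features
        d = self.up(torch.relu(self.down(e)))     # bottleneck, then upsample
        out = self.dec(torch.cat([e, d], dim=1))  # skip connection
        return out + x if self.residual else out  # residual variant (out_ch == in_ch)
```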
7. An image acquisition apparatus, comprising:
an input image acquisition unit configured to acquire an input image including a first initial image, a second initial image, and a third initial image;
an alignment image acquisition unit, configured to input the input image into a preset image alignment model, and align the first initial image and the second initial image based on the third initial image by the image alignment model, so as to obtain a first alignment image and a second alignment image;
and a target image acquisition unit, configured to fuse the first aligned image, the second aligned image, and the third initial image to obtain a target image.
8. The apparatus of claim 7, wherein the image alignment model comprises a noise extraction module, a re-exposure module, an optical flow estimation module, and an alignment module;
the noise extraction module is used for respectively extracting the noise of the first initial image and the noise of the second initial image to obtain a first noise image and a second noise image;
the re-exposure module is used for respectively carrying out re-exposure processing on the first initial image and the second initial image to obtain a first re-exposure image and a second re-exposure image;
the optical flow estimation module is used for carrying out optical flow estimation on the first re-exposure image and the third initial image to obtain first optical flow data; performing optical flow estimation on the second re-exposure image and the third initial image to obtain second optical flow data;
the alignment module is used for aligning the first initial image according to the first optical flow data and fusing the first noise image to obtain a first aligned image; and aligning the second initial image according to the second optical flow data and fusing the second noise image to obtain a second aligned image.
9. The apparatus of claim 8, further comprising a model training unit for training the image alignment model; the model training unit includes:
an image sample acquisition subunit for acquiring a preset number of training image samples; each group of training image samples comprises a first initial image sample, a second initial image sample, a third initial image sample, a first re-exposure image sample corresponding to the first initial image sample, a second re-exposure image sample corresponding to the second initial image sample, and a target image sample; the exposure values of the first initial image sample, the second initial image sample, and the third initial image sample are sequentially increased;
a re-exposure module training subunit for training the re-exposure module with the first initial image sample and the first re-exposure image sample, and with the second initial image sample and the second re-exposure image sample;
a model parameter transfer subunit, configured to transfer the model parameters of the re-exposure module to the re-exposure module in the image alignment model and fix the model parameters; and
an alignment model training subunit for training the image alignment model using the first initial image sample, the second initial image sample, the third initial image sample, and the target image sample.
10. The apparatus of claim 7, wherein the image alignment model comprises a noise extraction module, a re-exposure module, an optical flow estimation module, and an alignment module;
the noise extraction module is used for respectively extracting the noise of the first initial image and the noise of the second initial image to obtain a first noise image and a first filtered image, and a second noise image and a second filtered image;
the re-exposure module is used for respectively performing re-exposure processing on the first filtered image and the second filtered image to obtain a first re-exposure image and a second re-exposure image;
the optical flow estimation module is used for carrying out optical flow estimation on the first re-exposure image and the third initial image to obtain first optical flow data; performing optical flow estimation on the second re-exposure image and the third initial image to obtain second optical flow data;
the alignment module is used for aligning the first filtered image according to the first optical flow data to obtain a first aligned image; and aligning the second filtered image according to the second optical flow data to obtain a second aligned image.
11. The apparatus of claim 10, further comprising a model training unit for training the image alignment model; the model training unit includes:
an image sample acquisition subunit for acquiring a preset number of training image samples; each group of training image samples comprises a first initial image sample, a second initial image sample, a third initial image sample, a first filtered image sample corresponding to the first initial image sample, a second filtered image sample corresponding to the second initial image sample, a first re-exposure image sample corresponding to the first initial image sample, a second re-exposure image sample corresponding to the second initial image sample, and a target image sample; the exposure values of the first initial image sample, the second initial image sample, and the third initial image sample are sequentially increased;
a re-exposure module training subunit for training the re-exposure module with the first filtered image sample and the first re-exposure image sample, and with the second filtered image sample and the second re-exposure image sample;
a model parameter transfer subunit, configured to transfer the model parameters of the re-exposure module to the re-exposure module in the image alignment model and fix the model parameters; and
an alignment model training subunit for training the image alignment model using the first initial image sample, the second initial image sample, the third initial image sample, and the target image sample.
12. The apparatus according to claim 8 or 10, wherein:
the noise extraction module comprises a first neural network implemented using a UNET structure;
the re-exposure module comprises a second neural network implemented using a residual UNET structure;
the optical flow estimation module comprises a third neural network implemented using a residual UNET structure;
or,
the alignment module comprises a fourth neural network implemented using an autoencoder structure.
13. An electronic device, comprising: a memory and a processor;
the memory is used for storing a computer program executable by the processor;
the processor is configured to execute a computer program in the memory to implement the method of any one of claims 1-6.
14. A chip for performing the image acquisition method of any one of claims 1 to 6.
15. A non-transitory computer-readable storage medium, characterized in that, when an executable computer program in the storage medium is executed by a processor, the image acquisition method according to any one of claims 1 to 6 is implemented.
CN202211444613.5A 2022-11-18 2022-11-18 Image acquisition method and device, electronic equipment, medium and chip Active CN116347248B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211444613.5A CN116347248B (en) 2022-11-18 2022-11-18 Image acquisition method and device, electronic equipment, medium and chip

Publications (2)

Publication Number Publication Date
CN116347248A true CN116347248A (en) 2023-06-27
CN116347248B CN116347248B (en) 2024-02-06

Family

ID=86888174

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211444613.5A Active CN116347248B (en) 2022-11-18 2022-11-18 Image acquisition method and device, electronic equipment, medium and chip

Country Status (1)

Country Link
CN (1) CN116347248B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111275653A (en) * 2020-02-28 2020-06-12 北京松果电子有限公司 Image denoising method and device
CN113454981A (en) * 2019-02-18 2021-09-28 三星电子株式会社 Techniques for multi-exposure fusion of multiple image frames based on convolutional neural network and for deblurring multiple image frames
CN113497901A (en) * 2020-04-01 2021-10-12 三星电子株式会社 System and method for motion warping using multiple exposure frames
US20210327031A1 (en) * 2020-04-15 2021-10-21 Tsinghua Shenzhen International Graduate School Video blind denoising method based on deep learning, computer device and computer-readable storage medium
WO2022141445A1 (en) * 2020-12-31 2022-07-07 华为技术有限公司 Image processing method and device

Also Published As

Publication number Publication date
CN116347248B (en) 2024-02-06

Similar Documents

Publication Publication Date Title
CN109345485B (en) Image enhancement method and device, electronic equipment and storage medium
CN106408603B (en) Shooting method and device
EP3200125A1 (en) Fingerprint template input method and device
CN107967459B (en) Convolution processing method, convolution processing device and storage medium
CN106557759B (en) Signpost information acquisition method and device
CN109360197B (en) Image processing method and device, electronic equipment and storage medium
CN107730448B (en) Beautifying method and device based on image processing
CN109784164B (en) Foreground identification method and device, electronic equipment and storage medium
CN107958223B (en) Face recognition method and device, mobile equipment and computer readable storage medium
CN105528765B (en) Method and device for processing image
CN112188091B (en) Face information identification method and device, electronic equipment and storage medium
CN109509195B (en) Foreground processing method and device, electronic equipment and storage medium
CN114096994A (en) Image alignment method and device, electronic equipment and storage medium
CN112449085A (en) Image processing method and device, electronic equipment and readable storage medium
CN112200040A (en) Occlusion image detection method, device and medium
CN113411498A (en) Image shooting method, mobile terminal and storage medium
CN107424130B (en) Picture beautifying method and device
CN109784327B (en) Boundary box determining method and device, electronic equipment and storage medium
CN105955821B (en) Pre-reading method and device
CN109003272B (en) Image processing method, device and system
CN107730443B (en) Image processing method and device and user equipment
CN107992894B (en) Image recognition method, image recognition device and computer-readable storage medium
CN107239758B (en) Method and device for positioning key points of human face
CN105635573B (en) Camera visual angle regulating method and device
CN106469446B (en) Depth image segmentation method and segmentation device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant