CN114219725A - Image processing method, terminal equipment and computer readable storage medium - Google Patents

Image processing method, terminal equipment and computer readable storage medium

Info

Publication number
CN114219725A
Authority
CN
China
Prior art keywords
image
network
feature
different scales
image processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111415647.7A
Other languages
Chinese (zh)
Inventor
张锲石
程俊
欧阳祖薇
任子良
高向阳
康宇航
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN202111415647.7A
Publication of CN114219725A
Legal status: Pending

Classifications

    • G06T5/77
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06T5/90
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10024 - Color image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning

Abstract

The application is applicable to the technical field of image processing and provides an image processing method, a terminal device and a computer readable storage medium. The image processing method includes: inputting a low-illumination image into a trained continuous update network to extract local features and global features and obtain feature images of different scales; and inputting the feature images of different scales into a trained color enhancement network for processing to obtain a target image. Because the continuous update network extracts feature images of different scales, both global and local information is captured, which ensures that the restored normal-illumination image does not lose detail; the color enhancement network then enhances the color and texture of the restored image, guaranteeing image quality and solving the problem that existing methods for brightening images shot under low-light conditions produce poor results.

Description

Image processing method, terminal equipment and computer readable storage medium
Technical Field
The present application belongs to the field of image processing technologies, and in particular, to an image processing method, a terminal device, and a computer-readable storage medium.
Background
In many fields, such as industrial production, video surveillance, intelligent transportation and remote sensing, acquired images must undergo processing such as image recognition and image information extraction. Images shot under low-light conditions, however, often suffer from low brightness, low contrast and heavy noise because the lighting is dim, which degrades subsequent image processing and makes it impossible to recognize useful image information.
In order to improve the reliability and robustness of image recognition, images shot under low-light conditions need to be brightened. Current brightening methods are generally based either on the Retinex theory or on generative adversarial networks. The Retinex-based approach uses two networks: one decomposes the low-light image into illumination and reflectance, and the other acts as an enhancer that improves the illumination map of the low-light image; however, decomposing illumination and reflectance is very difficult, so the processing effect is poor. The approach based on generative adversarial networks treats low-light enhancement as a domain-transfer learning task by finding a mapping between the low-light and normal-light domains: a generator estimates a normal-light image from the low-light image, and a discriminator evaluates its visual quality.
In summary, the current methods for brightening images shot under low-light conditions produce poor image processing results.
Disclosure of Invention
In view of this, embodiments of the present application provide an image processing method, a terminal device and a computer readable storage medium to solve the problem that current methods for brightening images captured under low-light conditions have a poor image processing effect.
In a first aspect, an embodiment of the present application provides an image processing method, including:
inputting the low-illumination image into a trained continuous updating network for extracting local features and global features to obtain feature images with different scales;
and inputting the feature images with different scales into the trained color enhancement network for processing to obtain a target image.
Optionally, the image processing method further includes:
constructing a continuous updating network and a color enhancement network;
acquiring a sample data set;
and training the continuous updating network and the color enhancement network based on the sample data set to obtain the trained continuous updating network and the trained color enhancement network.
Optionally, the continuous update network includes N U-type network structures and a feature fusion structure, where N is a positive integer greater than or equal to 2.
Optionally, the U-shaped network structure includes a down-sampling structure and an up-sampling structure, the down-sampling structure includes four feature blocks with different scales, the up-sampling structure includes four feature blocks with different scales, and different feature blocks are connected by using a skip connection.
Optionally, the feature fusion structure includes a convolution layer, an average pooling layer, and a fully connected layer.
Optionally, the color enhancement network includes a residual acquisition module, a convolution module, and a weight determination module.
Optionally, the feature images of different scales include a first scale feature image, a second scale feature image, and a third scale feature image.
In a second aspect, an embodiment of the present application provides a terminal device, including:
a feature extraction module, configured to input a low-illumination image into a trained continuous update network to extract local features and global features to obtain feature images of different scales; and
a color enhancement module, configured to input the feature images of different scales into a trained color enhancement network for processing to obtain a target image.
In a third aspect, an embodiment of the present application provides a terminal device, where the terminal device includes a processor, a memory, and a computer program stored in the memory and executable on the processor, and the processor, when executing the computer program, implements the method according to the first aspect or any optional manner of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program, which when executed by a processor implements the method according to the first aspect or any alternative manner of the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product, which, when run on a terminal device, causes the terminal device to perform the method of the first aspect or any alternative manner of the first aspect.
The image processing method, the terminal device, the computer readable storage medium and the computer program product provided by the embodiment of the application have the following beneficial effects:
the method has the advantages that the characteristic images with different scales are extracted through the continuous updating network, the global information and the local information are extracted, the phenomenon that details of the recovered normal illumination image are lost can be avoided, the color and the texture of the recovered image are enhanced through the color brightening network, the image quality is guaranteed, and the problem that the image processing effect is poor in the existing method for brightening the image shot under the low-light condition is solved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic flow chart of an image processing method provided in an embodiment of the present application;
FIG. 2 is a schematic structural diagram of a U-type network structure of a continuous update network in an embodiment of the present application;
FIG. 3 is a flow chart of a feature fusion process provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of a process of processing feature images of different scales by a color enhancement network provided in an embodiment of the present application;
fig. 5 is a schematic structural diagram of a terminal device according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a terminal device according to another embodiment of the present application;
fig. 7 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items. Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
It should also be appreciated that reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
A low-light environment refers to an environment in which natural lighting is poor and human vision is affected; an environment with an illuminance of 50 lux or less is generally regarded as a low-light environment. An image captured in a low-light environment generally suffers from problems such as low brightness, low contrast and heavy noise. In the image processing method provided by the embodiments of the present application, feature images of different scales are extracted through the continuous update network so that both global and local information is captured, which prevents loss of detail in the restored normal-illumination image; the color enhancement network then enhances the color and texture of the restored image, guaranteeing image quality and solving the problem that existing methods for brightening images shot under low-light conditions produce poor results.
The following will describe the image processing method provided in the embodiments of the present application in detail:
referring to fig. 1, fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present disclosure. The execution main body of the image processing method provided by the embodiment of the application is the terminal device, and the terminal device can be a mobile terminal such as a smart phone, a tablet computer or wearable device, and can also be a computer, a cloud server, an auxiliary computer and the like in various application scenes.
The image processing method as shown in fig. 1 may include S11 to S12, which are detailed as follows:
S11: inputting the low-illumination image into a trained continuous update network to extract local features and global features, obtaining feature images of different scales.
In the embodiment of the application, the local features and the global features in the low-illumination image are extracted through the continuous updating network, and then the local features and the global features are subjected to feature fusion to obtain feature images with different scales.
In this embodiment of the application, the continuous update network may include N U-type network structures, which extract image features of different scales, and a feature fusion structure, which fuses the image features of different scales extracted by the U-type networks to finally obtain the feature images of different scales.
N is a positive integer of 2 or more.
In an embodiment of the present application, the continuous update network may include four U-type network structures. The structure of each U-type network may be as shown in fig. 2, which is a schematic structural diagram of the U-type network structure of the continuous update network (illustrated with two U-type networks).
As shown in fig. 2, the U-type network structure may include a down-sampling structure 21 and an up-sampling structure 22. The down-sampling structure contains four feature blocks of different scales (e.g., A1, A2, A3, A4 in fig. 2), whose scale ratio from high to low may be 8:4:2:1; that is, a convolution with stride 1 is used between two adjacent feature blocks of the same scale, and a convolution with stride 2 is used where the scale is reduced between two adjacent feature blocks. The up-sampling structure also contains four feature blocks of different scales (e.g., B1, B2, B3, B4 in fig. 2), whose scale ratio from low to high may be 1:2:4:8; that is, a convolution with stride 1 is used between two adjacent feature blocks of the same scale, and a deconvolution with stride 2 is used where the scale is increased between two adjacent feature blocks.
Different feature blocks are connected by skip connections. Specifically, the features of each scale obtained by the down-sampling structure are concatenated directly with the features of the same scale obtained by the up-sampling structure, and the features of different scales obtained in the up-sampling stage are fused with larger-scale features by up-sampling, so that global features and local features can be obtained simultaneously in the up-sampling stage through the skip connections.
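For illustration, a minimal PyTorch sketch of one such U-type structure is given below: four scales, stride-2 convolutions for down-sampling, stride-2 transposed convolutions for up-sampling, and skip connections that concatenate same-scale features. The class name, channel widths and the choice of ReLU activations are illustrative assumptions, not taken from the patent.

import torch
import torch.nn as nn

def conv_block(in_ch, out_ch, stride=1):
    # stride 1 keeps the scale of the feature block, stride 2 halves it (down-sampling)
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1),
                         nn.ReLU(inplace=True))

class UTypeNetwork(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        # down-sampling path: feature blocks A1..A4, scale ratio 8:4:2:1 from high to low
        self.a1 = conv_block(3, ch)
        self.a2 = conv_block(ch, ch * 2, stride=2)
        self.a3 = conv_block(ch * 2, ch * 4, stride=2)
        self.a4 = conv_block(ch * 4, ch * 8, stride=2)
        # up-sampling path: feature blocks B1..B4, deconvolution with stride 2 doubles the scale
        self.up3 = nn.ConvTranspose2d(ch * 8, ch * 4, 2, stride=2)
        self.b3 = conv_block(ch * 8, ch * 4)   # input channels doubled by the skip concatenation
        self.up2 = nn.ConvTranspose2d(ch * 4, ch * 2, 2, stride=2)
        self.b2 = conv_block(ch * 4, ch * 2)
        self.up1 = nn.ConvTranspose2d(ch * 2, ch, 2, stride=2)
        self.b1 = conv_block(ch * 2, ch)

    def forward(self, x):
        a1 = self.a1(x)
        a2 = self.a2(a1)
        a3 = self.a3(a2)
        a4 = self.a4(a3)
        # skip connections: down-sampled features are concatenated with same-scale up-sampled ones
        b3 = self.b3(torch.cat([self.up3(a4), a3], dim=1))
        b2 = self.b2(torch.cat([self.up2(b3), a2], dim=1))
        b1 = self.b1(torch.cat([self.up1(b2), a1], dim=1))
        return b1, b2, b3   # features of several scales, available for the feature fusion structure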
When two U-type structures are combined, the connection may be as follows: the four features of different scales obtained from the U-type networks are first enlarged to a common scale (by one or two levels of up-sampling), the four same-scale features are then fused through a feature fusion (FF) structure to obtain a single feature, and the resulting single feature is fed to the U-type network for further processing.
In an embodiment of the present application, the feature fusion structure may include a convolution layer, an average pooling layer, and a fully connected layer.
Specifically, referring to fig. 3, fig. 3 is a schematic flow chart of the feature fusion process provided in the embodiment of the present application. In a specific application, the features extracted by the U-type structure are input into the feature fusion module and first processed by its convolution layer to obtain an initial convolution feature. The initial convolution feature is compressed by the average pooling layer into a single value per channel, and the correlation between different channels is then determined by two fully connected layers to obtain a weight value. The obtained weight value is multiplied by the initial convolution feature to obtain a first weighted feature, and the first weighted feature is added back to the initial convolution feature. The above operations are repeated to obtain a second weighted feature, and finally the second weighted feature is compressed to obtain the feature image corresponding to the fusion of the four features. Applying this operation to the features of each scale finally yields the feature images of different scales.
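The following sketch illustrates one possible reading of this feature fusion structure, in the spirit of a squeeze-and-excitation block applied twice; the channel reduction ratio, layer sizes and class names are assumptions, not taken from the patent.

import torch
import torch.nn as nn

class ChannelWeight(nn.Module):
    """Average pooling plus two fully connected layers -> per-channel weights."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)           # compress each channel to a single value
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w + x                               # weighted feature added back to the input

class FeatureFusion(nn.Module):
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.initial = nn.Conv2d(in_channels, in_channels, 3, padding=1)
        self.reweight1 = ChannelWeight(in_channels)    # first weighted feature
        self.reweight2 = ChannelWeight(in_channels)    # operation repeated -> second weighted feature
        self.compress = nn.Conv2d(in_channels, out_channels, 1)

    def forward(self, features):
        # 'features' is the concatenation of the four same-scale features to be fused
        x = self.initial(features)
        x = self.reweight1(x)
        x = self.reweight2(x)
        return self.compress(x)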
In an embodiment of the present application, the feature images of different scales include a first scale feature image, a second scale feature image, and a third scale feature image.
S12: inputting the feature images of different scales into the trained color enhancement network for processing to obtain a target image.
In the embodiment of the present application, the color enhancement network may determine a weight for each feature image based on the residuals between the feature images of different scales, multiply each feature image by its corresponding weight, and fuse the weighted images into a single image, namely the target image.
In an embodiment of the present application, please refer to fig. 4, and fig. 4 is a schematic diagram illustrating a process of processing feature images with different scales by using a color enhancement network according to an embodiment of the present application. As shown in fig. 4, the color enhancement network 400 may include a residual acquisition module 401, a convolution module 402, and a weight determination module 403.
The first scale feature image Y1, the second scale feature image Y2 and the third scale feature image Y3 are input into the residual acquisition module 401 to obtain a first residual Y2-Y1, a second residual Y3-Y2 and a third residual Y3-Y1. The three residuals are then input into the convolution module for processing to obtain a first feature D1 and a second feature D2, and the first feature D1 and the second feature D2 are convolved to obtain a third feature D3 and a fourth feature D4. Next, the second feature D2 and the fourth feature D4 are fused and convolved through the weight determination module to obtain a fifth feature D5, and the fifth feature D5 is decomposed by the weight determination module 403 into a first weight G1, a second weight G2 and a third weight G3. The first weight G1 is multiplied by the first scale feature image Y1, the second weight G2 by the second scale feature image Y2, and the third weight G3 by the third scale feature image Y3, and the weighted images are then fused to obtain the target image.
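A loose sketch of this color enhancement stage is given below. It assumes that Y1, Y2 and Y3 have been brought to a common resolution before the residuals are taken, and the channel widths and the softmax-based weight decomposition are illustrative assumptions rather than details specified by the patent.

import torch
import torch.nn as nn

class ColorEnhancement(nn.Module):
    def __init__(self, channels=3):
        super().__init__()
        self.conv_d12 = nn.Conv2d(channels * 3, channels * 2, 3, padding=1)  # residuals -> D1, D2
        self.conv_d34 = nn.Conv2d(channels * 2, channels * 2, 3, padding=1)  # D1, D2 -> D3, D4
        self.conv_d5 = nn.Conv2d(channels * 2, channels * 3, 3, padding=1)   # fused D2, D4 -> D5
        self.fuse = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, y1, y2, y3):
        # residual acquisition: first, second and third residuals
        r = torch.cat([y2 - y1, y3 - y2, y3 - y1], dim=1)
        d1, d2 = self.conv_d12(r).chunk(2, dim=1)
        d3, d4 = self.conv_d34(torch.cat([d1, d2], dim=1)).chunk(2, dim=1)
        d5 = self.conv_d5(torch.cat([d2, d4], dim=1))
        # weight decomposition into G1, G2, G3 (softmax keeps the three weight maps normalized)
        w = torch.softmax(d5.view(d5.shape[0], 3, -1, *d5.shape[2:]), dim=1)
        g1, g2, g3 = w.unbind(dim=1)
        # weighted feature images are fused into the target image
        return self.fuse(g1 * y1 + g2 * y2 + g3 * y3)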
In another embodiment of the present application, the image processing method may further include:
constructing a continuous updating network and a color enhancement network;
acquiring a sample data set;
and training the continuous updating network and the color enhancement network based on the sample data set to obtain the trained continuous updating network and the trained color enhancement network.
In the embodiment of the present application, a continuous update network with a network structure as shown in fig. 2 and a color enhancement network as shown in fig. 4 may be constructed, and then the continuous update network and the color enhancement network after training may be obtained by training the continuous update network and the color enhancement network with sample data.
In a specific application, the sample data may include a historical low-light image and a corresponding normal-light image.
In practical applications, the sample data set may be obtained by selecting no fewer than 1000 groups of sample data. The sample data set is divided into a training set, a validation set and a test set; to meet the training requirement, 50% of the sample data may be used as the training set and the rest as the validation set and the test set.
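As a concrete illustration of this split, the sketch below is a minimal example, assuming the sample pairs are already loaded as (low-light, normal-light) tuples and that the non-training half is shared equally between validation and test, which the text does not specify.

import random

def split_dataset(sample_pairs, seed=0):
    pairs = list(sample_pairs)
    random.Random(seed).shuffle(pairs)
    n = len(pairs)
    n_train = n // 2                      # 50% as the training set
    n_val = (n - n_train) // 2            # remainder shared by validation and test
    return pairs[:n_train], pairs[n_train:n_train + n_val], pairs[n_train + n_val:]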
After the sample data is obtained, the continuous update network is trained on the training set, its parameters are quickly tuned on the validation set, and it is then evaluated on the test set to obtain the trained continuous update network.
When training the continuous update network, the historical low-illumination images in the sample data are input into the pre-constructed continuous update network for processing to obtain feature images of different scales, and the network parameters of the continuous update network are then adjusted based on a loss function between the normal-illumination images and the obtained feature images of different scales. Specifically, the structural similarity loss and total variation (TV) loss between the third-scale feature image and the corresponding normal-illumination image may be used as the loss function. When the loss function converges, the continuous update network with the adjusted parameters is validated and tested using the sample data in the validation set and the test set; once validation and testing are completed, the continuous update network has been trained and can be used in S11.
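A minimal training-loop sketch for this "adjust the parameters until the loss converges" procedure follows. The optimizer (Adam), learning rate, convergence threshold, and the assumption that the network returns the three scale feature images with the third one compared against the normal-illumination image are illustrative choices, not taken from the patent; loss_fn can be any SSIM-plus-TV loss, such as the one sketched after the next paragraph.

import torch

def train_until_converged(network, train_loader, loss_fn, lr=1e-4, tol=1e-4, max_epochs=200):
    opt = torch.optim.Adam(network.parameters(), lr=lr)
    prev = float("inf")
    for epoch in range(max_epochs):
        total = 0.0
        for low_light, normal_light in train_loader:
            opt.zero_grad()
            # assumed: the network returns the three scale feature images;
            # the third-scale image is compared with the normal-illumination image
            _, _, restored = network(low_light)
            loss = loss_fn(restored, normal_light)
            loss.backward()
            opt.step()
            total += loss.item()
        mean_loss = total / max(len(train_loader), 1)
        if abs(prev - mean_loss) < tol:    # loss has converged
            break
        prev = mean_loss
    return network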
It should be noted that, because images shot under low-light conditions usually suffer from significant structural distortion, the structural similarity (SSIM) loss is used to improve the quality of the images both quantitatively and qualitatively. In addition, a normal-illumination image restored from a low-light image may exhibit unstable illumination and noise, so the total variation loss is used as a smoothness prior to minimize the gradient of the entire image.
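As an illustration, a possible form of this loss is sketched below, assuming the pytorch-msssim package for the SSIM term and an arbitrary TV weighting factor; the patent specifies neither.

import torch
from pytorch_msssim import ssim   # assumed third-party SSIM implementation

def total_variation(img):
    # mean absolute difference between neighbouring pixels, horizontally and vertically
    tv_h = (img[..., 1:, :] - img[..., :-1, :]).abs().mean()
    tv_w = (img[..., :, 1:] - img[..., :, :-1]).abs().mean()
    return tv_h + tv_w

def update_network_loss(restored, normal_light, tv_weight=0.1):
    ssim_loss = 1.0 - ssim(restored, normal_light, data_range=1.0)
    return ssim_loss + tv_weight * total_variation(restored)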
When training the color enhancement network, the feature images of different scales output by the trained continuous update network may be input into the pre-constructed color enhancement network for processing to obtain a target image, and the network parameters of the color enhancement network are then adjusted based on a loss function between the normal-illumination image and the obtained target image. Specifically, the perceptual loss and total variation (TV) loss between the target image and the corresponding normal-illumination image may be used as the loss function for adjusting the network parameters. When the loss function converges, the color enhancement network with the adjusted parameters is validated and tested using the sample data in the validation set and the test set; once validation and testing are completed, the color enhancement network has been trained and can be used in S12.
It should be noted that the perceptual loss includes a feature reconstruction loss and a style reconstruction loss. The feature reconstruction loss may be expressed as the Euclidean distance between features and measures their similarity. The style reconstruction loss mainly serves to obtain better image color and texture: it is the squared Frobenius norm of the difference between the Gram matrices of the output image (the output target image) and the normal-illumination image, and an image that minimizes the style reconstruction loss retains the style characteristics of the normal-illumination image but not its spatial structure.
The perceptual loss is a loss function that combines the feature reconstruction loss and the style reconstruction loss. Likewise, to minimize the gradient of the entire image, the loss function of the color enhancement network is also constructed with the total variation loss.
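A hedged sketch of such a combined loss follows. The truncated VGG16 feature extractor, the layer cut-off and the loss weights are assumptions made for illustration; the patent does not name a feature backbone, and the inputs are assumed to be normalized as the pretrained VGG expects.

import torch
import torchvision

# truncated VGG16 feature extractor (up to roughly relu3_3), frozen for loss computation
_vgg = torchvision.models.vgg16(weights=torchvision.models.VGG16_Weights.DEFAULT).features[:16].eval()
for p in _vgg.parameters():
    p.requires_grad_(False)

def gram_matrix(feat):
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def color_network_loss(target_image, normal_light, style_weight=1e3, tv_weight=0.1):
    f_out, f_ref = _vgg(target_image), _vgg(normal_light)
    # feature reconstruction loss: Euclidean distance between deep features
    feature_loss = torch.nn.functional.mse_loss(f_out, f_ref)
    # style reconstruction loss: squared Frobenius norm of the Gram matrix difference
    style_loss = ((gram_matrix(f_out) - gram_matrix(f_ref)) ** 2).sum()
    # total variation loss: smoothness prior on the output image
    tv = (target_image[..., 1:, :] - target_image[..., :-1, :]).abs().mean() + \
         (target_image[..., :, 1:] - target_image[..., :, :-1]).abs().mean()
    return feature_loss + style_weight * style_loss + tv_weight * tv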
It can be seen from the above that, in the image processing method provided by the embodiments of the present application, feature images of different scales are extracted through the continuous update network so that both global and local information is captured, which ensures that the restored normal-illumination image does not lose detail; the color enhancement network then enhances the color and texture of the restored image, guaranteeing image quality and solving the problem that existing methods for brightening images shot under low-light conditions produce poor results.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Based on the image processing method provided by the above embodiment, the embodiment of the invention further provides an embodiment of a terminal device for implementing the above method embodiment.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a terminal device according to an embodiment of the present application. In this embodiment, each unit included in the terminal device is configured to execute the steps in the embodiments corresponding to fig. 1 to fig. 4; for details, please refer to those embodiments. For convenience of explanation, only the portions related to the present embodiment are shown. As shown in fig. 5, the terminal device 50 includes a feature extraction module 51 and a color enhancement module 52, wherein:
the feature extraction module 51 is configured to input the low-illumination image into a trained continuous update network to perform local feature extraction and global feature extraction, so as to obtain feature images of different scales.
The color enhancement module 52 is configured to input the feature images with different scales into a color enhancement network that has been trained, and process the feature images to obtain a target image.
Optionally, the terminal device further includes:
the building module is used for building a continuous updating network and a color enhancement network;
the acquisition module is used for acquiring a sample data set;
and the training module is used for training the continuous updating network and the color enhancement network based on the sample data set to obtain the trained continuous updating network and the trained color enhancement network.
Optionally, the continuous update network includes N U-type network structures and a feature fusion structure, where N is a positive integer greater than or equal to 2.
Optionally, the U-shaped network structure includes a down-sampling structure and an up-sampling structure, the down-sampling structure includes four feature blocks with different scales, the up-sampling structure includes four feature blocks with different scales, and different feature blocks are connected by using a skip connection.
Optionally, the feature fusion structure comprises a convolution layer, an average pooling layer, and a fully connected layer.
Optionally, the color enhancement network includes a residual acquisition module, a convolution module, and a weight determination module.
It should be noted that, because the contents of information interaction, execution process, and the like between the modules are based on the same concept as that of the embodiment of the method of the present application, specific functions and technical effects thereof may be referred to specifically in the embodiment of the method, and are not described herein again.
On this basis, the terminal device provided by the embodiment of the present application likewise extracts feature images of different scales through the continuous update network so that global and local information is captured, ensuring that the restored normal-illumination image does not lose detail, and then enhances the color and texture of the restored image through the color enhancement network, guaranteeing image quality and solving the problem that existing methods for brightening images shot under low-light conditions produce poor results.
Fig. 6 is a schematic structural diagram of a terminal device according to another embodiment of the present application. As shown in fig. 6, the terminal device 6 provided in this embodiment includes: a processor 60, a memory 61 and a computer program 62, such as an image processing program, stored in said memory 61 and executable on said processor 60. The processor 60, when executing the computer program 62, implements the steps in the various image processing method embodiments described above, such as S11-S12 shown in fig. 1. Alternatively, the processor 60, when executing the computer program 62, implements the functions of the modules/units in the terminal device embodiments, such as the functions of the units 51 to 52 shown in fig. 5.
Illustratively, the computer program 62 may be partitioned into one or more modules/units that are stored in the memory 61 and executed by the processor 60 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 62 in the terminal device 6. For example, the computer program 62 may be divided into a feature extraction module and a color enhancement module; for the specific functions of each unit, refer to the description of the embodiment corresponding to fig. 5, which is not repeated here.
The terminal device may include, but is not limited to, a processor 60, a memory 61. Those skilled in the art will appreciate that fig. 6 is merely an example of a terminal device 6 and does not constitute a limitation of terminal device 6 and may include more or less components than those shown, or some components in combination, or different components, for example, the terminal device may also include input output devices, network access devices, buses, etc.
The Processor 60 may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 61 may be an internal storage unit of the terminal device 6, such as a hard disk or a memory of the terminal device 6. The memory 61 may also be an external storage device of the terminal device 6, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the terminal device 6. Further, the memory 61 may also include both an internal storage unit and an external storage device of the terminal device 6. The memory 61 is used for storing the computer program and other programs and data required by the terminal device. The memory 61 may also be used to temporarily store data that has been output or is to be output.
The embodiment of the application also provides a computer readable storage medium. Referring to fig. 7, fig. 7 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present disclosure, as shown in fig. 7, a computer program 71 is stored in the computer-readable storage medium 70, and when the computer program 71 is executed by a processor, the image processing method can be implemented.
An embodiment of the present application further provides a computer program product which, when run on a terminal device, causes the terminal device to implement the image processing method described above.
It is obvious to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional units and modules is merely used as an example, and in practical applications, the foregoing function distribution may be performed by different functional units and modules as needed, that is, the internal structure of the terminal device is divided into different functional units or modules to perform all or part of the above-described functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the description of each embodiment has its own emphasis, and parts that are not described or illustrated in a certain embodiment may refer to the description of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. An image processing method, comprising:
inputting the low-illumination image into a trained continuous updating network for extracting local features and global features to obtain feature images with different scales;
and inputting the feature images with different scales into the trained color enhancement network for processing to obtain a target image.
2. The image processing method according to claim 1, further comprising:
constructing a continuous updating network and a color enhancement network;
acquiring a sample data set;
and training the continuous updating network and the color enhancement network based on the sample data set to obtain the trained continuous updating network and the trained color enhancement network.
3. The image processing method according to claim 1, wherein the continuous update network includes N U-type network structures and a feature fusion structure, where N is a positive integer equal to or greater than 2.
4. The image processing method according to claim 3, wherein the U-type network structure comprises a down-sampling structure and an up-sampling structure, the down-sampling structure comprises four feature blocks with different scales, the up-sampling structure comprises four feature blocks with different scales, and the different feature blocks are connected by using skip connection.
5. The image processing method of claim 3, wherein the feature fusion structure comprises a convolution layer, an average pooling layer, and a fully connected layer.
6. The image processing method of claim 1, wherein the color enhancement network comprises a residual acquisition module, a convolution module, and a weight determination module.
7. The image processing method according to any one of claims 1 to 6, wherein the feature images of different scales include a first-scale feature image, a second-scale feature image, and a third-scale feature image.
8. A terminal device, comprising:
the feature extraction module is used for inputting the low-illumination image into a trained continuous updating network to extract local features and global features so as to obtain feature images with different scales;
and the color enhancement module is used for inputting the feature images with different scales into the trained color enhancement network for processing to obtain a target image.
9. A terminal device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the following steps:
inputting the low-illumination image into a trained continuous updating network for extracting local features and global features to obtain feature images with different scales;
and inputting the feature images with different scales into the trained color enhancement network for processing to obtain a target image.
10. A computer-readable storage medium storing a computer program, the computer program when executed by a processor implementing the steps of:
inputting the low-illumination image into a trained continuous updating network for extracting local features and global features to obtain feature images with different scales;
and inputting the feature images with different scales into the trained color enhancement network for processing to obtain a target image.
CN202111415647.7A 2021-11-25 2021-11-25 Image processing method, terminal equipment and computer readable storage medium Pending CN114219725A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111415647.7A CN114219725A (en) 2021-11-25 2021-11-25 Image processing method, terminal equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111415647.7A CN114219725A (en) 2021-11-25 2021-11-25 Image processing method, terminal equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN114219725A 2022-03-22

Family

ID=80698386

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111415647.7A Pending CN114219725A (en) 2021-11-25 2021-11-25 Image processing method, terminal equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN114219725A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117440172A (en) * 2023-12-20 2024-01-23 江苏金融租赁股份有限公司 Picture compression method and device
CN117440172B (en) * 2023-12-20 2024-03-19 江苏金融租赁股份有限公司 Picture compression method and device

Similar Documents

Publication Publication Date Title
CN111179177B (en) Image reconstruction model training method, image reconstruction method, device and medium
CN111798400B (en) Non-reference low-illumination image enhancement method and system based on generation countermeasure network
CN110197229B (en) Training method and device of image processing model and storage medium
WO2021164234A1 (en) Image processing method and image processing device
CN110163801B (en) Image super-resolution and coloring method, system and electronic equipment
CN109977832B (en) Image processing method, device and storage medium
CN112508812A (en) Image color cast correction method, model training method, device and equipment
CN110674759A (en) Monocular face in-vivo detection method, device and equipment based on depth map
CN109255774B (en) Image fusion method, device and equipment
CN110599455A (en) Display screen defect detection network model, method and device, electronic equipment and storage medium
CN112365861B (en) Display image adjusting method, electronic device and computer readable storage medium
CN114219725A (en) Image processing method, terminal equipment and computer readable storage medium
CN112070703B (en) Method and system for enhancing underwater visual image of bionic robot fish
CN107729885B (en) Face enhancement method based on multiple residual error learning
CN113901928A (en) Target detection method based on dynamic super-resolution, and power transmission line component detection method and system
CN117152182A (en) Ultralow-illumination network camera image processing method and device and electronic equipment
CN113538304A (en) Training method and device of image enhancement model, and image enhancement method and device
CN116895008A (en) Crack identification model determination and crack identification method, device, equipment and medium
WO2023110880A1 (en) Image processing methods and systems for low-light image enhancement using machine learning models
CN115619666A (en) Image processing method, image processing apparatus, storage medium, and electronic device
CN112686314B (en) Target detection method and device based on long-distance shooting scene and storage medium
CN114663570A (en) Map generation method and device, electronic device and readable storage medium
CN116152586A (en) Model training method and device, electronic equipment and storage medium
CN113706438A (en) Image processing method, related device, equipment, system and storage medium
CN111612690A (en) Image splicing method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination