CN116188295A - Hair enhancement method, neural network, electronic device, and storage medium - Google Patents

Hair enhancement method, neural network, electronic device, and storage medium

Info

Publication number
CN116188295A
CN116188295A (Application CN202211659099.7A)
Authority
CN
China
Prior art keywords
residual
image
module
modules
features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211659099.7A
Other languages
Chinese (zh)
Inventor
张航
许合欢
王进
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Rainbow Software Co ltd
Original Assignee
Rainbow Software Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Rainbow Software Co ltd filed Critical Rainbow Software Co ltd
Priority to CN202211659099.7A priority Critical patent/CN116188295A/en
Publication of CN116188295A publication Critical patent/CN116188295A/en
Priority to PCT/CN2023/139420 priority patent/WO2024131707A1/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Processing (AREA)

Abstract

The application relates to a hair enhancement method, a neural network, an electronic device and a storage medium. In the hair enhancement method, residual calculation and feature fusion are performed on the image features of an original image sequentially through a plurality of residual modules to obtain residual module fusion features, wherein the input features of the subsequent residual module of two adjacent residual modules are the output features of the preceding residual module; feature reconstruction is then performed based on the residual module fusion features to obtain an enhanced image corresponding to the original image. This solves the problem in the related art of poor processing of the texture details of pet hair or of hair in portraits, and enhances the texture details in the image.

Description

Hair enhancement method, neural network, electronic device, and storage medium
Technical Field
The present application relates to the field of image processing technology, and in particular, to a hair enhancement method, a neural network, an electronic device, and a storage medium.
Background
With the popularization of mobile phones, photographing has become a common way for people to record their lives. To better capture a moment, people's requirements for the quality of phone photos keep rising: pictures should be clean, colors rich and textures clear.
Owing to the limitations of shooting conditions, regions such as pet hair or hair in portraits inevitably suffer from problems such as blurring, noise and defocus during shooting, so the quality of the captured pictures is low. A common image quality improvement scheme at present is super-resolution reconstruction based on deep learning, which processes a low-resolution image through a convolutional neural network to obtain a high-resolution image, thereby restoring the high-frequency detail information missing from the image. However, mainstream super-resolution reconstruction methods are aimed at natural images; although they improve the resolution of photographs, they often perform poorly on the texture details of pet hair or of hair in portraits.
At present, no effective solution has been proposed for the problem in the related art of poor processing of the texture details of pet hair or of hair in portraits.
Disclosure of Invention
The application provides a hair enhancement method, a neural network, an electronic device and a storage medium, which are used to solve the problem in the related art of poor processing of the texture details of pet hair or of hair in portraits.
In a first aspect, the present application provides a method of hair enhancement, the method comprising:
acquiring image features of an original image;
sequentially performing residual calculation and feature fusion on the image features of the original image through a plurality of residual modules to obtain residual module fusion features, wherein the input features of the subsequent residual module of two adjacent residual modules are the output features of the preceding residual module;
and performing feature reconstruction based on the residual module fusion features to obtain an enhanced image corresponding to the original image.
In some embodiments, sequentially performing residual calculation and feature fusion on the image features of the original image through a plurality of residual modules to obtain the residual module fusion features includes:
performing convolution fusion on the image features based on a first residual module to obtain first features;
performing convolution fusion on the first features based on a second residual module to obtain second features;
and fusing the first features and the second features to obtain the residual module fusion features.
In some of these embodiments, the plurality of residual modules further comprises a third residual module and a fourth residual module.
In some embodiments, each residual module includes a plurality of residual layers, and the output features of the residual module are acquired as follows:
for two adjacent residual layers, performing convolution calculation on the final output features of the preceding residual layer through the subsequent residual layer to obtain convolution output features, and adding the convolution output features to the final output features of the preceding residual layer to obtain the final output features of the subsequent residual layer;
when there are a plurality of residual layers, splicing the final output features of each residual layer with the final output features of the initial layer of the residual module to obtain residual layer splicing features;
and determining the output features of the residual module according to the input features of the residual module and the residual layer splicing features.
In some of these embodiments, the acquiring image features of the original image includes:
acquiring initial features of the original image;
and downsampling the initial features to obtain the image features of the original image.
In some embodiments, downsampling the initial features to obtain the image features of the original image includes:
progressively downsampling the initial features based on a plurality of downsampling modules to obtain the image features of the original image, wherein the input features of the subsequent downsampling module of two adjacent downsampling modules are the output features of the preceding downsampling module.
In some of these embodiments, the downsampling of the initial feature is accomplished by a wavelet transform.
In some embodiments, performing feature reconstruction based on the residual module fusion features to obtain the enhanced image corresponding to the original image includes:
performing multiple upsampling and feature fusion calculations on the residual module fusion features based on a plurality of upsampling modules to obtain the enhanced image; the number of upsampling modules corresponds to the number of downsampling modules, and the input features of the subsequent upsampling module of two adjacent upsampling modules are determined jointly from the output features of the preceding upsampling module and the output features of a target downsampling module, where the target downsampling module corresponds to the subsequent upsampling module.
In some of these embodiments, the wavelet transform comprises:
sampling the initial features of the original image at intervals along the rows and the columns respectively according to a preset step length to obtain a sampling result;
and calculating each piece of frequency band information of the initial features according to the sampling result to serve as the image features of the original image.
In some of these embodiments, the hair enhancement method is implemented based on a neural network, and the sample image pairs used to train the neural network are acquired as follows:
Collecting a first sample image, wherein the image quality of the first sample image meets a preset image quality threshold;
performing image degradation on the first sample image to obtain a second sample image, wherein the image quality of the second sample image is lower than that of the first sample image;
the first sample image and the second sample image are taken as a sample image pair.
In a second aspect, an embodiment of the present application provides a neural network, including an acquisition module, a plurality of residual modules, and a reconstruction module;
the acquisition module is used for acquiring image characteristics of the original image;
the plurality of residual modules are used for sequentially performing residual calculation and feature fusion on the image features of the original image to obtain residual module fusion features, wherein the input features of the subsequent residual module of two adjacent residual modules are the output features of the preceding residual module;
and the reconstruction module is used for performing feature reconstruction based on the residual module fusion features to obtain an enhanced image corresponding to the original image.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor and a memory, where the memory is configured to store executable instructions of the processor; the processor is configured to perform, via execution of the executable instructions, a hair enhancement method implementing any of the first aspects described above.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium storing one or more programs executable by one or more processors to implement the hair enhancement method as described in any of the first aspects above.
Compared with the related art, in the hair enhancement method provided in these embodiments, residual calculation and feature fusion are performed on the image features of an original image sequentially through a plurality of residual modules to obtain residual module fusion features, wherein the input features of the subsequent residual module of two adjacent residual modules are the output features of the preceding residual module; feature reconstruction is then performed based on the residual module fusion features to obtain an enhanced image corresponding to the original image. This solves the problem in the related art of poor processing of the texture details of pet hair or of hair in portraits, and enhances the texture details in the image.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the application. Other advantages of the present application may be realized and attained by the structure particularly pointed out in the written description and drawings.
Drawings
The accompanying drawings are included to provide an understanding of the technical solutions of the present application; they are incorporated in and constitute a part of this specification, illustrate the technical solutions of the present application together with the embodiments of the present application, and do not constitute a limitation of those technical solutions.
FIG. 1 is a flow chart of a hair enhancement method according to an embodiment of the present application;
FIG. 2 is a flow chart of a method of generating residual module fusion features according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a plurality of residual modules according to an embodiment of the present application;
FIG. 4 is a flow chart of a method of computing residual module output characteristics according to an embodiment of the present application;
FIG. 5 is a schematic diagram of the internal structure of a residual module according to an embodiment of the present application;
FIG. 6 is a flow chart of wavelet transform according to an embodiment of the present application;
FIG. 7 is an effect schematic of wavelet transform according to an embodiment of the present application;
FIG. 8 is a flow chart of a method of acquiring a sample image pair according to an embodiment of the present application;
FIG. 9 is a schematic diagram of the structure of a neural network according to a preferred embodiment of the present application;
FIG. 10 is a schematic diagram of a comparison of an original image and an enhanced image according to a preferred embodiment of the present application;
FIG. 11 is a block diagram of a neural network according to an embodiment of the present application.
Detailed Description
For a clearer understanding of the objects, technical solutions and advantages of the present application, the present application is described and illustrated below with reference to the accompanying drawings and examples.
Unless defined otherwise, technical or scientific terms used herein shall have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terms "a," "an," "the," "these," and the like in this application are not intended to be limiting in number, but rather are singular or plural. The terms "comprising," "including," "having," and any variations thereof, as used in the present application, are intended to cover a non-exclusive inclusion; for example, a process, method, and system, article, or apparatus that comprises a list of steps or modules (units) is not limited to the list of steps or modules (units), but may include other steps or modules (units) not listed or inherent to such process, method, article, or apparatus. The terms "connected," "coupled," and the like in this application are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. Reference to "a plurality" in this application means two or more. "and/or" describes an association relationship of an association object, meaning that there may be three relationships, e.g., "a and/or B" may mean: a exists alone, A and B exist together, and B exists alone. Typically, the character "/" indicates that the associated object is an "or" relationship. The terms "first," "second," "third," and the like, as referred to in this application, merely distinguish similar objects and do not represent a particular ordering of objects.
The present application describes a number of embodiments, but the description is illustrative and not limiting and it will be apparent to those of ordinary skill in the art that many more embodiments and implementations are possible within the scope of the embodiments described herein. Although many possible combinations of features are shown in the drawings and discussed in the detailed description, many other combinations of the disclosed features are possible. Any feature or element of any embodiment may be used in combination with or in place of any other feature or element of any other embodiment unless specifically limited.
The present application includes and contemplates combinations of features and elements known to those of ordinary skill in the art. The embodiments, features and elements of the present disclosure may also be combined with any conventional features or elements to form a unique inventive arrangement as defined in the claims. Any feature or element of any embodiment may also be combined with features or elements from other inventive arrangements to form another unique inventive arrangement as defined in the claims. Thus, it should be understood that any of the features shown and/or discussed in this application may be implemented alone or in any suitable combination. Accordingly, the embodiments are not to be restricted except in light of the attached claims and their equivalents. Further, various modifications and changes may be made within the scope of the appended claims.
Furthermore, in describing representative embodiments, the specification may have presented the method and/or process as a particular sequence of steps. However, to the extent that the method or process does not rely on the particular order of steps set forth herein, the method or process should not be limited to the particular sequence of steps described. Other sequences of steps are possible as will be appreciated by those of ordinary skill in the art. Accordingly, the particular order of the steps set forth in the specification should not be construed as limitations on the claims. Furthermore, the claims directed to the method and/or process should not be limited to the performance of their steps in the order written, and one skilled in the art can readily appreciate that the sequences may be varied and still remain within the spirit and scope of the embodiments of the present application.
An embodiment of the present application provides a hair enhancement method, as shown in fig. 1, including the steps of:
step S101, image features of an original image are acquired.
The image features of the original image in this embodiment are related to hair. It should be noted that hair in this embodiment includes pet hair and/or human hair, and the original image may be any kind of image; if the original image contains hair, the detailed texture of the hair can be enhanced by the method of this embodiment to obtain a clearer image.
The process of acquiring the image features can be realized by a trained neural network through convolution calculation.
Step S102, residual calculation and feature fusion are performed on the image features of the original image sequentially through a plurality of residual modules to obtain residual module fusion features, wherein the input features of the subsequent residual module of two adjacent residual modules are the output features of the preceding residual module.
The neural network for hair enhancement in this embodiment includes a plurality of residual modules. The residual modules sequentially perform residual calculation on the input features to obtain a plurality of residual features, and then perform feature fusion on the plurality of residual features through convolution calculation to obtain the residual module fusion features. The output features of the first residual module are the input features of the second residual module, the output features of the second residual module are the input features of the third residual module, and so on; the input of the first residual module is the image features of the original image. The number of residual modules is not limited in this embodiment and may be 2, 3, 4, 5 or even more.
Cascading a plurality of residual modules deepens the receptive field of the neural network, allows features on different scales to be better extracted, and helps recover complex hair textures. Preferably, the convolution layer used for feature fusion is 1×1, which increases the correlation between features of different depths.
Step S103, feature reconstruction is performed based on the residual module fusion features to obtain an enhanced image corresponding to the original image.
In particular, feature reconstruction may be achieved by convolution calculations.
Through the above steps S101 to S103, in this embodiment the image features of the original image are processed based on a plurality of residual modules to obtain residual module fusion features that contain details such as direction and texture of the original image at different scales. Compared with the original image, the enhanced image obtained by feature reconstruction based on the residual module fusion features has higher resolution and richer details, which solves the problem in the related art of poor processing of the texture details of pet hair or of hair in portraits and enhances the texture details in the image.
Specifically, the residual modules of the neural network in this embodiment are cascaded, and the residual module fusion features are obtained by fusing the outputs of the residual modules. Fig. 2 is a flowchart of a method for generating residual module fusion features according to an embodiment of the present application; as shown in fig. 2, the method includes the following steps:
Step S201, performing convolution fusion on the image features based on a first residual module to obtain first features;
Step S202, performing convolution fusion on the first features based on a second residual module to obtain second features;
Step S203, fusing the first features and the second features to obtain the residual module fusion features.
This embodiment provides a method of processing image features with a plurality of residual modules: the first features output by the first residual module serve as the input of the second residual module, so the receptive field used for feature extraction is enlarged step by step, and finally all of the outputs are fused to obtain features under different receptive fields, which enhances the restoration of the original image. It should be noted that, in this embodiment, the fusion of the first features and the second features may be implemented by a 1×1 convolution layer to enhance the correlation between features of different receptive fields.
Obviously, the more residual modules there are, the deeper the receptive field becomes, the more features of different scales can be extracted, and the greater the computational load. Therefore, to balance detail features against computational load, in some embodiments the neural network further includes a third residual module and a fourth residual module; as shown in fig. 3, the neural network then includes four residual modules (Multi-Scale Res-blocks, abbreviated as MSRBs). In this case, convolution fusion is performed on the image features based on the first residual module to obtain first features; convolution fusion is performed on the first features based on the second residual module to obtain second features; convolution fusion is performed on the second features based on the third residual module to obtain third features; and convolution fusion is performed on the third features based on the fourth residual module to obtain fourth features. Finally, the first, second, third and fourth features are convolved and fused through the convolution layer of a fusion module to obtain the residual module fusion features. The fusion module in this embodiment is a 1×1 convolution layer, which is used to change the number of output channels and to increase the correlation between features at different receptive field depths.
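As a concrete illustration of this cascade, the following PyTorch sketch chains four residual modules and fuses their concatenated outputs with a 1×1 convolution. It is a minimal sketch rather than the patent's reference implementation: the PlaceholderMSRB class is a simplified stand-in, and a fuller sketch of the module's internal structure follows the description of fig. 5 below.

```python
import torch
import torch.nn as nn

class PlaceholderMSRB(nn.Module):
    """Simplified stand-in for one residual module (see the fig. 5 sketch below)."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        return x + torch.relu(self.conv(x))   # simple residual connection

class CascadedResidualTrunk(nn.Module):
    """Chains four residual modules and fuses all their outputs with a 1x1 convolution."""
    def __init__(self, channels: int, num_blocks: int = 4):
        super().__init__()
        self.blocks = nn.ModuleList([PlaceholderMSRB(channels) for _ in range(num_blocks)])
        # The 1x1 convolution mixes features from different receptive-field depths
        # and restores the channel count after concatenation.
        self.fuse = nn.Conv2d(channels * num_blocks, channels, kernel_size=1)

    def forward(self, x):
        outputs = []
        for block in self.blocks:      # each block consumes the previous block's output
            x = block(x)
            outputs.append(x)
        return self.fuse(torch.cat(outputs, dim=1))
```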
In some embodiments, a residual module may also include a plurality of residual layers, and the output features of the residual module are calculated using those residual layers. Fig. 4 is a flowchart of a method for calculating the output features of a residual module according to an embodiment of the present application; the first features, second features, third features and fourth features in the foregoing embodiments are all obtained as the output features of their residual modules by this method. Specifically, the method includes the following steps:
Step S401, for two adjacent residual layers, performing convolution calculation on the final output features of the preceding residual layer through the subsequent residual layer to obtain convolution output features, and adding the convolution output features to the final output features of the preceding residual layer to obtain the final output features of the subsequent residual layer;
Step S402, when there are a plurality of residual layers, splicing the final output features of each residual layer with the final output features of the initial layer of the residual module to obtain residual layer splicing features;
Step S403, determining the output features of the residual module according to the input features of the residual module and the residual layer splicing features.
Specifically, a convolution layer can be used to perform convolution calculation on the residual layer splicing features to reduce the number of channels, and the result is then added to the input features of the residual module to obtain the output features of the residual module.
The first layer of the residual module serves as the initial layer and may be an ordinary convolution layer, which processes the input features of the residual module and directly yields the final output features of the initial layer. From the second layer onward, each layer of the residual module is a residual layer; the second residual layer adds its convolution output features to the final output features of the initial layer to obtain its own final output features.
Fig. 5 is a schematic diagram of the internal structure of a residual module according to an embodiment of the present application. As shown in fig. 5, the residual module is composed of convolution layers for residual calculation and a convolution layer for splice fusion. By way of example, the residual module in this embodiment includes four residual structures for residual calculation; after the residual calculation, the output features of the residual layers are spliced (concat) and then passed to a 1×1 convolution layer that reduces the number of channels, so as to reduce the computational load of the neural network. Specifically, the input features S of the residual module are first processed by the initial layer of the residual module through convolution calculation to obtain S01, the final output features of the initial layer. S01 passes through a convolution layer to obtain the convolution output features S01'; S01' and S01 are added to form the first residual structure and give S02, the final output features of the first residual layer. S02 passes through a convolution layer to obtain the convolution output features S02'; S02' and S02 are added to form the second residual structure and give S03, the final output features of the second residual layer. S03 passes through a convolution layer to obtain the convolution output features S03'; S03' and S03 are added to form the third residual structure and give S04, the final output features of the third residual layer. S01, S02, S03 and S04 are then spliced in the channel dimension to obtain the residual layer splicing features, on which a 1×1 convolution performs a fusion calculation to increase the correlation between features at different receptive field depths and reduce the number of channels; the result is added to the input features S of the residual module to form the final residual structure and obtain the output features of the residual module. The addition shown in fig. 5 is preferably performed element-wise (elementwise add), so that more information of the original image is retained, ensuring that the texture details of the enhanced image remain consistent with the hair direction information of the original image.
Through the above steps S401 to S403, the receptive field is gradually enlarged through the plurality of residual layers in each residual module, and multi-scale features under different receptive fields are obtained, which is beneficial to restoring hair texture.
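The following PyTorch sketch illustrates a residual module of this kind, assuming an initial 3×3 convolution, three residual layers, channel-wise splicing of S01 to S04, a 1×1 fusion convolution and a final skip connection to the input S; the channel width and the absence of activation functions are simplifying assumptions, not details fixed by the patent.

```python
import torch
import torch.nn as nn

class MultiScaleResBlock(nn.Module):
    """One residual module: initial conv, stacked residual layers, splice, 1x1 fusion, skip."""
    def __init__(self, channels: int, num_residual_layers: int = 3):
        super().__init__()
        self.initial = nn.Conv2d(channels, channels, kernel_size=3, padding=1)   # S -> S01
        self.res_layers = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3, padding=1)              # S0k -> S0k'
            for _ in range(num_residual_layers)
        ])
        # 1x1 convolution fuses the spliced features and reduces the channel count.
        self.fuse = nn.Conv2d(channels * (num_residual_layers + 1), channels, kernel_size=1)

    def forward(self, s):
        feat = self.initial(s)               # S01
        outputs = [feat]
        for conv in self.res_layers:
            feat = feat + conv(feat)         # S0(k+1) = S0k + S0k' (element-wise addition)
            outputs.append(feat)
        spliced = torch.cat(outputs, dim=1)  # splice S01..S04 along the channel dimension
        return s + self.fuse(spliced)        # final residual connection to the module input S
```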
In some embodiments, the image features of the original image are obtained by first acquiring initial features of the original image, such as pixel-level low-level features, and then downsampling the initial features to obtain the image features of the original image used for residual calculation. Obtaining the image features through downsampling in this embodiment yields not only the decomposed high-frequency and low-frequency information but also more image details.
Further, acquiring the image features of the original image from the initial features can be achieved by progressive downsampling. Specifically, the initial features are downsampled step by step based on a plurality of downsampling modules to obtain image features at different scales, wherein the input features of the subsequent downsampling module of two adjacent downsampling modules are the output features of the preceding downsampling module.
Preferably, the multiple progressive decomposition and downsampling of the initial features may be implemented by a wavelet transform (Wavelet Transform, abbreviated WT). Compared with the conventional approach of implementing downsampling through convolution calculation, the wavelet transform saves computation without losing the feature information of the original image: the decomposed high- and low-frequency information can be obtained efficiently, the original content can be recovered through the inverse transform without losing detail, and the computational load is small, which is very beneficial for deployment on mobile devices. Therefore, for texture features such as hair, the wavelet transform preserves details better and reduces loss. Optionally, a discrete wavelet transform (Discrete Wavelet Transform, abbreviated DWT) is adopted in this embodiment.
After the wavelet transform, in order to reduce the computational load, a convolution calculation can be performed on the wavelet-transformed features to reduce the number of channels, finally giving the output features of the downsampling module.
Preferably, the progressive decomposition and feature extraction of the initial features comprises a total of 3 downsampling modules, each including a DWT decomposition and a convolution calculation. Illustratively, the first decomposition-and-convolution stage performs DWT decomposition on the initial features x0, convolves the decomposed features to reduce the number of channels, and then applies a ReLU operation to improve nonlinearity, giving x1. The second decomposition-and-convolution stage performs DWT decomposition on x1 and applies convolution plus ReLU to the decomposed features to obtain the output features x2. The third decomposition-and-convolution stage performs DWT decomposition on x2 and convolves the decomposed features to obtain the output features x3, which can serve as the input features S of the residual modules. The size of the convolution layers is preferably 3×3 to reduce the computational load, and the number of convolution layers is not limited.
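A minimal sketch of these three decomposition-and-convolution stages is given below; the DWT is passed in as a callable (a concrete Haar-style implementation is sketched after Equations 1 to 10), and the channel width is an illustrative assumption.

```python
import torch.nn as nn

class DownsamplingEncoder(nn.Module):
    """Three decomposition-and-convolution stages: DWT, then 3x3 conv (ReLU on the first two)."""
    def __init__(self, dwt_fn, channels: int = 16):
        super().__init__()
        self.dwt_fn = dwt_fn            # e.g. the haar_dwt sketched after Equations 1-10
        stages = []
        for i in range(3):
            # The DWT quadruples the channel count (LL, HL, LH, HH), so each stage's
            # convolution maps 4*channels back down to channels.
            layers = [nn.Conv2d(channels * 4, channels, kernel_size=3, padding=1)]
            if i < 2:                   # the text applies ReLU after the first two stages only
                layers.append(nn.ReLU())
            stages.append(nn.Sequential(*layers))
        self.stages = nn.ModuleList(stages)

    def forward(self, x0):
        feats, x = [], x0
        for stage in self.stages:       # x0 -> x1 -> x2 -> x3
            x = stage(self.dwt_fn(x))
            feats.append(x)
        return feats                    # [x1, x2, x3]; x3 feeds the residual modules
```

For example, DownsamplingEncoder(haar_dwt) would pair this encoder with the wavelet decomposition sketched further below.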
In some embodiments, fig. 6 is a flow chart of wavelet transformation according to an embodiment of the present application, as shown in fig. 6, the method includes:
Step S601, respectively sampling the initial features of the original image at intervals on the rows and the columns according to a preset step length to obtain a sampling result.
The preset step length can be set according to requirements, and when the preset step length is 2, sampling calculation can be performed through the following formulas 1 to 6:
p01 = p[:, :, 0::2, :] / 2    (Equation 1)
p02 = p[:, :, 1::2, :] / 2    (Equation 2)
p1 = p01[:, :, :, 0::2]    (Equation 3)
p2 = p02[:, :, :, 0::2]    (Equation 4)
p3 = p01[:, :, :, 1::2]    (Equation 5)
p4 = p02[:, :, :, 1::2]    (Equation 6)
Where p denotes the pixels of the initial features, p01 denotes the pixels obtained by sampling every other pixel starting from index 0 along the column direction of the image, and p02 denotes the pixels obtained by sampling every other pixel starting from index 1 along the column direction. p1 to p4 represent the four pixels of one 2×2 block: p1 is obtained by sampling p01 every other pixel starting from index 0 along the row direction, p2 by sampling p02 every other pixel starting from index 0 along the row direction, p3 by sampling p01 every other pixel starting from index 1 along the row direction, and p4 by sampling p02 every other pixel starting from index 1 along the row direction. The whole sampling process is completed in this way to obtain the sampling result.
Step S602, each band information of the initial feature is calculated as an image feature of the original image based on the sampling result.
Specifically, the calculation process of the band information may be completed by the following formulas 7 to 10:
LL = p1 + p2 + p3 + p4    (Equation 7)
HL = -p1 - p2 + p3 + p4    (Equation 8)
LH = -p1 + p2 - p3 + p4    (Equation 9)
HH = p1 - p2 - p3 + p4    (Equation 10)
Where LL is the low-frequency information, HL the high-frequency information in the vertical direction, LH the high-frequency information in the horizontal direction, and HH the high-frequency information in the diagonal direction. The low-frequency band reflects the image contour and the high-frequency bands reflect image details, so image features can be better preserved by the wavelet transform. As shown in fig. 7, the image on the left is the input original image and the image on the right is a schematic view after wavelet decomposition. After the wavelet transform, four different frequency bands of information are obtained; the abscissa in the right image indicates the image size after the wavelet transform.
Through steps S601 and S602, image features containing the information of each frequency band can be obtained without loss, which is beneficial to capturing details such as the texture and direction of the hair.
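The interval sampling of Equations 1 to 6 and the band computation of Equations 7 to 10 can be written as a single function. The following sketch assumes an NCHW tensor layout and stacks the four bands along the channel dimension, which is an implementation choice rather than something fixed by the patent.

```python
import torch

def haar_dwt(p: torch.Tensor) -> torch.Tensor:
    """Decompose an (N, C, H, W) tensor into LL, HL, LH, HH bands of size H/2 x W/2."""
    p01 = p[:, :, 0::2, :] / 2           # Equation 1
    p02 = p[:, :, 1::2, :] / 2           # Equation 2
    p1 = p01[:, :, :, 0::2]              # Equation 3
    p2 = p02[:, :, :, 0::2]              # Equation 4
    p3 = p01[:, :, :, 1::2]              # Equation 5
    p4 = p02[:, :, :, 1::2]              # Equation 6
    ll = p1 + p2 + p3 + p4               # Equation 7
    hl = -p1 - p2 + p3 + p4              # Equation 8
    lh = -p1 + p2 - p3 + p4              # Equation 9
    hh = p1 - p2 - p3 + p4               # Equation 10
    return torch.cat([ll, hl, lh, hh], dim=1)   # 4x channels, half the spatial size

# Example: a 1x3x256x256 input becomes 1x12x128x128 after one decomposition.
x = torch.randn(1, 3, 256, 256)
print(haar_dwt(x).shape)                 # torch.Size([1, 12, 128, 128])
```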
Correspondingly, the process of performing feature reconstruction based on the residual module fusion features to obtain the enhanced image is to perform multiple upsampling and feature fusion calculations on the residual module fusion features based on a plurality of upsampling modules. The number of upsampling modules corresponds to the number of downsampling modules, and the input features of the subsequent upsampling module of two adjacent upsampling modules are determined jointly from the output features of the preceding upsampling module and the output features of a target downsampling module, where the target downsampling module corresponds to the subsequent upsampling module. Specifically, the output features of the preceding upsampling module and the output features of the target downsampling module are added element-wise (elementwise add) to obtain the input features of the subsequent upsampling module. When the downsampling is a wavelet transform, the upsampling correspondingly uses an inverse wavelet transform (Inverse Wavelet Transform, abbreviated IWT) to reduce the loss of detail of the original image.
Specifically, the obtained LL, HL, LH and HH components are first spliced in the channel dimension and then restored; the specific calculation of the IWT is shown in Equations 11 to 18:
p1 = LL / 2    (Equation 11)
p2 = HL / 2    (Equation 12)
p3 = LH / 2    (Equation 13)
p4 = HH / 2    (Equation 14)
Rlt[:, :, 0::2, 0::2] = p1 - p2 - p3 + p4    (Equation 15)
Rlt[:, :, 1::2, 0::2] = p1 - p2 + p3 - p4    (Equation 16)
Rlt[:, :, 0::2, 1::2] = p1 + p2 - p3 - p4    (Equation 17)
Rlt[:, :, 1::2, 1::2] = p1 + p2 + p3 + p4    (Equation 18)
Where Rlt is the result of the final inverse wavelet transform.
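A matching sketch of the inverse transform of Equations 11 to 18 is given below; it assumes the band order LL, HL, LH, HH used in the DWT sketch above.

```python
import torch

def haar_iwt(bands: torch.Tensor) -> torch.Tensor:
    """Reconstruct an (N, 4C, H, W) band tensor into an (N, C, 2H, 2W) tensor."""
    n, c4, h, w = bands.shape
    c = c4 // 4
    ll, hl, lh, hh = bands[:, :c], bands[:, c:2*c], bands[:, 2*c:3*c], bands[:, 3*c:]
    p1, p2, p3, p4 = ll / 2, hl / 2, lh / 2, hh / 2       # Equations 11-14
    rlt = bands.new_zeros(n, c, 2 * h, 2 * w)
    rlt[:, :, 0::2, 0::2] = p1 - p2 - p3 + p4             # Equation 15
    rlt[:, :, 1::2, 0::2] = p1 - p2 + p3 - p4             # Equation 16
    rlt[:, :, 0::2, 1::2] = p1 + p2 - p3 - p4             # Equation 17
    rlt[:, :, 1::2, 1::2] = p1 + p2 + p3 + p4             # Equation 18
    return rlt
```

Under these definitions, haar_iwt(haar_dwt(x)) reproduces x exactly, which is the lossless property relied on here.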
Specifically, the feature reconstruction and step-by-step synthesis upsampling comprise 3 upsampling modules, each including a convolution layer and an IWT layer. The residual module fusion features are taken as the input y3 of the first upsampling module. The first upsampling module convolves y3, output by the bottom multi-scale residual modules, to increase the number of channels, applies a ReLU operation to improve nonlinearity, and obtains the features y3' through the IWT; y3' is added to the x2 obtained during downsampling to give the input features y2 of the second upsampling module. The second upsampling module performs feature reconstruction on y2 through convolution, ReLU and the IWT, and the result is combined with the x1 obtained during downsampling to give the input features y1 of the third upsampling module. The third upsampling module likewise uses convolution, a ReLU operation and the IWT to reconstruct y1 into y1'; y1' and x0 are added to obtain the features y0. After the upsampling process is completed, y0 is passed through a 3×3 convolution layer to obtain the final output feature y as the enhanced image. The addition in this embodiment is preferably performed element-wise (elementwise add), so that more information of the original image is retained, ensuring that the texture details of the enhanced image remain consistent with the hair direction information of the original image.
In some embodiments, the present application implements the hair enhancement method described above based on a neural network. When training the neural network, corresponding sample image pairs are required. Fig. 8 is a flowchart of a method for acquiring a sample image pair according to an embodiment of the present application; the method includes the following steps:
in step S801, a first sample image is acquired, and the image quality of the first sample image satisfies a preset image quality threshold.
Specifically, first sample images of high-definition pet hair or human hair can be collected with high-definition image acquisition equipment such as a single-lens reflex camera. The collected first sample images are required to have smooth hair, clear texture, high detail resolution and good consistency of hair direction; on this basis, a corresponding image quality threshold can be set to screen the first sample images.
In step S802, image degradation is performed on the first sample image to obtain a second sample image, where the image quality of the second sample image is lower than that of the first sample image.
Degradation refers to a process of reducing image quality. It can be realized by simulating operations such as JPEG compression, raw noise, lens blur and scaling, finally yielding a low-quality pet hair image that simulates the degradation of a real captured photo.
Step S803, the first sample image and the second sample image are taken as a sample image pair.
Through the above steps S801 to S803, the training set is built with image acquisition equipment capable of capturing high-quality images of real scenes, collecting high-definition hair images under different lighting, in different environments and from different angles; the hair in these images is required to be smooth, with clear texture, high detail resolution and good consistency of hair direction. Paired low-quality images are then obtained through degradation, simulating the low-quality hair images actually captured in real scenes, and the sample image pairs are finally obtained. This ensures strict alignment between input and output, avoids the problem of pixel misalignment, and leads to a better training result for the neural network.
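As an illustration of the degradation step, the following sketch shows one possible pipeline built from the operation types named above (lens blur, scaling, noise, JPEG compression); the specific kernel size, noise level, scale factor and JPEG quality are illustrative assumptions, not values from the patent.

```python
import cv2
import numpy as np

def degrade(hq: np.ndarray) -> np.ndarray:
    """Turn a high-quality (uint8 BGR) image into a simulated low-quality counterpart."""
    h, w = hq.shape[:2]
    img = cv2.GaussianBlur(hq, (5, 5), sigmaX=1.2)                  # lens blur
    img = cv2.resize(img, (w // 2, h // 2), interpolation=cv2.INTER_AREA)
    img = cv2.resize(img, (w, h), interpolation=cv2.INTER_LINEAR)   # down/up scaling
    noise = np.random.normal(0.0, 3.0, img.shape)                   # additive noise
    img = np.clip(img.astype(np.float32) + noise, 0, 255).astype(np.uint8)
    ok, buf = cv2.imencode(".jpg", img, [cv2.IMWRITE_JPEG_QUALITY, 60])
    return cv2.imdecode(buf, cv2.IMREAD_COLOR)                      # JPEG compression

# Usage: lq = degrade(hq); the pair (hq, lq) then forms one training sample pair.
```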
Further, the training of the neural network comprises the following steps:
s1, acquiring a plurality of sample image pairs;
s2, inputting a training set formed by a plurality of sample image pairs into a neural network to be trained based on a multi-scale residual error network structure, and training. The loss function of the neural network in this embodiment is shown in formula 19:
L = (1/n)·Σ(λ1·L1 + λ2·L_SSIM + λ3·L_VGG + λ4·L_GAN)    (Equation 19)
To improve the realism of the generated results, the loss function in this embodiment is a weighted sum of several sub-loss functions, where L denotes the final loss function, n denotes the number of sample image pairs, L1 is the pixel-by-pixel loss, L_SSIM is the structural similarity loss, L_VGG is the perceptual loss, L_GAN is the adversarial loss of the generative adversarial network, and the weights λ1, λ2, λ3 and λ4 can be set as required. The loss is calculated from the output of the neural network and the ground-truth training data, and training is finished when the value of the loss function reaches its minimum or the number of iterations exceeds a preset threshold.
And S3, saving the neural network model for hair enhancement when the convergence condition is met.
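As an illustration only, a weighted combination of this kind can be assembled as in the following sketch; the SSIM, perceptual and adversarial terms are passed in as callables because the patent does not fix their implementations, and the default weights and batch averaging are assumptions.

```python
import torch
import torch.nn.functional as F

def hair_enhancement_loss(pred, target, ssim_loss, vgg_loss, gan_loss,
                          lambdas=(1.0, 0.1, 0.1, 0.01)):
    """Weighted sum of the four sub-losses; the three extra callables are supplied by the caller."""
    w1, w2, w3, w4 = lambdas
    loss_pixel = F.l1_loss(pred, target)     # L1: pixel-by-pixel loss
    loss_ssim = ssim_loss(pred, target)      # L_SSIM: structural similarity loss
    loss_vgg = vgg_loss(pred, target)        # L_VGG: perceptual loss
    loss_gan = gan_loss(pred)                # L_GAN: adversarial loss from a discriminator
    return w1 * loss_pixel + w2 * loss_ssim + w3 * loss_vgg + w4 * loss_gan
```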
A preferred embodiment is given below taking the application scenario of pet hair enhancement as an example.
In this embodiment, the neural network has the structure shown in fig. 9 and includes an initial feature extraction module, a plurality of downsampling modules, a plurality of residual modules, a fusion module, a plurality of upsampling modules and a repair enhancement module. The initial feature extraction module is used to extract basic features of the original image; the downsampling modules, residual modules, fusion module and upsampling modules are used to mine more feature information of the image; and the repair enhancement module implements the final repair and enhancement of the pet hair.
The initial feature extraction module is realized by a 3×3 convolution layer and is responsible for extracting pixel-level low-level features x0 from the low-quality pet hair image x fed into the neural network, representing the feature information of x with more output channels. The convolution kernel size is preferably 3×3, which avoids the increase in network parameters caused by an oversized kernel and reduces the computation consumed in the network inference stage.
The three downsampling modules decompose and downsample x0 step by step, and the output features x1, x2 and x3 are obtained in turn through the DWT decomposition layers and convolution layers.
In this embodiment, the plurality of residual modules are, by way of example, four identical multi-scale residual modules, and the fusion module is a 1×1 convolution layer used to change the number of output channels and increase the correlation between features at different receptive field depths. Each residual module consists of several 3×3 convolution layers and a 1×1 convolution layer, with the residual layers used for residual calculation. Finally, the fusion module convolves the output features of the four residual modules, after they have been spliced in the channel dimension, to obtain the bottom-layer feature extraction result, namely the residual module fusion features.
The plurality of upsampling modules are, by way of example, three upsampling modules, each including a convolution layer and an IWT reconstruction layer; after the calculation of the three upsampling modules, y1' and x0 are added to obtain the output features y0.
The final repair enhancement module is realized by a deconvolution layer with a 3×3 convolution kernel and a stride of 2, which deconvolves y0 to obtain the final repair and reconstruction result y.
It can be seen that, in this embodiment, the convolution layers of the initial feature extraction module, the downsampling modules, the upsampling modules and the repair enhancement module are all 3×3, which reduces parameter calculation, lowers the computational load of the neural network and facilitates deployment on mobile devices. The convolution layers of the fusion modules are all 1×1, which increases the correlation between features at different receptive field depths.
The addition shown in fig. 9 is preferably performed element-wise (elementwise add), so that more information of the original image is retained, ensuring that the texture details of the enhanced image remain consistent with the hair direction information of the original image.
Further, this embodiment implements progressive decomposition downsampling and progressive synthesis upsampling with the DWT and IWT. Compared with the traditional implementation of downsampling and upsampling through convolution and deconvolution, using the DWT and IWT has two advantages: 1. parameters and computation are reduced, since the DWT and IWT are parameter-free operations that are simple to compute, avoiding the performance cost of parameterized up- and down-sampling; 2. after the original image is represented by the four components HH, HL, LH and LL, the high-frequency detail information of the image can be mined effectively, and because the DWT and IWT form a pair of lossless transforms, the content of the original image can be restored without losing detail.
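Putting the pieces together, the following sketch composes the modules described above into one forward pass. It assumes the haar_dwt, haar_iwt and MultiScaleResBlock sketches from earlier in this description are in scope, uses a single channel width at every scale, and requires the input height and width to be divisible by 8; none of these choices are fixed by the patent.

```python
import torch
import torch.nn as nn

class HairEnhancementNet(nn.Module):
    """Head conv -> 3x (DWT + conv) -> 4x MSRB -> 1x1 fusion -> 3x (conv + IWT) -> deconv tail."""
    def __init__(self, in_ch: int = 3, ch: int = 16, num_msrb: int = 4):
        super().__init__()
        self.head = nn.Conv2d(in_ch, ch, 3, padding=1)                         # x -> x0
        self.down = nn.ModuleList([
            nn.Sequential(nn.Conv2d(ch * 4, ch, 3, padding=1), nn.ReLU()) for _ in range(3)])
        self.msrbs = nn.ModuleList([MultiScaleResBlock(ch) for _ in range(num_msrb)])
        self.fuse = nn.Conv2d(ch * num_msrb, ch, 1)                            # 1x1 fusion module
        self.up = nn.ModuleList([
            nn.Sequential(nn.Conv2d(ch, ch * 4, 3, padding=1), nn.ReLU()) for _ in range(3)])
        self.tail = nn.ConvTranspose2d(ch, in_ch, 3, stride=2, padding=1, output_padding=1)

    def forward(self, x):                              # H and W assumed divisible by 8
        x0 = self.head(x)
        skips, feat = [x0], x0
        for stage in self.down:                        # x0 -> x1 -> x2 -> x3
            feat = stage(haar_dwt(feat))
            skips.append(feat)
        outs = []
        for block in self.msrbs:                       # cascaded multi-scale residual modules
            feat = block(feat)
            outs.append(feat)
        feat = self.fuse(torch.cat(outs, dim=1))       # residual module fusion features (y3)
        for i, stage in enumerate(self.up):            # y3 -> y2 -> y1 -> y0, with skip additions
            feat = haar_iwt(stage(feat)) + skips[2 - i]
        return self.tail(feat)                         # repair enhancement module
```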
As shown in fig. 10, src denotes the original image and rlt denotes the repaired enhanced image. The repaired and reconstructed pet hair has clearer texture, and its direction is consistent with the original image, so the hair resolving power of the original image is noticeably enhanced and the human visual effect is improved.
Therefore, the hair enhancement method based on the multi-scale residual network structure in this embodiment can solve problems such as blurring, noise and defocus occurring in the hair regions of an image. Features of different receptive fields can be obtained through the multi-scale residual structure, the missing high-frequency detail information is well recovered, training is convenient and the training process remains stable, and repair and enhancement of low-quality hair regions is finally achieved.
It should be noted that the steps illustrated in the above-described flow or flow diagrams of the figures may be performed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is illustrated in the flow diagrams, in some cases, the steps illustrated or described may be performed in an order other than that illustrated herein.
In this embodiment, a neural network is further provided, and the neural network is used to implement the foregoing embodiments and preferred implementations, and will not be described in detail. The terms "module," "unit," "sub-unit," and the like as used below may refer to a combination of software and/or hardware that performs a predetermined function. While the means described in the following embodiments are preferably implemented in software, implementations in hardware, or a combination of software and hardware, are also possible and contemplated.
Fig. 11 is a block diagram of a neural network according to an embodiment of the present application, as shown in fig. 11, for performing hair enhancement, including an acquisition module 1101, a plurality of residual modules 1102, and a reconstruction module 1103;
an acquisition module 1101, configured to acquire image features of an original image;
the plurality of residual modules 1102 are used for sequentially carrying out residual calculation and feature fusion on the image features of the original image to obtain residual module fusion features, wherein the input features of the subsequent residual modules in the two adjacent residual modules are output features of the preceding residual modules;
and the reconstruction module 1103 is configured to perform feature reconstruction based on the fusion features of the residual module, so as to obtain an enhanced image corresponding to the original image.
Through the above neural network, in this embodiment the image features of the original image are processed by the plurality of residual modules 1102 to obtain residual module fusion features that include details such as direction and texture of the original image, and the reconstruction module 1103 performs feature reconstruction based on the residual module fusion features to obtain the enhanced image.
Further, the first residual module performs convolution fusion on the image features to obtain first features; the second residual module performs convolution fusion on the first features to obtain second features; and the fusion module fuses the first features and the second features to obtain the residual module fusion features.
Further, the neural network further comprises a third residual module and a fourth residual module.
Further, for two adjacent residual layers, performing convolution calculation on the final output characteristics of the previous residual layer through the subsequent residual layer to obtain convolution output characteristics, and adding the convolution output characteristics and the final output characteristics of the previous residual layer to obtain final output characteristics of the subsequent residual layer; under the condition that a plurality of residual layers exist, the fusion layer splices the final output characteristics of each residual layer and the final output characteristics of the initial layer of the residual module to obtain residual layer splicing characteristics; the input features of the residual modules and the splicing features of the residual layers jointly determine the output features of the residual modules.
Further, the obtaining module 1101 is further configured to obtain an initial feature of an original image; and downsampling the initial characteristics to obtain the image characteristics of the original image.
Further, the obtaining module 1101 performs step-by-step downsampling on the initial features based on a plurality of downsampling modules to obtain image features of an original image, where input features of a subsequent downsampling module in two adjacent downsampling modules are output features of a preceding downsampling module.
Further, downsampling the initial features is accomplished by wavelet transform.
Further, the reconstruction module 1103 is further configured to perform multiple upsampling and feature fusion calculation on the residual module fusion feature based on the multiple upsampling modules, so as to obtain an enhanced image; the number of the up-sampling modules corresponds to the number of the down-sampling modules, and in two adjacent up-sampling modules, the input characteristics of the following up-sampling module are determined together according to the output characteristics of the preceding up-sampling module and the output characteristics of the target down-sampling module, and the target down-sampling module corresponds to the following up-sampling module.
Further, the wavelet transformation comprises the steps of respectively sampling the initial characteristics of the original image at intervals on rows and columns according to a preset step length to obtain a sampling result; each band information of the initial feature is calculated as an image feature of the original image based on the sampling result.
Further, the method for acquiring the sample image pair of the training neural network comprises the following steps: collecting a first sample image, wherein the image quality of the first sample image meets a preset image quality threshold; performing image degradation on the first sample image to obtain a second sample image, wherein the image quality of the second sample image is lower than that of the first sample image; the first sample image and the second sample image are taken as a sample image pair.
The above-described respective modules may be functional modules or program modules, and may be implemented by software or hardware. For modules implemented in hardware, the various modules described above may be located in the same processor; or the above modules may be located in different processors in any combination.
There is also provided in this embodiment an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, where the transmission device is connected to the processor, and the input/output device is connected to the processor.
Alternatively, in the present embodiment, the above-described processor may be configured to execute the following steps by a computer program:
S1, acquiring image features of an original image.
S2, sequentially performing residual calculation and feature fusion on the image features of the original image through a plurality of residual modules to obtain residual module fusion features, where, for two adjacent residual modules, the input features of the later residual module are the output features of the earlier residual module.
S3, performing feature reconstruction based on the residual module fusion features to obtain an enhanced image corresponding to the original image. A minimal end-to-end sketch of these three steps is given below.
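Taken together, steps S1 to S3 amount to a forward pass of the following form. The channel width, the number of residual modules, the simple two-convolution residual blocks, and the global skip connection are illustrative assumptions rather than details prescribed by this embodiment.

import torch
import torch.nn as nn

class HairEnhancementSketch(nn.Module):
    """End-to-end sketch of steps S1-S3: acquire image features, chain
    residual modules, fuse their outputs, and reconstruct the enhanced
    image. All widths and counts are illustrative assumptions."""

    def __init__(self, channels: int = 32, num_residual_modules: int = 2):
        super().__init__()
        self.acquire = nn.Conv2d(3, channels, kernel_size=3, padding=1)          # S1
        self.residual_modules = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1),
            )
            for _ in range(num_residual_modules)
        ])
        self.fuse = nn.Conv2d(channels * num_residual_modules, channels, kernel_size=1)
        self.reconstruct = nn.Conv2d(channels, 3, kernel_size=3, padding=1)      # S3

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        feat = self.acquire(image)                      # S1: image features
        outputs, x = [], feat
        for module in self.residual_modules:            # S2: chained residual modules
            x = x + module(x)                           # later module's input = earlier module's output
            outputs.append(x)
        fused = self.fuse(torch.cat(outputs, dim=1))    # residual module fusion features
        return image + self.reconstruct(fused)          # S3: reconstruct the enhanced image


if __name__ == "__main__":
    model = HairEnhancementSketch()
    out = model(torch.rand(1, 3, 64, 64))
    print(out.shape)   # torch.Size([1, 3, 64, 64])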
It should be noted that specific examples in this embodiment may refer to the examples described in the foregoing embodiments and optional implementations, and are not repeated in this embodiment.
In addition, in combination with the hair enhancement method provided in the above embodiments, a computer-readable storage medium may be provided in this embodiment. The storage medium stores one or more programs executable by one or more processors; the programs, when executed by a processor, implement any of the methods of the embodiments described above.
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit it. Based on the embodiments provided herein, all other embodiments obtained by one of ordinary skill in the art without creative effort fall within the scope of protection of the present application.
It is evident that the drawings are only examples or embodiments of the present application, from which a person skilled in the art can adapt the present application to other similar situations without inventive effort. In addition, it should be appreciated that although such development effort might be complex and time-consuming, it would nevertheless be a routine undertaking of design, fabrication, or manufacture for those of ordinary skill having the benefit of this disclosure, and the disclosure should therefore not be construed as insufficient.
It should be noted that the user information (including but not limited to user equipment information and user personal information) and data (including but not limited to data for analysis, stored data, and displayed data) referred to in the present application are information and data authorized by the user or fully authorized by all parties.
The term "embodiment" in this application means that a particular feature, structure, or characteristic described in connection with an embodiment may be included in at least one embodiment of the application. The appearance of this term in various places in the specification does not necessarily refer to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of ordinary skill in the art will understand, explicitly or implicitly, that the embodiments described in this application can be combined with other embodiments in the absence of conflict.
The above examples represent only a few embodiments of the present application; they are described in relative detail but are not to be construed as limiting the scope of the patent. It should be noted that various modifications and improvements apparent to those skilled in the art may be made without departing from the concept of the present application, and all such modifications and improvements fall within the scope of protection of the present application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.
Those of ordinary skill in the art will appreciate that all or some of the steps, systems, and functional modules/units in the apparatus and methods disclosed above may be implemented as software, firmware, hardware, or suitable combinations thereof. In a hardware implementation, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed cooperatively by several physical components. Some or all of the components may be implemented as software executed by a processor, such as a digital signal processor or microprocessor, as hardware, or as an integrated circuit, such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those skilled in the art, the term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile discs (DVD) or other optical disc storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information delivery media.

Claims (13)

1. A method of hair enhancement, the method comprising:
acquiring image features of an original image;
sequentially performing residual calculation and feature fusion on the image features of the original image through a plurality of residual modules to obtain residual module fusion features, wherein, for two adjacent residual modules, the input features of the later residual module are the output features of the earlier residual module;
and performing feature reconstruction based on the residual module fusion features to obtain an enhanced image corresponding to the original image.
2. The hair enhancement method according to claim 1, wherein sequentially performing residual calculation and feature fusion on the image features of the original image through a plurality of residual modules to obtain the residual module fusion features comprises:
performing convolution and fusion on the image features based on a first residual module to obtain first features;
performing convolution and fusion on the first features based on a second residual module to obtain second features;
and fusing the first features and the second features to obtain the residual module fusion features.
3. The hair enhancement method according to claim 2, wherein the plurality of residual modules further comprises a third residual module and a fourth residual module.
4. The hair enhancement method according to claim 1 or 2, wherein each residual module comprises a plurality of residual layers, and obtaining the output features of the residual module comprises:
for two adjacent residual layers, performing a convolution calculation on the final output features of the earlier residual layer through the later residual layer to obtain convolution output features, and adding the convolution output features to the final output features of the earlier residual layer to obtain the final output features of the later residual layer;
when a plurality of residual layers exist, concatenating the final output features of each residual layer with the final output features of the initial layer of the residual module to obtain residual-layer concatenation features;
and determining the output features of the residual module according to the input features of the residual module and the residual-layer concatenation features.
5. The hair enhancement method according to claim 1, wherein said acquiring image features of the original image comprises:
acquiring initial features of the original image;
and downsampling the initial features to obtain the image features of the original image.
6. The hair enhancement method according to claim 5, wherein downsampling the initial features to obtain the image features of the original image comprises:
downsampling the initial features step by step based on a plurality of downsampling modules to obtain the image features of the original image, wherein, for two adjacent downsampling modules, the input features of the later downsampling module are the output features of the earlier downsampling module.
7. The hair enhancement method according to claim 5 or 6, wherein the downsampling of the initial features is implemented by a wavelet transform.
8. The hair enhancement method according to claim 7, wherein performing feature reconstruction based on the residual module fusion features to obtain the enhanced image corresponding to the original image comprises:
performing repeated upsampling and feature fusion on the residual module fusion features based on a plurality of upsampling modules to obtain the enhanced image; wherein the number of upsampling modules corresponds to the number of downsampling modules, and, for two adjacent upsampling modules, the input features of the later upsampling module are determined jointly according to the output features of the earlier upsampling module and the output features of a target downsampling module, the target downsampling module being the downsampling module corresponding to the later upsampling module.
9. The hair enhancement method according to claim 7, wherein the wavelet transform comprises:
sampling the initial features of the original image at intervals along rows and columns according to a preset step length to obtain sampling results;
and calculating the frequency-band information of the initial features according to the sampling results to serve as the image features of the original image.
10. The hair enhancement method according to claim 1, wherein the hair enhancement method is implemented based on a neural network, and the method for acquiring sample image pairs for training the neural network comprises:
collecting a first sample image, wherein the image quality of the first sample image meets a preset image quality threshold;
performing image degradation on the first sample image to obtain a second sample image, wherein the image quality of the second sample image is lower than that of the first sample image;
and taking the first sample image and the second sample image as a sample image pair.
11. A neural network, comprising an acquisition module, a plurality of residual modules, and a reconstruction module, wherein:
the acquisition module is configured to acquire image features of an original image;
the residual modules are configured to sequentially perform residual calculation and feature fusion on the image features of the original image to obtain residual module fusion features, wherein, for two adjacent residual modules, the input features of the later residual module are the output features of the earlier residual module;
and the reconstruction module is configured to perform feature reconstruction based on the residual module fusion features to obtain an enhanced image corresponding to the original image.
12. An electronic device, comprising a processor and a memory for storing executable instructions of the processor, wherein the processor is configured to perform the hair enhancement method according to any one of claims 1 to 10 by executing the executable instructions.
13. A computer-readable storage medium storing one or more programs, wherein the one or more programs are executable by one or more processors to implement the hair enhancement method according to any one of claims 1 to 10.