CN112001940B - Image processing method and device, terminal and readable storage medium - Google Patents

Image processing method and device, terminal and readable storage medium

Info

Publication number
CN112001940B
Authority
CN
China
Prior art keywords
portrait
background
scaling
target
cut
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010852014.1A
Other languages
Chinese (zh)
Other versions
CN112001940A (en)
Inventor
Fang Dongdong (方冬冬)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Oppo Chongqing Intelligent Technology Co Ltd
Original Assignee
Oppo Chongqing Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo Chongqing Intelligent Technology Co Ltd filed Critical Oppo Chongqing Intelligent Technology Co Ltd
Priority to CN202010852014.1A priority Critical patent/CN112001940B/en
Publication of CN112001940A publication Critical patent/CN112001940A/en
Application granted granted Critical
Publication of CN112001940B publication Critical patent/CN112001940B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T 7/194: Image analysis; Segmentation; Edge detection involving foreground-background segmentation
    • G06N 3/045: Neural networks; Architecture, e.g. interconnection topology; Combinations of networks
    • G06N 3/08: Neural networks; Learning methods
    • G06T 5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T 2207/20081: Special algorithmic details; Training; Learning
    • G06T 2207/20084: Special algorithmic details; Artificial neural networks [ANN]
    • G06T 2207/20221: Special algorithmic details; Image combination; Image fusion; Image merging
    • G06T 2207/30196: Subject of image; Context of image processing; Human being; Person

Abstract

The application discloses an image processing method, which comprises the following steps: separating a portrait and a background of an input image; reconstructing the background based on a preset first training model to obtain a target background; reconstructing the portrait based on a preset second training model to obtain a target portrait, wherein the second training model is different from the first training model; and fusing the target background and the target portrait to obtain a target image. The application also discloses an image processing device, a terminal, and a non-volatile computer-readable storage medium. In this image processing method, the background and the portrait are reconstructed with two different training models to obtain a sharper background and a sharper portrait, and the portrait and the background are then fused, so that the resulting target image is sharper than the input image, which helps improve imaging quality.

Description

Image processing method and device, terminal and readable storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, a terminal, and a non-volatile computer-readable storage medium.
Background
When a mobile phone is used to capture images, defocus, shake, magnification, and similar factors can leave both the portrait and the background in the image blurred and unclear. When the subject is close to the phone camera, the background is strongly blurred, and slight movement of the subject or inaccurate focusing is also likely to blur the portrait. When the subject is far from the phone camera, the portrait occupies too small a portion of the frame, so its resolution is low, and slight movement of the subject or inaccurate focusing is more likely to blur the background. The reduced sharpness of both the background and the portrait degrades the imaging quality of the captured image.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device, a terminal and a non-volatile computer readable storage medium.
The image processing method of the embodiment of the application comprises the following steps: separating a portrait and a background of an input image; reconstructing the background based on a preset first training model to obtain a target background; reconstructing the portrait to obtain a target portrait based on a preset second training model, wherein the second training model is different from the first training model; and fusing the target background and the target portrait to obtain a target image.
According to the image processing method, the portrait and the background of the input image are first separated; the background is then reconstructed by the first training model to obtain the target background, and the portrait is reconstructed by the second training model, which differs from the first training model, to obtain the target portrait; finally, the target background and the target portrait are fused to obtain the target image. Because the portrait and the background lose sharpness for different reasons, reconstructing them with two different training models yields a sharper background and a sharper portrait, so the target image obtained by fusion is sharper than the input image, which helps improve imaging quality.
The image processing device comprises a separation module, a first reconstruction module, a second reconstruction module, and a fusion module. The separation module is used for separating a portrait and a background of an input image; the first reconstruction module is used for reconstructing the background based on a preset first training model to obtain a target background; the second reconstruction module is used for reconstructing the portrait based on a preset second training model to obtain a target portrait, the second training model being different from the first training model; and the fusion module is used for fusing the target background and the target portrait to obtain a target image.
In the image processing device of the embodiment of the application, the portrait and the background of the input image are first separated; the background is then reconstructed by the first training model to obtain the target background, and the portrait is reconstructed by the second training model, which differs from the first training model, to obtain the target portrait; finally, the target background and the target portrait are fused to obtain the target image. Because the portrait and the background lose sharpness for different reasons, reconstructing them with two different training models yields a sharper background and a sharper portrait, so the target image obtained by fusion is sharper than the input image, which helps improve imaging quality.
The terminal of the embodiment of the application comprises a processor, and the processor is used for: separating a portrait and a background of an input image; reconstructing the background based on a preset first training model to obtain a target background; reconstructing the portrait to obtain a target portrait based on a preset second training model, wherein the second training model is different from the first training model; and fusing the target background and the target portrait to obtain a target image.
In the terminal of the embodiment of the application, the portrait and the background of the input image are first separated; the background is then reconstructed by the first training model to obtain the target background, and the portrait is reconstructed by the second training model, which differs from the first training model, to obtain the target portrait; finally, the target background and the target portrait are fused to obtain the target image. Because the portrait and the background lose sharpness for different reasons, reconstructing them with two different training models yields a sharper background and a sharper portrait, so the target image obtained by fusion is sharper than the input image, which helps improve imaging quality.
A non-transitory computer-readable storage medium of an embodiment of the present application stores a computer program that, when executed by one or more processors, implements an image processing method of an embodiment of the present application. The image processing method of the embodiment of the application comprises the following steps: separating a portrait and a background of an input image; reconstructing the background based on a preset first training model to obtain a target background; reconstructing the portrait to obtain a target portrait based on a preset second training model, wherein the second training model is different from the first training model; and fusing the target background and the target portrait to obtain a target image.
In the non-volatile computer-readable storage medium of the embodiment of the application, the portrait and the background of the input image are first separated; the background is then reconstructed by the first training model to obtain the target background, and the portrait is reconstructed by the second training model, which differs from the first training model, to obtain the target portrait; finally, the target background and the target portrait are fused to obtain the target image. Because the portrait and the background lose sharpness for different reasons, reconstructing them with two different training models yields a sharper background and a sharper portrait, so the target image obtained by fusion is sharper than the input image, which helps improve imaging quality.
Additional aspects and advantages of embodiments of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of embodiments of the present application.
Drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application;
fig. 2 is a block diagram of a terminal according to an embodiment of the present application;
FIG. 3 is a block diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 4 is a schematic flow chart of an image processing method according to an embodiment of the present application;
FIG. 5 is a block diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 6 is a flowchart illustrating an image processing method according to an embodiment of the present application;
FIG. 7 is a block diagram of an obtaining unit of a separation module of an image processing apparatus according to an embodiment of the present application;
FIG. 8 is a schematic flow chart of an image processing method according to an embodiment of the present application;
FIG. 9 is a block diagram illustrating a segmentation unit of a separation module of an image processing apparatus according to an embodiment of the present application;
FIG. 10 is a schematic flow chart of an image processing method according to an embodiment of the present application;
FIG. 11 is a block diagram of a first reconstruction module of an image processing apparatus according to an embodiment of the present application;
FIG. 12 is a schematic diagram of an image processing method according to an embodiment of the present application;
fig. 13 is a flowchart illustrating an image processing method according to an embodiment of the present application;
FIG. 14 is a block diagram of a second reconstruction module of the image processing apparatus according to the embodiment of the present application;
FIG. 15 is a schematic diagram illustrating an image processing method according to an embodiment of the present application;
fig. 16 is a flowchart illustrating an image processing method according to an embodiment of the present application;
fig. 17 is a block diagram of a fusion module of the image processing apparatus according to the embodiment of the present application; and
fig. 18 is a schematic diagram illustrating a connection relationship between a computer-readable storage medium and a processor according to an embodiment of the present application.
Detailed Description
Embodiments of the present application will be further described with reference to the drawings. The same or similar reference numbers in the drawings identify the same or similar elements or elements having the same or similar functionality throughout. In addition, the embodiments of the present application described below in conjunction with the accompanying drawings are exemplary and are only for the purpose of explaining the embodiments of the present application, and are not to be construed as limiting the present application.
Referring to fig. 1 to 3, an image processing method according to an embodiment of the present disclosure includes the following steps:
011: separating a portrait and a background of an input image;
012: reconstructing a background based on a preset first training model to obtain a target background;
013: reconstructing the portrait to obtain a target portrait based on a preset second training model, wherein the second training model is different from the first training model; and
014: and fusing the target background and the target portrait to obtain a target image.
The image processing apparatus 10 according to the embodiment of the present application includes a separation module 11, a first reconstruction module 12, a second reconstruction module 13, and a fusion module 14, and the separation module 11, the first reconstruction module 12, the second reconstruction module 13, and the fusion module 14 may be respectively configured to implement step 011, step 012, step 013, and step 014. That is, the separation module 11 may be used to separate a portrait and a background of an input image; the first reconstruction module 12 may be configured to reconstruct a background based on a preset first training model to obtain a target background; the second reconstruction module 13 may be configured to reconstruct the portrait based on a preset second training model, where the second training model is different from the first training model, to obtain a target portrait; the fusion module 14 may be configured to fuse the target background and the target portrait to obtain the target image.
The terminal 100 of the embodiment of the present application includes a processor 20, and the processor 20 may be configured to separate a portrait and a background of an input image; reconstructing the background based on a preset first training model to obtain a target background; reconstructing the portrait to obtain a target portrait based on a preset second training model, wherein the second training model is different from the first training model; and fusing the target background and the target portrait to obtain a target image. That is, processor 20 may be configured to implement step 011, step 012, step 013, and step 014.
In the image processing method, the image processing apparatus 10, and the terminal 100 of the embodiments of the application, the portrait and the background of the input image are first separated; the background is then reconstructed by the first training model to obtain the target background, and the portrait is reconstructed by the second training model, which differs from the first training model, to obtain the target portrait; finally, the target background and the target portrait are fused to obtain the target image.
The terminal 100 includes a housing 30 and a processor 20, the processor 20 being mounted within the housing 30. The terminal 100 may be a mobile phone, a tablet computer, a display, a smart watch, a head-mounted display device, a camera, a gate, an access-control device, a game machine, and the like, which are not listed exhaustively herein. In the embodiments of the present application, the terminal 100 is described as a mobile phone by way of example; it is understood that the specific form of the terminal 100 is not limited to a mobile phone. The housing 30 may also be used to mount functional modules of the terminal 100, such as a power supply device, an imaging device, and a communication device, so that the housing 30 protects these functional modules against dust, drops, and water.
Referring to fig. 2, a terminal 100 (e.g., a mobile phone) generally includes a camera 40 through which users can take pictures, and recording daily life with the camera 40 has become common. However, when the terminal 100 is used to take a picture, the portrait and the background in the picture may be blurred because of defocus, shake, magnification, and the like, which easily gives the user a poor photographing experience. Moreover, the reasons for the reduced sharpness of the portrait and of the background are different; if the portrait and the background are optimized through the same model, it is easy to end up with a sharp portrait but a still-blurry background, or a sharp background but a still-blurry portrait.
Specifically, in step 011, the portrait and the background of the input image are separated. A portrait mask of the input image can be obtained through a portrait segmentation model (e.g., DeepLab, U-net, or a Fully Convolutional Network (FCN)), and the portrait of the input image is then separated from the background to obtain two images: one of the portrait and one of the background. In order to improve the edge information of the portrait and the background, the separated portrait and background may be processed with guided filtering or similar operations. The input image may be an image captured in real time by the user during shooting, an image previously captured by the user, or an image captured by another terminal 100, which is not limited herein.
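By way of non-limiting illustration, the following Python sketch shows one possible way to split an input image into a portrait layer and a background layer once a portrait mask is available; segment_portrait is a hypothetical stand-in for a segmentation network such as DeepLab, U-net, or an FCN, and element-wise masking is only one of several ways the separation could be realized.

```python
import numpy as np

def segment_portrait(image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a portrait segmentation network
    (e.g. DeepLab, U-net, or an FCN); returns a float mask in [0, 1],
    shaped HxW, with 1.0 on the person."""
    raise NotImplementedError

def separate_portrait_and_background(image: np.ndarray):
    # Step 011: obtain the portrait mask, then split the image into two layers.
    mask = segment_portrait(image)[..., None]                       # HxWx1
    portrait = (image.astype(np.float32) * mask).astype(np.uint8)
    background = (image.astype(np.float32) * (1.0 - mask)).astype(np.uint8)
    return portrait, background, mask
```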
In step 012, the background is reconstructed based on the preset first training model to obtain the target background. Because the causes of background sharpness reduction are relatively simple compared with those of the portrait, high-resolution data can be acquired and corresponding low-resolution data can be generated with auxiliary means such as down-sampling and hand-designed blur kernels, forming training and testing data pairs; training and testing are then performed to obtain the first training model. For example, the first training model may be obtained by supervised learning, such as with a Very Deep Super-Resolution (VDSR) network. A high-resolution background corresponding to the original background can then be reconstructed through the first training model and used as the target background, improving the sharpness of the background.
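The following sketch illustrates, under assumed parameter values, how a low-resolution training input could be simulated from a high-resolution background with a simple blur kernel and down-sampling; the scale factor and blur sigma are illustrative choices, not values specified by this disclosure.

```python
import cv2
import numpy as np

def make_background_training_pair(hr_background: np.ndarray, scale: int = 3,
                                  blur_sigma: float = 1.2):
    # Simulate a degraded low-resolution input from a sharp high-resolution
    # background: blur with a simple kernel, then bicubic down-sample.
    blurred = cv2.GaussianBlur(hr_background, (0, 0), blur_sigma)
    h, w = hr_background.shape[:2]
    lr = cv2.resize(blurred, (w // scale, h // scale),
                    interpolation=cv2.INTER_CUBIC)
    return lr, hr_background  # (network input, supervision target)
```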
In step 013, the portrait is reconstructed based on the preset second training model to obtain the target portrait. Because the reasons for reduced portrait sharpness are complex, using the same training model as for the background reconstruction would still yield a portrait of low sharpness; the training model used to reconstruct the portrait therefore needs to differ from the training model used to reconstruct the background. The second training model can be obtained by training in a generative manner: for example, low-quality pictures under various conditions and the corresponding high-quality pictures are collected to form training and testing data pairs, and training and testing are performed to obtain the second training model. For example, an Enhanced Super-Resolution Generative Adversarial Network (ESRGAN) may be used to obtain the second training model; the separated portrait is input into the second training model, which can produce a higher-resolution, sharper portrait as the target portrait.
In step 014, the target background and the target portrait are fused to obtain the target image. It can be understood that the high-sharpness target background and target portrait are obtained in step 012 and step 013, respectively, and they can be fused to obtain the target image. For example, the target portrait may be added at the corresponding location in the target background, or the coordinates may be converted according to the relation between the portrait and the background in the input image so as to fuse the target portrait into the target background. Because both the target background and the target portrait are sharp, the fused target image is sharper.
Referring to fig. 4 and 5, in some embodiments, step 011 includes the steps of:
0111: acquiring a first portrait mask of an input image;
0112: removing the interference portrait in the first portrait mask to obtain a second portrait mask;
0113: performing feathering treatment on the second portrait mask to obtain a third portrait mask;
0114: expanding the third portrait mask outwards by preset pixel values to obtain a fourth portrait mask; and
0115: the fourth portrait mask is segmented to separate out the portrait and the background.
In some embodiments, the separation module 11 includes an obtaining unit 111, a removing unit 112, a feathering unit 113, an expanding unit 114, and a segmenting unit 115, where the obtaining unit 111 may be configured to obtain a first portrait mask of the input image; the removing unit 112 may be configured to remove the interfering person in the first person mask to obtain a second person mask; the feathering unit 113 may be configured to feather the second portrait mask to obtain a third portrait mask; the expanding unit 114 may be configured to expand the third portrait mask outward by preset pixel values to obtain a fourth portrait mask; the segmentation unit 115 may be configured to segment the fourth portrait mask to separate out the portrait and the background. That is, the obtaining unit 111, the removing unit 112, the feathering unit 113, the expanding unit 114, and the dividing unit 115 may be respectively configured to implement the steps 0111, 0112, 0113, 0114, and 0115.
In some embodiments, the processor 20 may be configured to: acquiring a first portrait mask of an input image; removing the interference portrait in the first portrait mask to obtain a second portrait mask; performing feathering treatment on the second portrait mask to obtain a third portrait mask; expanding the third portrait mask outwards by preset pixel values to obtain a fourth portrait mask; and segmenting the fourth portrait mask to separate out the portrait and the background. That is, the processor 20 may also be used to implement step 0111, step 0112, step 0113, step 0114, and step 0115.
Specifically, a first portrait mask of the input image can be obtained through FCN, U-net, DeepLabV3, or the like. Connected-domain analysis is then carried out on the first portrait mask to detect the area of each portrait region: portraits whose area is smaller than a set threshold are treated as interfering portraits, and falsely detected portraits (such as sculptures or other portrait-like objects) are also treated as interfering portraits. The interfering portraits are removed to obtain the second portrait mask, which avoids their influence and makes the resulting portrait mask more accurate.
Furthermore, the second portrait mask is feathered, for example by guided filtering, to obtain the third portrait mask; feathering reduces noise in the second portrait mask and enriches its edge information. The third portrait mask is then expanded outward by preset pixel values, that is, dilated outward by a preset number of pixels, to obtain the fourth portrait mask. The preset pixel values may be set according to user requirements or may be preset fixed values; the expansion facilitates fusion of the background and the portrait and further preserves the edge information of the portrait. The fourth portrait mask may then be segmented by a segmentation algorithm to obtain the portrait and the background. In this way, the edge information of the finally obtained portrait and background is complete, which helps improve image sharpness.
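As a rough illustration of steps 0112 to 0114, the sketch below removes small connected components, feathers the mask, and dilates it by a preset number of pixels; the area threshold and expansion width are assumed example values, and a Gaussian blur stands in for the guided-filter feathering mentioned above.

```python
import cv2
import numpy as np

def refine_portrait_mask(first_mask: np.ndarray, min_area: int = 2000,
                         expand_px: int = 10) -> np.ndarray:
    # Step 0112: connected-domain analysis; drop small (interfering) portraits.
    binary = (first_mask > 0.5).astype(np.uint8)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    second = np.zeros_like(binary)
    for i in range(1, n):                      # label 0 is the image background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            second[labels == i] = 1

    # Step 0113: feather the mask edges (guided filtering in the disclosure;
    # a Gaussian blur is used here as a simple stand-in).
    third = cv2.GaussianBlur(second.astype(np.float32), (15, 15), 0)

    # Step 0114: expand the mask outward by a preset number of pixels.
    kernel = np.ones((2 * expand_px + 1, 2 * expand_px + 1), np.uint8)
    fourth = cv2.dilate((third > 0).astype(np.uint8), kernel)
    return fourth
```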
Referring to fig. 6 and 7, in some embodiments, step 0111 includes the following steps:
01111: cutting a preset region to be amplified in an input image to obtain an image to be processed;
01112: the method comprises the steps of sampling an image to be processed according to a first scaling to obtain a down-sampled image, wherein the first scaling is determined according to the size of an area to be amplified; and
01113: a downsampled portrait mask of the downsampled image is acquired as a first portrait mask.
In some embodiments, the obtaining unit 111 includes a cropping sub-unit 1111, a down-sampling sub-unit 1112, and an acquiring sub-unit 1113. The cropping sub-unit 1111 may be configured to crop a preset region to be enlarged in the input image to obtain an image to be processed; the down-sampling sub-unit 1112 may be configured to down-sample the image to be processed according to a first scaling ratio to obtain a down-sampled image, the first scaling ratio being determined according to the size of the region to be enlarged; and the acquiring sub-unit 1113 may be configured to acquire a down-sampled portrait mask of the down-sampled image as the first portrait mask. That is, the cropping sub-unit 1111, the down-sampling sub-unit 1112, and the acquiring sub-unit 1113 may be respectively used to implement step 01111, step 01112, and step 01113.
In some embodiments, the processor 20 may be further configured to: crop a preset region to be enlarged in the input image to obtain an image to be processed; down-sample the image to be processed according to a first scaling ratio to obtain a down-sampled image, the first scaling ratio being determined according to the size of the region to be enlarged; and acquire a down-sampled portrait mask of the down-sampled image as the first portrait mask. That is, the processor 20 may also be used to implement step 01111, step 01112, and step 01113.
Specifically, the region to be enlarged in the input image is cropped to obtain the image to be processed. The region to be enlarged may be preset, or may be automatically generated according to the zoom factor when the photo is taken, which is not limited herein. The region to be enlarged may be a region that expands outward from the center of the input image, or may be any other region (including, for example, a region containing a portrait). By cropping the region to be enlarged from the input image, the image within that region can be given focused processing, improving the sharpness of this partial region.
Further, the down-sampling of the image to be processed according to the first scaling ratio may specifically be bicubic down-sampling, bilinear down-sampling, or the like. The first scaling ratio is determined according to the length or width of the region to be enlarged: the larger the length or width of the region to be enlarged, the smaller the first scaling ratio; the smaller the length or width, the larger the first scaling ratio. A portrait mask of the down-sampled image is then obtained as the first portrait mask, for example by means of DeepLabV3.
In one embodiment, the height of the input image is H and its width is W, and the parameters of the region to be enlarged are left, top, height, and width, where left and top are the coordinates of the top-left vertex of the region to be enlarged, height is its height, and width is its width. The first scaling ratio is determined according to the height or width of the region and the input frame size of DeepLabV3, and the image to be processed is down-sampled bicubically so that its size matches the input frame size of DeepLabV3. Scaling the image to be processed by bicubic down-sampling improves processing speed.
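A minimal sketch of the cropping and down-sampling described above is given below, assuming the region is specified as (left, top, height, width) and that model_input_size stands in for the input size expected by the segmentation network; deriving the first scaling ratio with min() is an assumption made for illustration.

```python
import cv2
import numpy as np

def crop_and_downsample(input_image: np.ndarray, region, model_input_size):
    # Step 01111: cut the preset region to be enlarged out of the input image.
    left, top, height, width = region
    to_process = input_image[top:top + height, left:left + width]

    # Step 01112: choose the first scaling ratio from the region size and the
    # segmentation network's input size, then bicubic down-sample.
    target_h, target_w = model_input_size
    first_ratio = min(target_h / height, target_w / width)
    down = cv2.resize(to_process,
                      (round(width * first_ratio), round(height * first_ratio)),
                      interpolation=cv2.INTER_CUBIC)
    return down, first_ratio
```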
Referring to fig. 8 to 9, in some embodiments, step 0114 includes the following steps:
01141: up-sampling the fourth portrait mask according to the first scaling so as to obtain an up-sampled portrait mask; and
01142: the upsampled portrait mask is segmented to obtain a background and a portrait.
In some embodiments, the segmentation unit 115 includes an up-sampling sub-unit 1141 and a segmentation sub-unit 1142. The up-sampling sub-unit 1141 may be configured to up-sample the fourth portrait mask according to the first scaling ratio to obtain an up-sampled portrait mask, and the segmentation sub-unit 1142 may be configured to segment the up-sampled portrait mask to obtain the background and the portrait. That is, the up-sampling sub-unit 1141 may be used to implement step 01141, and the segmentation sub-unit 1142 may be used to implement step 01142.
In some embodiments, the processor 20 is further configured to: up-sampling the fourth portrait mask according to the first scaling so as to obtain an up-sampled portrait mask; and segmenting the up-sampled portrait mask to obtain a background and a portrait. That is, the processor 20 is also configured to implement step 01141 and step 01142.
Specifically, since the image to be processed is down-sampled according to the first scaling ratio in step 01112, the fourth portrait mask needs to be up-sampled to restore the original size, yielding the up-sampled portrait mask. Bicubic up-sampling, bilinear up-sampling, or other processing manners may be applied to the fourth portrait mask, which is not limited herein. It will be appreciated that the size of the up-sampled portrait mask is substantially the same as the size of the image to be processed. After the up-sampled portrait mask is obtained, it can be segmented by a neural network model or another model to separate out the background and the portrait, so that the finally obtained portrait and background have rich edge information.
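For illustration, the up-sampling step might look like the following sketch, assuming bicubic interpolation is used to return the mask to the size of the image to be processed (which is equivalent to scaling back up by the first scaling ratio).

```python
import cv2
import numpy as np

def upsample_mask(fourth_mask: np.ndarray, to_process_size) -> np.ndarray:
    # Step 01141: undo the earlier down-sampling so the mask matches the
    # image to be processed again (bicubic here; bilinear would also work).
    h, w = to_process_size
    return cv2.resize(fourth_mask.astype(np.float32), (w, h),
                      interpolation=cv2.INTER_CUBIC)
```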
Referring to fig. 10 and 11, in some embodiments, the first training model includes a plurality of scaling models with different multiples, and step 012 includes the following steps:
0121: selecting a scaling model with a multiple corresponding to a second scaling as a background scaling model according to a preset first mapping relation, wherein the second scaling is determined according to a region to be amplified and an input image;
0122: determining a first area to be cut according to the second scaling and the area to be amplified;
0123: zooming the background by using a background zooming model to obtain a background to be cut;
0124: cutting a first area to be cut out from the background to be cut out to serve as a cutting background; and
0125: and adjusting the size of the cutting background according to the relation between the size of the cutting background and the size of the input image to obtain the target background.
In some embodiments, the first reconstruction module 12 comprises a first selection unit 121, a first determination unit 122, a first scaling unit 123, a first clipping unit 124, and a first adjustment unit 125, which may be used to implement steps 0121, 0122, 0123, 0124, and 0125, respectively. That is, the first selection unit 121 may be configured to select, according to a preset first mapping relationship, the scaling model of the multiple corresponding to the second scaling ratio as the background scaling model; the first determination unit 122 may be configured to determine a first region to be cut according to the second scaling ratio and the region to be enlarged; the first scaling unit 123 may be configured to scale the background by using the background scaling model to obtain a background to be cut; the first clipping unit 124 may be configured to cut out the first region to be cut from the background to be cut as a cutting background; and the first adjustment unit 125 may be configured to adjust the size of the cutting background according to the relation between the size of the cutting background and the size of the input image to obtain the target background.
In some embodiments, the processor 20 is further configured to: selecting a scaling model with a multiple corresponding to a second scaling as a background scaling model according to a preset first mapping relation, wherein the second scaling is determined according to a region to be amplified and an input image; determining a first region to be cut according to the second scaling and the region to be amplified; zooming the background by using a background zooming model to obtain a background to be cut; cutting out a first area to be cut from the background to be cut to serve as a cutting background; and adjusting the size of the cutting background according to the relation between the size of the cutting background and the size of the input image to obtain the target background. That is, the processor 20 is also used to implement step 0121, step 0122, step 0123, step 0124, and step 0125.
Specifically, because the causes of background sharpness reduction are relatively simple compared with those of the portrait, high-resolution data can be acquired and the corresponding low-resolution data can be simulated with auxiliary means such as down-sampling and hand-designed blur kernels. Training and testing data pairs are formed, and training and testing are then performed; a plurality of scaling models of different multiples can be obtained during training, for example models of one, two, three, four, five, six, or more multiples. The first mapping relation includes the relation between the second scaling ratio and the scaling models, so the scaling model of the corresponding multiple can be found through the second scaling ratio. In one example, in the first mapping relation, the scaling model whose multiple is closest to the second scaling ratio is selected as the background scaling model; for instance, if the second scaling ratio is 2.7, the triple (3x) scaling model is selected as the background scaling model.
The second scaling ratio may be determined according to the size of the region to be enlarged and the size of the input image. For example, if the height of the region to be enlarged is h and its width is w, and the height of the input image is H and its width is W, the second scaling ratio may be H/h or W/w. The region of the scaled background that needs to be cut is the first region to be cut, and its size and position can be determined through the second scaling ratio and the region to be enlarged; for example, the region to be enlarged is scaled according to the second scaling ratio to obtain the first region to be cut. Alternatively, after the background is enlarged by the corresponding multiple, if its size does not satisfy a preset condition, the background needs to be cut so as to meet the preset condition (for example, the convolution layers may require the background size to satisfy n·(A × B)); the first region to be cut is then determined according to the second scaling ratio and the region to be enlarged so that the size of the background within the first region to be cut satisfies the preset condition.
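For concreteness, the second scaling ratio and the first region to be cut might be computed as in the following sketch; taking the larger of the height and width ratios is an assumption, since the disclosure allows either H/h or W/w.

```python
def second_scaling_ratio(input_h, input_w, region_h, region_w):
    # H/h or W/w; the larger of the two is taken here so the enlarged region
    # covers the full input frame (an assumption; either ratio may be used).
    return max(input_h / region_h, input_w / region_w)

def first_crop_region(region, ratio):
    # Scale the (left, top, height, width) rectangle of the region to be
    # enlarged by the chosen ratio to locate the first region to be cut.
    left, top, height, width = region
    return (round(left * ratio), round(top * ratio),
            round(height * ratio), round(width * ratio))
```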
Further, the background is scaled by using the background scaling model to obtain the background to be cut; the first region to be cut is then cut out of the background to be cut as the cutting background; and finally, the size of the cutting background is adjusted according to the relation between the size of the cutting background and the size of the input image to obtain the target background. For example, if the size of the cutting background is the same as the size of the input image, no adjustment is needed; if the size of the cutting background differs from the size of the input image, interpolation processing (such as bicubic interpolation or bilinear interpolation) is performed on the cutting background to adjust its size to be the same as the size of the input image. This avoids the problem of the finally obtained target image having a size inconsistent with that of the input image.
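The whole background reconstruction path of steps 0121 to 0125 can then be sketched as follows, assuming scaling_models is a dictionary mapping integer multiples to trained super-resolution models and crop_region is given as (left, top, height, width); the nearest-multiple selection mirrors the 2.7-to-3x example above.

```python
import cv2
import numpy as np

def reconstruct_background(background: np.ndarray, scaling_models: dict,
                           second_ratio: float, crop_region, input_size):
    # Step 0121: pick the scaling model whose multiple is closest to the
    # second scaling ratio (e.g. 2.7 -> the 3x model).
    multiple = min(scaling_models, key=lambda m: abs(m - second_ratio))
    model = scaling_models[multiple]

    # Step 0123: scale the background to obtain the background to be cut.
    to_cut = model(background)

    # Steps 0122/0124: cut the first region to be cut out as the cut background.
    left, top, height, width = crop_region
    cut = to_cut[top:top + height, left:left + width]

    # Step 0125: interpolate to the input size if the sizes differ.
    H, W = input_size
    if cut.shape[:2] != (H, W):
        cut = cv2.resize(cut, (W, H), interpolation=cv2.INTER_CUBIC)
    return cut
```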
Referring to fig. 12, a specific process of reconstructing the background in the VDSR manner is as follows. First, the background is processed sequentially by a 3 × 3 convolution layer, then by 3 × 3 convolution layers with a first dilation coefficient, a second dilation coefficient, and a third dilation coefficient, then again by 3 × 3 convolution layers with the first, second, and third dilation coefficients, followed by a Subpixel layer. Second, after the background is processed by a 3 × 3 convolution layer, 1 × 1 convolution-layer processing and Subpixel-layer processing are carried out in sequence. Finally, the target background is output after the Subpixel-layer processing. Because the VDSR network is deeper, it can have a larger receptive field; meanwhile, VDSR zero-pads the image before each convolution, which keeps all feature maps and the final output image consistent in size and avoids the feature maps shrinking through successive convolutions. The resolution and sharpness of the finally obtained target background are therefore higher.
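A loose PyTorch sketch of such a network is shown below; the channel counts, dilation rates, and the additive combination of the two branches are assumptions made for illustration, since the description above only fixes the overall ordering of the layers.

```python
import torch
import torch.nn as nn

class DilatedVDSRLike(nn.Module):
    def __init__(self, channels: int = 64, scale: int = 3,
                 dilations=(1, 2, 4, 1, 2, 4)):
        super().__init__()
        # Main branch: a 3x3 conv followed by 3x3 convs with varying dilation
        # rates; padding equal to the dilation keeps the feature-map size
        # constant, mirroring the zero-padding noted in the description.
        layers = [nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for d in dilations:
            layers += [nn.Conv2d(channels, channels, 3, padding=d, dilation=d),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 3 * scale * scale, 3, padding=1)]
        self.main = nn.Sequential(*layers)

        # Second branch: 3x3 conv then 1x1 conv, as in the description.
        self.skip = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3 * scale * scale, 1))

        # Sub-pixel (PixelShuffle) layer produces the up-scaled target background.
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, x):
        return self.shuffle(self.main(x) + self.skip(x))
```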
Referring to fig. 13 and 14, in some embodiments, step 013 includes the following steps:
0131: selecting a scaling model corresponding to a preset third scaling multiple as a portrait scaling model according to a preset second mapping relation;
0132: determining a second region to be cut according to the third scaling;
0133: zooming the portrait by using the portrait zooming model to obtain a portrait to be cut;
0134: cutting out a second area to be cut out from the portrait to be cut out to be used as a portrait to be cut out; and
0135: and adjusting the size of the cut portrait according to the relation between the size of the cut portrait and the size of the input image to obtain the target portrait.
In some embodiments, the second reconstruction module 13 includes a second selecting unit 131, a second determining unit 132, a second scaling unit 133, a second cutting unit 134, and a second adjusting unit 135, which may be used to implement steps 0131, 0132, 0133, 0134, and 0135, respectively. That is, the second selecting unit 131 may be configured to select, according to the preset second mapping relationship, the scaling model corresponding to the preset third scaling multiple as the portrait scaling model; the second determining unit 132 may be configured to determine the second region to be cut according to the third scaling ratio; the second scaling unit 133 may be configured to scale the portrait by using the portrait scaling model to obtain a portrait to be cut; the second cutting unit 134 may be configured to cut out the second region to be cut from the portrait to be cut as a cut portrait; and the second adjusting unit 135 may be configured to adjust the size of the cut portrait according to the relation between the size of the cut portrait and the size of the input image to obtain the target portrait.
In some embodiments, the processor 20 may be further configured to: selecting a scaling model corresponding to a preset third scaling ratio by multiple as a portrait scaling model according to a preset second mapping relation; determining a second region to be cut according to the third scaling; zooming the portrait by using the portrait zooming model to obtain a portrait to be cut; cutting out a second area to be cut out from the portrait to be cut out to be used as a portrait to be cut out; and adjusting the size of the cut portrait according to the relation between the size of the cut portrait and the size of the input image to obtain the target portrait. That is, processor 20 may also be used to implement step 0131, step 0132, step 0133, step 0134, and step 0135.
Specifically, since the reasons for the decrease in portrait sharpness are complicated, low-quality pictures under various conditions (e.g., different brightness, different subject motions) and the corresponding high-quality pictures need to be captured. The low-quality pictures are then aligned with the corresponding high-quality pictures through Scale-Invariant Feature Transform (SIFT), a preset third scaling ratio is obtained as the proportional relation between the low-quality pictures and the corresponding high-quality pictures, the ratio between the low-quality and high-quality pictures is adjusted, training and testing data pairs are formed, and training and testing are then performed. Multiple scaling models of different multiples may be obtained during the training process, for example models of one, two, three, four, five, six, or more multiples. The third scaling ratio may be a preset scaling value in the second training model, or may be an empirical value obtained by continuously training on portraits.
Further, the second mapping relationship includes the relation between the third scaling ratio and the scaling models, so the multiple of the scaling model to be adopted as the portrait scaling model can be determined according to the third scaling ratio. The size and position of the second region to be cut after the portrait is scaled are also determined according to the third scaling ratio, so that the scaled portrait can be conveniently cropped. For example, according to the mapping relation between the third scaling ratio and the second region to be cut, the size of the corresponding second region to be cut can be determined through the third scaling ratio; alternatively, the region to be enlarged is scaled according to the third scaling ratio to obtain the second region to be cut. Here, step 0131 and step 0132 may be executed simultaneously or separately, which is not limited herein. The portrait is scaled by the portrait scaling model to obtain the portrait to be cut; the second region to be cut is then cut out of the portrait to be cut as the cut portrait, and finally the size of the cut portrait is compared with the size of the input image.
Further, if the size of the cut portrait is equal to that of the input image, the size of the cut portrait does not need to be adjusted, and the cut portrait is the target portrait at the moment; if the size of the cut portrait is not equal to the size of the input image, interpolation processing (such as bilinear interpolation processing, bicubic interpolation processing and the like) needs to be performed on the cut portrait, the size of the cut portrait is adjusted to be the same as the size of the input image, and the adjusted cut portrait is the target portrait.
Referring to fig. 15, a specific process of reconstructing the portrait with the ESRGAN network structure may include: processing the portrait sequentially with a first 3 × 3 convolution layer, a second 3 × 3 convolution layer, a U-net block, a third 3 × 3 convolution layer, a 1 × 1 convolution layer, and a Subpixel layer to obtain the target portrait.
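A corresponding PyTorch sketch is given below; the UNetBlock placeholder, channel counts, and scale factor are assumptions, and a full implementation would replace the placeholder with a real encoder-decoder with skip connections.

```python
import torch
import torch.nn as nn

class UNetBlock(nn.Module):
    # Placeholder for the U-net stage; a real implementation would use an
    # encoder-decoder with skip connections rather than this residual pair.
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1))

    def forward(self, x):
        return x + self.body(x)

class PortraitGeneratorLike(nn.Module):
    # Follows the listed order: two 3x3 convs, a U-net block, a third 3x3
    # conv, a 1x1 conv, then a sub-pixel layer producing the target portrait.
    def __init__(self, channels: int = 64, scale: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            UNetBlock(channels),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3 * scale * scale, 1),
            nn.PixelShuffle(scale))

    def forward(self, x):
        return self.net(x)
```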
Referring to fig. 16 and 17, in some embodiments, step 014 includes the steps of:
0141: performing interpolation processing on the up-sampling portrait mask so as to enable the size of the up-sampling portrait mask to be the same as that of the input image;
0142: performing Gaussian blur on the up-sampling portrait mask subjected to interpolation processing to obtain a fusion parameter; and
0143: and fusing the target background and the target portrait according to a preset fusion rule and a preset fusion parameter to obtain a target image.
In certain embodiments, the fusion module 14 includes an interpolation unit 141, a blurring unit 142, and a fusion unit 143, which may be used to implement steps 0141, 0142, and 0143, respectively. That is, the interpolation unit 141 may be configured to perform interpolation processing on the up-sampled portrait mask so that its size is the same as the size of the input image; the blurring unit 142 may be configured to perform Gaussian blur on the interpolated up-sampled portrait mask to obtain a fusion parameter; and the fusion unit 143 may be configured to fuse the target background and the target portrait according to a preset fusion rule and the fusion parameter to obtain the target image.
In some embodiments, the processor 20 may be further configured to: perform interpolation processing on the up-sampled portrait mask so that its size is the same as the size of the input image; perform Gaussian blur on the interpolated up-sampled portrait mask to obtain a fusion parameter; and fuse the target background and the target portrait according to a preset fusion rule and the fusion parameter to obtain the target image. That is, the processor 20 may also be used to implement steps 0141, 0142, and 0143.
Specifically, the up-sampled portrait mask is obtained in step 01141. To avoid a mismatch between the size of the up-sampled portrait mask and the size of the input image, interpolation processing may be performed on the up-sampled portrait mask, for example scaling it with algorithms such as bilinear interpolation or bicubic interpolation, so that its size is the same as the size of the input image. Gaussian blur is then performed on the interpolated up-sampled portrait mask to obtain the fusion parameter, which can be used to fuse the target portrait and the target background.
More specifically, after gaussian blurring is performed on the upsampled portrait mask, a gaussian blur value can be obtained, and the value can be used as a fusion parameter. Or performing Gaussian blur on the up-sampling portrait mask for multiple times through a pre-trained neural network model, and taking a numerical value with the best effect as a fusion parameter. The fusion parameter may specifically be any value between 0 and 1, for example, a value such as 0.3, 0.4, 0.48, 0.5, 0.55, 0.65, 0.75, 0.8, 0.9, and the like, and may represent a fusion ratio between the target background and the target portrait.
The target background and the target portrait are then fused according to the preset fusion rule and the fusion parameter to obtain the target image. Specifically, a fusion model may be provided in which the fusion rule is preset; the fusion rule describes the relation between the target image on one side and the fusion parameter, the target background, and the target portrait on the other. The target portrait, the target background, and the fusion parameter are input into the fusion model, and the target image is output. Obtaining the fusion parameter through Gaussian blur allows the target background and the target portrait to be fused better.
In one embodiment, Gaussian blur is performed on the up-sampled portrait mask to obtain a fusion parameter mask_fuse. Denoting the target portrait SR_port, the target background SR_background, and the target image SR, the fusion rule is SR = mask_fuse × SR_port + (1 - mask_fuse) × SR_background.
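Putting the fusion steps together, a sketch of steps 0141 to 0143 under the above rule might read as follows; the blur sigma is an illustrative value rather than one specified by this disclosure.

```python
import cv2
import numpy as np

def fuse(target_background: np.ndarray, target_portrait: np.ndarray,
         upsampled_mask: np.ndarray, blur_sigma: float = 5.0) -> np.ndarray:
    h, w = target_background.shape[:2]

    # Step 0141: interpolate the up-sampled portrait mask to the input size.
    mask = cv2.resize(upsampled_mask.astype(np.float32), (w, h),
                      interpolation=cv2.INTER_LINEAR)

    # Step 0142: Gaussian blur turns the hard mask into the fusion parameter
    # mask_fuse, a soft per-pixel weight in [0, 1].
    mask_fuse = cv2.GaussianBlur(mask, (0, 0), blur_sigma)[..., None]

    # Step 0143: SR = mask_fuse * SR_port + (1 - mask_fuse) * SR_background.
    sr = (mask_fuse * target_portrait.astype(np.float32)
          + (1.0 - mask_fuse) * target_background.astype(np.float32))
    return np.clip(sr, 0, 255).astype(np.uint8)
```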
Referring to fig. 18, an embodiment of the present application provides one or more non-transitory computer-readable storage media 300 containing a computer program 301. When the computer program 301 is executed by one or more processors 20, the processors 20 perform the image processing method of any of the above embodiments.
For example, referring to fig. 3, the computer program 301, when executed by the one or more processors 20, causes the processors 20 to perform the steps of:
011: separating a portrait and a background of an input image;
012: reconstructing the background based on a preset first training model to obtain a target background;
013: reconstructing the portrait to obtain a target portrait based on a preset second training model, wherein the second training model is different from the first training model; and
014: and fusing the target background and the target portrait to obtain a target image.
For another example, referring to fig. 16, when the computer program 301 is executed by the one or more processors 20, the processor 20 is caused to perform the steps of:
0141: performing interpolation processing on the up-sampling portrait mask to enable the size of the up-sampling portrait mask to be the same as that of the input image;
0142: performing Gaussian blur on the up-sampling portrait mask subjected to interpolation processing to obtain a fusion parameter; and
0143: and fusing the target background and the target portrait according to a preset fusion rule and a preset fusion parameter to obtain a target image.
In the description herein, reference to the description of the terms "certain embodiments," "one embodiment," "some embodiments," "illustrative embodiments," "examples," "specific examples," or "some examples" means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Furthermore, the terms "first", "second" and "first" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or to implicitly indicate the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one of the feature. In the description of the present application, "a plurality" means at least two, e.g., two, three, unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations of the above embodiments may be made by those of ordinary skill in the art within the scope of the present application, which is defined by the claims and their equivalents.

Claims (11)

1. An image processing method, comprising:
separating a portrait and a background of an input image;
reconstructing the background based on a preset first training model to obtain a target background;
reconstructing the portrait to obtain a target portrait based on a preset second training model, wherein the second training model is different from the first training model; and
fusing the target background and the target portrait to obtain a target image;
the second training model comprises a plurality of scaling models of different multiples, and reconstructing the portrait based on the preset second training model to obtain the target portrait comprises:
selecting, according to a preset second mapping relation, the scaling model whose multiple corresponds to a preset third scaling ratio as a portrait scaling model;
determining a second region to be cut according to the third scaling ratio;
scaling the portrait by using the portrait scaling model to obtain a portrait to be cut;
cutting the second region to be cut out of the portrait to be cut to serve as a cut portrait; and
adjusting the size of the cut portrait according to the relation between the size of the cut portrait and the size of the input image to obtain the target portrait;
the first training model comprises a plurality of scaling models of different multiples, and reconstructing the background based on the preset first training model to obtain the target background comprises:
selecting, according to a preset first mapping relation, the scaling model whose multiple corresponds to a second scaling ratio as a background scaling model, wherein the second scaling ratio is determined according to a region to be enlarged and the input image;
determining a first region to be cut according to the second scaling ratio and the region to be enlarged;
scaling the background by using the background scaling model to obtain a background to be cut;
cutting the first region to be cut out of the background to be cut to serve as a cut background; and
adjusting the size of the cut background according to the relation between the size of the cut background and the size of the input image to obtain the target background.
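For illustration only, and not as part of the claims: the sketch below walks through the claim-1 pipeline in Python, standing in the trained scaling models with plain bicubic resizes; the scaling factors, crop boxes, mapping relation, and the simple alpha blend at the end are hypothetical placeholders rather than the patented models or fusion rule.

```python
import cv2
import numpy as np

# Hypothetical stand-ins for the trained scaling models, keyed by multiple.
SCALE_MODELS = {
    2: lambda im: cv2.resize(im, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC),
    4: lambda im: cv2.resize(im, None, fx=4, fy=4, interpolation=cv2.INTER_CUBIC),
}

def reconstruct(layer, factor, crop_box, out_size):
    """Scale a separated layer, cut out the region of interest, resize to the input size."""
    scaled = SCALE_MODELS[factor](layer)       # select the model by the (assumed) mapping
    x, y, w, h = crop_box                      # region to be cut, in scaled coordinates
    cropped = scaled[y:y + h, x:x + w]
    return cv2.resize(cropped, out_size)       # adjust to the input-image size

def process(image, portrait_mask, bg_factor, pt_factor, bg_box, pt_box):
    """Separate, reconstruct each layer, and fuse; the alpha blend is a placeholder rule."""
    h, w = image.shape[:2]
    alpha = portrait_mask.astype(np.float32)[..., None]   # mask assumed in {0, 1}
    portrait = image * alpha
    background = image * (1.0 - alpha)
    target_pt = reconstruct(portrait, pt_factor, pt_box, (w, h))
    target_bg = reconstruct(background, bg_factor, bg_box, (w, h))
    return (alpha * target_pt + (1.0 - alpha) * target_bg).astype(np.uint8)
```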
2. The image processing method according to claim 1, wherein the separating the portrait and the background of the input image comprises:
acquiring a first portrait mask of the input image;
removing interfering portraits from the first portrait mask to obtain a second portrait mask;
performing feathering processing on the second portrait mask to obtain a third portrait mask;
expanding the third portrait mask outwards by a preset number of pixels to obtain a fourth portrait mask; and
segmenting the fourth portrait mask to separate the portrait and the background.
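A minimal sketch of this claim-2 mask refinement, using OpenCV connected components, Gaussian feathering, and dilation as stand-ins; the largest-component heuristic and the kernel and pixel values are illustrative assumptions, not values taken from the patent.

```python
import cv2
import numpy as np

def refine_mask(first_mask, feather_ksize=15, expand_px=10):
    """first_mask: binary {0, 1} uint8 array, 1 where a portrait was detected."""
    # Remove interfering portraits: keep only the largest connected component.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(first_mask)
    if n > 1:
        main = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
        second_mask = np.where(labels == main, 255, 0).astype(np.uint8)
    else:
        second_mask = first_mask * 255
    # Feathering: soften the mask edge with a Gaussian blur (kernel size assumed).
    third_mask = cv2.GaussianBlur(second_mask, (feather_ksize, feather_ksize), 0)
    # Expand the mask outwards by a preset number of pixels via dilation.
    kernel = np.ones((2 * expand_px + 1, 2 * expand_px + 1), np.uint8)
    fourth_mask = cv2.dilate(third_mask, kernel)
    return fourth_mask

def split_layers(image, fourth_mask):
    """Use the refined mask as a soft alpha to separate portrait and background."""
    alpha = fourth_mask.astype(np.float32) / 255.0
    portrait = image.astype(np.float32) * alpha[..., None]
    background = image.astype(np.float32) * (1.0 - alpha[..., None])
    return portrait, background
```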
3. The image processing method according to claim 2, wherein the acquiring the first portrait mask of the input image comprises:
cutting out a preset region to be enlarged from the input image to obtain an image to be processed;
down-sampling the image to be processed according to a first scaling ratio to obtain a down-sampled image, wherein the first scaling ratio is determined according to the size of the region to be enlarged; and
acquiring a down-sampled portrait mask of the down-sampled image to serve as the first portrait mask;
wherein the segmenting the fourth portrait mask to separate the portrait and the background comprises:
up-sampling the fourth portrait mask according to the first scaling ratio to obtain an up-sampled portrait mask; and
segmenting the up-sampled portrait mask to obtain the background and the portrait.
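Under the same caveat, a short sketch of claim 3's crop-and-downsample flow; here `segment_portrait` and `first_scale` are hypothetical placeholders for the segmentation network and the scaling ratio, which the claim leaves to the implementation.

```python
import cv2

def masked_region(image, roi, first_scale, segment_portrait):
    """Crop the region to be enlarged, segment it at reduced resolution, upsample the mask."""
    x, y, w, h = roi                                   # preset region to be enlarged
    to_process = image[y:y + h, x:x + w]               # image to be processed
    small = cv2.resize(to_process, None,
                       fx=1.0 / first_scale, fy=1.0 / first_scale)
    first_mask = segment_portrait(small)               # down-sampled portrait mask
    # ...the claim-2 refinement (largest component, feathering, dilation) runs here...
    up_mask = cv2.resize(first_mask, (w, h), interpolation=cv2.INTER_LINEAR)
    return up_mask
```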
4. The image processing method according to claim 3, wherein the fusing the target background and the target portrait to obtain a target image comprises:
performing interpolation processing on the up-sampled portrait mask so that the size of the up-sampled portrait mask is the same as that of the input image;
performing Gaussian blurring on the interpolated up-sampled portrait mask to obtain a fusion parameter; and
fusing the target background and the target portrait according to a preset fusion rule and the fusion parameter to obtain the target image.
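Again purely as illustration, a sketch of the claim-4 fusion: the mask is assumed to be in the 0-255 range, and the blur kernel size and the per-pixel linear blend are assumptions standing in for the preset fusion rule.

```python
import cv2
import numpy as np

def fuse(target_background, target_portrait, up_mask, input_size, blur_ksize=21):
    """Blend the reconstructed layers using a softened mask as the fusion parameter."""
    w, h = input_size
    # Interpolate the mask to the input-image size.
    mask = cv2.resize(up_mask, (w, h), interpolation=cv2.INTER_LINEAR)
    # Gaussian blur yields a soft per-pixel weight (the fusion parameter).
    weight = cv2.GaussianBlur(mask.astype(np.float32), (blur_ksize, blur_ksize), 0)
    weight = np.clip(weight / 255.0, 0.0, 1.0)[..., None]
    # Assumed fusion rule: per-pixel linear blend of portrait over background.
    fused = weight * target_portrait + (1.0 - weight) * target_background
    return fused.astype(np.uint8)
```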
5. An image processing apparatus, comprising:
a separation module for separating a portrait and a background of an input image;
a first reconstruction module for reconstructing the background based on a preset first training model to obtain a target background;
a second reconstruction module for reconstructing the portrait based on a preset second training model to obtain a target portrait, wherein the second training model is different from the first training model; and
a fusion module for fusing the target background and the target portrait to obtain a target image;
wherein the second training model comprises a plurality of scaling models of different multiples, and the second reconstruction module comprises:
a second selection unit for selecting, according to a preset second mapping relation, the scaling model whose multiple corresponds to a preset third scaling ratio as a portrait scaling model;
a second determining unit for determining a second region to be cut according to the third scaling ratio;
a second scaling unit for scaling the portrait by using the portrait scaling model to obtain a portrait to be cut;
a second cutting unit for cutting the second region to be cut out of the portrait to be cut to serve as a cut portrait; and
a second adjusting unit for adjusting the size of the cut portrait according to the relation between the size of the cut portrait and the size of the input image to obtain the target portrait;
wherein the first reconstruction module comprises:
a first selection unit for selecting, according to a preset first mapping relation, the scaling model whose multiple corresponds to a second scaling ratio as a background scaling model, wherein the second scaling ratio is determined according to a region to be enlarged and the input image;
a first determining unit for determining a first region to be cut according to the second scaling ratio and the region to be enlarged;
a first scaling unit for scaling the background by using the background scaling model to obtain a background to be cut;
a first cutting unit for cutting the first region to be cut out of the background to be cut to serve as a cut background; and
a first adjusting unit for adjusting the size of the cut background according to the relation between the size of the cut background and the size of the input image to obtain the target background.
6. An image processing terminal, comprising a processor, wherein the processor is configured to perform:
separating a portrait and a background of an input image;
reconstructing the background based on a preset first training model to obtain a target background;
reconstructing the portrait based on a preset second training model to obtain a target portrait, wherein the second training model is different from the first training model; and
fusing the target background and the target portrait to obtain a target image;
wherein the second training model comprises a plurality of scaling models of different multiples, and the processor is further configured to perform:
selecting, according to a preset second mapping relation, the scaling model whose multiple corresponds to a preset third scaling ratio as a portrait scaling model;
determining a second region to be cut according to the third scaling ratio;
scaling the portrait by using the portrait scaling model to obtain a portrait to be cut;
cutting the second region to be cut out of the portrait to be cut to serve as a cut portrait; and
adjusting the size of the cut portrait according to the relation between the size of the cut portrait and the size of the input image to obtain the target portrait;
wherein the first training model comprises a plurality of scaling models of different multiples, and the processor is further configured to perform:
selecting, according to a preset first mapping relation, the scaling model whose multiple corresponds to a second scaling ratio as a background scaling model, wherein the second scaling ratio is determined according to a region to be enlarged and the input image;
determining a first region to be cut according to the second scaling ratio and the region to be enlarged;
scaling the background by using the background scaling model to obtain a background to be cut;
cutting the first region to be cut out of the background to be cut to serve as a cut background; and
adjusting the size of the cut background according to the relation between the size of the cut background and the size of the input image to obtain the target background.
7. The image processing terminal of claim 6, wherein the processor is further configured to:
acquiring a first portrait mask of the input image;
removing interfering portraits from the first portrait mask to obtain a second portrait mask;
performing feathering processing on the second portrait mask to obtain a third portrait mask;
expanding the third portrait mask outwards by a preset number of pixels to obtain a fourth portrait mask; and
segmenting the fourth portrait mask to separate the portrait and the background.
8. The image processing terminal of claim 7, wherein the processor is further configured to:
cutting out a preset region to be enlarged from the input image to obtain an image to be processed;
down-sampling the image to be processed according to a first scaling ratio to obtain a down-sampled image, wherein the first scaling ratio is determined according to the size of the region to be enlarged; and
acquiring a down-sampled portrait mask of the down-sampled image to serve as the first portrait mask;
wherein the processor is further configured to perform:
up-sampling the fourth portrait mask according to the first scaling ratio to obtain an up-sampled portrait mask; and
segmenting the up-sampled portrait mask to obtain the background and the portrait.
9. The image processing terminal of claim 8, wherein the first training model comprises a plurality of scaling models of different multiples, and the processor is further configured to perform:
selecting, according to a preset first mapping relation, the scaling model whose multiple corresponds to a second scaling ratio as a background scaling model, wherein the second scaling ratio is determined according to the region to be enlarged and the input image;
determining a first region to be cut according to the second scaling ratio and the region to be enlarged;
scaling the background by using the background scaling model to obtain a background to be cut;
cutting the first region to be cut out of the background to be cut to serve as a cut background; and
adjusting the size of the cut background according to the relation between the size of the cut background and the size of the input image to obtain the target background.
10. The image processing terminal of claim 9, wherein the processor is further configured to:
performing interpolation processing on the up-sampled portrait mask so that the size of the up-sampled portrait mask is the same as that of the input image;
performing Gaussian blurring on the interpolated up-sampled portrait mask to obtain a fusion parameter; and
fusing the target background and the target portrait according to a preset fusion rule and the fusion parameter to obtain the target image.
11. A non-transitory computer-readable storage medium storing a computer program which, when executed by one or more processors, implements the image processing method of any one of claims 1 to 4.
CN202010852014.1A 2020-08-21 2020-08-21 Image processing method and device, terminal and readable storage medium Active CN112001940B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010852014.1A CN112001940B (en) 2020-08-21 2020-08-21 Image processing method and device, terminal and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010852014.1A CN112001940B (en) 2020-08-21 2020-08-21 Image processing method and device, terminal and readable storage medium

Publications (2)

Publication Number Publication Date
CN112001940A CN112001940A (en) 2020-11-27
CN112001940B (en) 2023-04-07

Family

ID=73473991

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010852014.1A Active CN112001940B (en) 2020-08-21 2020-08-21 Image processing method and device, terminal and readable storage medium

Country Status (1)

Country Link
CN (1) CN112001940B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112465064A (en) * 2020-12-14 2021-03-09 合肥工业大学 Image identification method, device and equipment based on deep course learning
CN115330606A (en) * 2022-07-07 2022-11-11 荣耀终端有限公司 Model training method, image processing method, device and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108447020A (en) * 2018-03-12 2018-08-24 南京信息工程大学 A kind of face super-resolution reconstruction method based on profound convolutional neural networks
CN108961349A (en) * 2018-06-29 2018-12-07 广东工业大学 A kind of generation method, device, equipment and the storage medium of stylization image
CN110136144A (en) * 2019-05-15 2019-08-16 北京华捷艾米科技有限公司 A kind of image partition method, device and terminal device
CN111091521A (en) * 2019-12-05 2020-05-01 腾讯科技(深圳)有限公司 Image processing method and device, electronic equipment and computer readable storage medium

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7974489B2 (en) * 2007-05-30 2011-07-05 Avago Technologies Ecbu Ip (Singapore) Pte. Ltd. Buffer management for an adaptive buffer value using accumulation and averaging
CN102999901B (en) * 2012-10-17 2016-06-29 中国科学院计算技术研究所 Based on the processing method after the Online Video segmentation of depth transducer and system
CN109544453A (en) * 2018-11-16 2019-03-29 北京中竞鸽体育文化发展有限公司 Image adjusting method and device, electronic equipment, storage medium
CN110378235A (en) * 2019-06-20 2019-10-25 平安科技(深圳)有限公司 A kind of fuzzy facial image recognition method, device and terminal device
CN110378852A (en) * 2019-07-11 2019-10-25 北京奇艺世纪科技有限公司 Image enchancing method, device, computer equipment and storage medium
CN110348522B (en) * 2019-07-12 2021-12-07 创新奇智(青岛)科技有限公司 Image detection and identification method and system, electronic equipment, and image classification network optimization method and system
CN110428367B (en) * 2019-07-26 2023-04-14 北京小龙潜行科技有限公司 Image splicing method and device
CN110428378B (en) * 2019-07-26 2022-02-08 北京小米移动软件有限公司 Image processing method, device and storage medium
CN110428366B (en) * 2019-07-26 2023-10-13 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment and computer readable storage medium
CN110443230A (en) * 2019-08-21 2019-11-12 北京百度网讯科技有限公司 Face fusion method, apparatus and electronic equipment
US10593021B1 (en) * 2019-09-11 2020-03-17 Inception Institute of Artificial Intelligence, Ltd. Motion deblurring using neural network architectures
CN111028170B (en) * 2019-12-09 2023-11-24 Oppo广东移动通信有限公司 Image processing method, image processing apparatus, electronic device, and readable storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108447020A (en) * 2018-03-12 2018-08-24 南京信息工程大学 A kind of face super-resolution reconstruction method based on profound convolutional neural networks
CN108961349A (en) * 2018-06-29 2018-12-07 广东工业大学 A kind of generation method, device, equipment and the storage medium of stylization image
CN110136144A (en) * 2019-05-15 2019-08-16 北京华捷艾米科技有限公司 A kind of image partition method, device and terminal device
CN111091521A (en) * 2019-12-05 2020-05-01 腾讯科技(深圳)有限公司 Image processing method and device, electronic equipment and computer readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Video stitching algorithm with dynamic foreground separation under multiple cameras; Jia Kebin et al.; Journal of Beijing University of Technology; 2012-07-10; Vol. 38, No. 07; pp. 1057-1061 *

Also Published As

Publication number Publication date
CN112001940A (en) 2020-11-27

Similar Documents

Publication Publication Date Title
JP5762356B2 (en) Apparatus and method for depth reconstruction of dynamic scene based on focus
CN110428366B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN111598808B (en) Image processing method, device and equipment and training method thereof
CN112330574B (en) Portrait restoration method and device, electronic equipment and computer storage medium
CN112367459B (en) Image processing method, electronic device, and non-volatile computer-readable storage medium
CN112001940B (en) Image processing method and device, terminal and readable storage medium
JP2013501993A (en) Method and apparatus for supplying an image for display
CN111325692B (en) Image quality enhancement method, image quality enhancement device, electronic device, and readable storage medium
TW200926062A (en) Image generation method and apparatus, program therefor, and storage medium for storing the program
KR101028628B1 (en) Image texture filtering method, storage medium of storing program for executing the same and apparatus performing the same
CN110910330A (en) Image processing method, image processing apparatus, electronic device, and readable storage medium
Dutta Depth-aware blending of smoothed images for bokeh effect generation
CN111028170A (en) Image processing method, image processing apparatus, electronic device, and readable storage medium
CN112215906A (en) Image processing method and device and electronic equipment
CN107392986B (en) Image depth of field rendering method based on Gaussian pyramid and anisotropic filtering
Chang et al. Beyond camera motion blur removing: How to handle outliers in deblurring
CN111083359B (en) Image processing method and apparatus, electronic device, and computer-readable storage medium
CN110992284A (en) Image processing method, image processing apparatus, electronic device, and computer-readable storage medium
CN113240573B (en) High-resolution image style transformation method and system for local and global parallel learning
Yu et al. Continuous digital zooming of asymmetric dual camera images using registration and variational image restoration
CN110503603B (en) Method for obtaining light field refocusing image based on guide up-sampling
Shen et al. Viewing-distance aware super-resolution for high-definition display
CN111080543A (en) Image processing method and device, electronic equipment and computer readable storage medium
Hao et al. Super-Resolution Degradation Model: Converting High-Resolution Datasets to Optical Zoom Datasets
JP5713256B2 (en) Image processing apparatus, imaging apparatus, and image processing program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant