CN111127345A - Image processing method and device, electronic equipment and computer readable storage medium - Google Patents


Info

Publication number
CN111127345A
Authority
CN
China
Prior art keywords
image
repaired
portrait
preset
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911241409.1A
Other languages
Chinese (zh)
Other versions
CN111127345B (en)
Inventor
阎法典
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201911241409.1A
Publication of CN111127345A
Application granted
Publication of CN111127345B
Legal status: Active
Anticipated expiration

Classifications

    • G06T5/77
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Abstract

The application discloses an image processing method, an image processing apparatus, an electronic device, and a non-volatile computer-readable storage medium. The image processing method comprises the following steps: acquiring an image to be repaired having a portrait of a target user; detecting the relative movement condition between the target user and the electronic device; and when the relative movement condition meets a preset condition, performing restoration processing on the portrait area of the image to be repaired. According to the image processing method, the image processing apparatus, the electronic device, and the non-volatile computer-readable storage medium, the portrait area is repaired only when the relative movement condition between the target user and the electronic device meets the preset condition, rather than every time an image with a portrait is acquired, so that the definition of the image is guaranteed, the power consumption of the electronic device is greatly reduced, and the battery endurance of the electronic device is improved.

Description

Image processing method and device, electronic equipment and computer readable storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, an electronic device, and a non-volatile computer-readable storage medium.
Background
The ultra-clear portrait technology processes the portrait in an image with an image processing algorithm so that the portrait has richer detail and higher definition. Such ultra-clear portrait restoration consumes considerable power on the electronic device and can shorten its battery endurance.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device, an electronic device and a non-volatile computer readable storage medium.
The image processing method is used for the electronic equipment. The image processing method comprises the following steps: acquiring an image to be restored with a portrait of a target user; detecting a relative movement condition between the target user and the electronic equipment; and when the relative movement condition meets a preset condition, repairing the portrait area of the image to be repaired.
The image processing apparatus according to the embodiment of the present application is used for an electronic device. The image processing device comprises an acquisition module, a detection module and a restoration module. The acquisition module is used for acquiring an image to be restored with the portrait of the target user. The detection module is used for detecting the relative movement condition between the target user and the electronic equipment. And the repairing module is used for repairing the portrait area of the image to be repaired when the relative movement condition meets a preset condition.
The electronic equipment of the embodiment of the application comprises a shell and a processor. The processor is mounted on the housing, the processor being configured to: acquiring an image to be restored with a portrait of a target user; detecting a relative movement condition between the target user and the electronic equipment; and when the relative movement condition meets a preset condition, repairing the portrait area of the image to be repaired.
A non-transitory computer readable storage medium of an embodiment of the present application containing computer readable instructions which, when executed by a processor, cause the processor to perform the steps of: acquiring an image to be restored with a portrait of a target user; detecting a relative movement condition between the target user and the electronic equipment; and when the relative movement condition meets a preset condition, repairing the portrait area of the image to be repaired.
According to the image processing method, the image processing apparatus, the electronic device, and the non-volatile computer-readable storage medium of the embodiments of the application, the portrait area is repaired only when the relative movement condition between the target user and the electronic device meets the preset condition, rather than every time an image with a portrait is acquired, so that the definition of the image is guaranteed, the power consumption of the electronic device is greatly reduced, and the battery endurance of the electronic device is improved.
Additional aspects and advantages of embodiments of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application;
FIG. 2 is a schematic diagram of an image processing apparatus according to some embodiments of the present application;
FIG. 3 is a schematic view of an electronic device of some embodiments of the present application;
FIG. 4 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application;
FIG. 5 is a schematic diagram of an image processing apparatus according to some embodiments of the present application;
FIG. 6 is a schematic view of a scene of an image processing method according to some embodiments of the present application;
FIG. 7 is a schematic diagram of a face detection model in accordance with certain embodiments of the present application;
FIG. 8 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application;
FIG. 9 is a schematic diagram of an image processing apparatus according to some embodiments of the present application;
FIG. 10 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application;
FIG. 11 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application;
FIG. 12 is a schematic diagram of a repair module in an image processing apparatus according to some embodiments of the present application;
FIG. 13 is a schematic diagram of a second processing unit in an image processing apparatus according to some embodiments of the present application;
FIG. 14 is a schematic illustration of a face restoration model according to some embodiments of the present application;
FIG. 15 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application;
FIG. 16 is a schematic diagram of a repair module in an image processing apparatus according to some embodiments of the present application;
FIG. 17 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application;
FIG. 18 is a schematic diagram of a third acquisition unit in an image processing apparatus according to some embodiments of the present application;
FIG. 19 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application;
FIG. 20 is a schematic diagram of a third processing unit in an image processing apparatus according to some embodiments of the present application;
FIG. 21 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application;
FIG. 22 is a schematic diagram of the interaction of a non-volatile computer readable storage medium and a processor of certain embodiments of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below by referring to the drawings are exemplary only for the purpose of explaining the embodiments of the present application, and are not to be construed as limiting the embodiments of the present application.
Referring to fig. 1 and fig. 3, an image processing method is provided. The image processing method according to the embodiment of the present application may be applied to the electronic device 20. The image processing method comprises the following steps:
01: acquiring an image to be restored with a portrait of a target user;
02: detecting the relative movement condition between the target user and the electronic device 20; and
03: and when the relative movement condition meets a preset condition, performing restoration processing on the portrait area of the image to be restored.
Referring to fig. 2 and fig. 3, an image processing apparatus 10 is further provided. The image processing apparatus 10 according to the embodiment of the present application can be used in the electronic device 20. The image processing method according to the present embodiment can be implemented by the image processing apparatus 10 according to the present embodiment. The image processing apparatus 10 includes an acquisition module 11, a detection module 12, and a repair module 13. Step 01 may be implemented by the acquisition module 11. Step 02 may be implemented by the detection module 12. Step 03 may be implemented by the repair module 13. That is, the acquisition module 11 may be used to acquire the image to be repaired having the portrait of the target user. The detection module 12 may be used to detect the relative movement condition between the target user and the electronic device 20. The repair module 13 may be configured to perform the repairing process on the portrait area of the image to be repaired when the relative movement condition satisfies a predetermined condition.
Referring to fig. 3, the present application further provides an electronic device 20. The image processing method according to the embodiments of the present application can also be implemented by the electronic device 20 according to the embodiments of the present application. The electronic device 20 may be a mobile phone, a tablet computer, a laptop computer, an intelligent wearable device (e.g., a smart watch, a smart bracelet, a smart helmet, smart glasses, etc.), an intelligent mirror, an unmanned aerial vehicle, an unmanned ship, or the like, without limitation. The electronic device 20 includes a housing 22 and a processor 21. The processor 21 is mounted on the housing 22. Step 01, step 02, and step 03 may be implemented by the processor 21. That is, the processor 21 may be configured to acquire the image to be repaired having the portrait of the target user and to detect the relative movement condition between the target user and the electronic device 20. The processor 21 may also be configured to perform the repairing process on the portrait area of the image to be repaired when the relative movement condition satisfies the predetermined condition.
In the related art, a super-definition portrait algorithm can be used to process the portrait in an image so that the portrait has richer detail and higher definition. When an electronic device acquires an image containing a portrait, it usually performs the repairing process on the portrait in the image with the ultra-clear portrait algorithm. In practice, however, not all images containing a portrait need to be repaired by the ultra-clear portrait algorithm. If every image containing a portrait were repaired, the power consumption of the electronic device would increase greatly and its battery endurance would suffer.
The image processing method, the image processing apparatus 10, and the electronic device 20 according to the embodiments of the present application detect the relative movement condition between the target user and the electronic device 20, and perform the repairing process on the portrait area of the image to be repaired only when the relative movement condition satisfies the predetermined condition. It can be understood that when the target user moves, or the electronic device 20 shakes while the target user moves, the target user and the electronic device 20 are no longer relatively fixed, and there is relative movement between them. If relative movement occurs between the target user and the electronic device 20 during the imaging period of the image to be repaired, the portrait area of the image captured by the camera 23 of the electronic device 20 is blurred and its definition is low; at this time, the electronic device 20 needs to perform the repairing process on the portrait area of the image to be repaired to obtain an image whose portrait area has high definition. If the relative movement between the target user and the electronic device 20 during the imaging period of the image to be repaired is very large, the degree of blur of the portrait area may be too high; in that case the repair effect obtained by the portrait restoration algorithm is poor, and the electronic device 20 may skip the repairing process. Therefore, the portrait area is repaired only when the relative movement condition between the target user and the electronic device 20 meets the predetermined condition, and the portrait area does not need to be repaired every time an image with a portrait is acquired, so that the definition of the image is guaranteed, the power consumption of the electronic device 20 is greatly reduced, and the battery endurance of the electronic device 20 is improved.
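For orientation only (this sketch is not part of the patent disclosure), the gating logic described above can be expressed in a few lines of Python; the function names and thresholds are hypothetical, and repair_portrait_region stands in for the restoration passes described later:

```python
def repair_portrait_region(image):
    # Placeholder: a real implementation would run the portrait
    # restoration pass (e.g. the network of FIG. 14) on the portrait area.
    return image

def process_frame(image, movement_metric, lower, upper):
    """Repair only when the relative-movement metric falls within the
    predetermined range [lower, upper]; otherwise return the image as-is
    (movement too small: already sharp; too large: blur too severe)."""
    if lower <= movement_metric <= upper:
        return repair_portrait_region(image)
    return image
```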
Referring to fig. 4, in some embodiments, the step 02 of detecting the relative movement condition between the target user and the electronic device 20 includes:
021: acquiring two frames of initial images having a portrait, wherein the time interval between the shooting moments of the two frames of initial images is less than a preset time length;
022: detecting coordinate information of a preset characteristic point of a human face in each frame of initial image;
023: calculating the relative displacement between the target user and the electronic device 20 according to the two pieces of coordinate information; and
024: calculating the moving speed of the relative movement between the target user and the electronic device 20 according to the relative displacement and the time interval.
The step of performing the repairing process on the portrait area of the image to be repaired when the relative movement condition meets the predetermined condition includes:
031: and when the moving speed is within a preset speed range, performing repairing processing on the portrait area of the image to be repaired.
The image processing method further includes:
and when the moving speed is outside the predetermined speed range, not performing the repairing process on the image to be repaired.
Referring to fig. 5, in some embodiments, the detection module 12 includes a first acquisition unit 121, a first detection unit 122, a first calculation unit 123, and a second calculation unit 124. The repair module 13 includes a first repairing unit 131. Step 021 may be implemented by the first acquisition unit 121. Step 022 may be implemented by the first detection unit 122. Step 023 may be implemented by the first calculation unit 123. Step 024 may be implemented by the second calculation unit 124. Step 031 may be implemented by the first repairing unit 131. That is, the first acquisition unit 121 may be configured to acquire two frames of initial images having a portrait, where the time interval between the shooting moments of the two frames of initial images is less than a predetermined time period. The first detection unit 122 may be configured to detect the coordinate information of a predetermined feature point of the face in each frame of the initial image. The first calculation unit 123 may be configured to calculate the relative displacement between the target user and the electronic device 20 according to the two pieces of coordinate information. The second calculation unit 124 may be configured to calculate the moving speed of the relative movement between the target user and the electronic device 20 according to the relative displacement and the time interval. The first repairing unit 131 may be configured to perform the repairing process on the portrait area of the image to be repaired when the moving speed is within the predetermined speed range. When the moving speed is outside the predetermined speed range, the first repairing unit 131 does not perform the repairing process on the image to be repaired.
Referring back to fig. 3, in some embodiments, step 021, step 022, step 023, step 024, and step 031 can be implemented by the processor 21. That is, the processor 21 may be configured to acquire two frames of initial images having a portrait, detect the coordinate information of a predetermined feature point of the face in each frame of the initial images, calculate the relative displacement between the target user and the electronic device 20 according to the two pieces of coordinate information, and calculate the moving speed of the relative movement between the target user and the electronic device 20 according to the relative displacement and the time interval, where the time interval between the shooting moments of the two frames of initial images is less than the preset time length. The processor 21 may be further configured to perform the repairing process on the portrait area of the image to be repaired when the moving speed is within the predetermined speed range. When the moving speed is outside the predetermined speed range, the processor 21 does not perform the repairing process on the image to be repaired.
Specifically, referring to fig. 3 and fig. 6, the camera 23 may capture a plurality of frames of initial images, where the number of frames may be two, three, four, five, eight, ten, fifteen, twenty-four, or the like, without limitation. The processor 21 may then select two initial images from the plurality of initial images; the two initial images may be consecutive or non-consecutive, as long as the time interval between their shooting moments is less than a predetermined time length. For example, the predetermined time length may be 1s, 2s, 5s, 10s, 30s, 60s, 120s, 240s, or the like. In one example, when the two frames of initial images are two frames continuously captured by the camera 23 of the electronic device 20, one frame may be used as the image to be repaired while the other frame is used, together with the image to be repaired, to calculate the relative displacement; for example, the initial image with the later shooting time may be used as the image to be repaired, and the initial image with the earlier shooting time may be used with it to calculate the relative displacement.
Assuming that the two frames of initial images selected by the processor 21 are the initial image M1 and the initial image M2, where the shooting time of the initial image M1 is earlier than that of the initial image M2 and the initial image M2 is the image to be repaired, the processor 21 can detect the face in the initial image M1 and the face in the initial image M2, respectively. For example, the processor 21 may detect the face in each frame of the initial image according to the face detection model shown in fig. 7. The detection process of the face detection model shown in fig. 7 is as follows: the convolution and pooling layers (Convolution and Pooling) perform feature extraction on the initial image to obtain a plurality of feature images; the last convolution layer (Final Conv Feature Map) performs the last convolution on the feature images output by the convolution and pooling layers and outputs the result to the fully-connected layers (Fully-connected Layers). The fully-connected layers classify the feature images output by the last convolution layer and output the classification result to the coordinate output branch (Coordinate). The coordinate output branch outputs the position coordinates of the face in the initial image, which completes the detection of the face in the initial image. Then, the processor 21 further detects the predetermined feature point of the face in the initial image M1 and the predetermined feature point of the face in the initial image M2, and determines the coordinate information (x1, y1) of the predetermined feature point in the initial image M1 and the coordinate information (x2, y2) of the predetermined feature point in the initial image M2. In an example of the present application, the predetermined feature point is the center point of the face; in other examples, the predetermined feature point may also be the center point of the left eye, the center point of the right eye, the center point of the nose, the center point of the mouth, and the like, which are not limited herein. Subsequently, the processor 21 may calculate the relative displacement between the target user and the electronic device 20 according to the coordinate information (x1, y1) and the coordinate information (x2, y2). Illustratively, the relative displacement S = √((x2 − x1)² + (y2 − y1)²). Then, the processor 21 calculates the moving speed V according to the relative displacement S and the time interval Δt, with V = S/Δt. The processor 21 then determines whether the moving speed V is within the predetermined speed range. If the moving speed V is within the predetermined speed range, the processor 21 performs the repairing process on the portrait area of the image to be repaired. If the moving speed V is outside the predetermined speed range, the processor 21 does not perform the repairing process on the portrait area of the image to be repaired.
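As a minimal sketch of this computation (the thresholds V1 and V2 below are illustrative values of our own; the patent does not disclose concrete numbers):

```python
import math

def relative_speed(p1, p2, dt):
    """V = S / dt, where S is the Euclidean distance between the
    coordinates (x1, y1) and (x2, y2) of the predetermined face feature
    point detected in two initial frames captured dt seconds apart."""
    (x1, y1), (x2, y2) = p1, p2
    s = math.hypot(x2 - x1, y2 - y1)  # S = sqrt((x2-x1)^2 + (y2-y1)^2)
    return s / dt

V1, V2 = 5.0, 50.0                    # e.g. pixels per second, with V1 > 0
v = relative_speed((120, 96), (128, 102), dt=0.5)
should_repair = V1 <= v <= V2         # repair only inside [V1, V2]
print(v, should_repair)
```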
The image processing method, the image processing apparatus 10, and the electronic device 20 according to the embodiments of the present application obtain the moving speed directly from multiple frames of initial images captured by the camera 23, so no additional detection device needs to be installed on the electronic device 20 to measure the moving speed; this reduces the number of components the electronic device 20 requires and lowers its cost. In addition, when the two frames of initial images are continuously captured and one of them is the image to be repaired, the electronic device 20 only needs to acquire one extra frame beyond the image to be repaired, which reduces the number of images the electronic device 20 must capture. Moreover, because the image to be repaired and the other initial image are captured consecutively, the moving speed calculated from these two frames better matches the actual relative moving speed between the target user and the electronic device 20 during the imaging period of the image to be repaired, and the repair decision based on this more accurate moving speed is likewise more accurate.
Note that the predetermined speed range does not include the value 0. Specifically, assuming the predetermined speed range is [V1, V2], V1 is greater than 0. Thus, when the moving speed V satisfies V1 ≤ V ≤ V2 (i.e., the moving speed V is within the predetermined speed range), the processor 21 performs the repairing process on the portrait area of the image to be repaired; when the moving speed V satisfies 0 ≤ V < V1 or V > V2 (i.e., the moving speed V is outside the predetermined speed range), the processor 21 does not perform the repairing process on the portrait area of the image to be repaired. It can be understood that when the moving speed V satisfies 0 ≤ V < V1, the relative movement between the target user and the electronic device 20 during the imaging period of the image to be repaired is small and the definition of the image to be repaired is usually high, so the processor 21 need not perform the repairing process on the portrait area. Therefore, when the relative moving speed is low, the portrait area of the image to be repaired does not need to be repaired, which helps save the power consumption of the electronic device 20 and improve the battery endurance of the electronic device 20.
Referring to fig. 8, in some embodiments, the step 02 of detecting the relative movement condition between the target user and the electronic device 20 includes:
025: obtaining, through a sensor, the shake condition of the electronic device 20, wherein the shake condition is the shake amplitude of the electronic device 20 within a predetermined period.
The step of performing the repairing process on the portrait area of the image to be repaired when the relative movement condition meets the predetermined condition includes:
032: and when the shaking amplitude is within the preset amplitude range, carrying out restoration processing on the portrait area of the image to be restored.
The image processing method further includes:
and when the shake amplitude is outside the predetermined amplitude range, not performing the repairing process on the image to be repaired.
Referring to fig. 9, in some embodiments, the detection module 12 includes a second obtaining unit 125. The repair module 13 includes a second repairing unit 132. Step 025 may be implemented by the second obtaining unit 125, and step 032 may be implemented by the second repairing unit 132. That is, the second obtaining unit 125 may be configured to obtain the shake condition of the electronic device 20 acquired by the sensor, where the shake condition is the shake amplitude of the electronic device 20 within a predetermined period. The second repairing unit 132 may be configured to perform the repairing process on the portrait area of the image to be repaired when the shake amplitude is within the predetermined amplitude range. When the shake amplitude is outside the predetermined amplitude range, the second repairing unit 132 does not perform the repairing process on the image to be repaired.
Referring back to fig. 3, in some embodiments, step 025 and step 032 can be implemented by processor 21. That is, the processor 21 may be configured to obtain a shake condition of the electronic device 20 obtained by the sensor, where the shake condition is a shake amplitude of the electronic device 20 within a predetermined period. The processor 21 may be further configured to perform a repairing process on the portrait area of the image to be repaired when the shaking amplitude is within the predetermined amplitude range. When the dither amplitude is outside the predetermined amplitude range, the processor 21 does not perform the restoration process on the image to be restored.
Specifically, the shake condition of the electronic device 20 may be detected by a sensor, and the processor 21 may read the detected shake condition from the sensor. The sensor may be, for example, an accelerometer, a gyroscope, or the like. In one embodiment of the present application, the sensor is a gyroscope. The gyroscope may detect the angular acceleration of the electronic device 20. From the angular acceleration, the processor 21 may calculate the shake amplitude of the electronic device 20 within a predetermined period, where the predetermined period may be the imaging period of the image to be repaired. Subsequently, the processor 21 determines whether the shake amplitude is within the predetermined amplitude range. If the shake amplitude is within the predetermined amplitude range, the processor 21 performs the repairing process on the portrait area of the image to be repaired. If the shake amplitude is outside the predetermined amplitude range, the processor 21 does not perform the repairing process on the portrait area of the image to be repaired. In addition, the gyroscope is generally capable of detecting the angular acceleration of the electronic device 20 in three spatial directions, and the processor 21 may calculate, from the angular acceleration in each direction, the shake amplitude of the electronic device 20 in the corresponding direction. The processor 21 then determines whether the shake amplitudes in the three directions are all within the predetermined amplitude range. If the shake amplitudes in all three directions are within the predetermined amplitude range, the processor 21 performs the repairing process on the portrait area of the image to be repaired. If the shake amplitude in any direction is outside the predetermined amplitude range, the processor 21 does not perform the repairing process on the portrait area of the image to be repaired. In this way, the processor 21 can acquire the shake condition of the electronic device 20 more comprehensively and make an accurate repair decision according to it.
The image processing method, the image processing apparatus 10, and the electronic device 20 according to the embodiments of the present application use the shake condition detected by the sensor directly to decide whether to perform the repairing process on the image to be repaired, without having to make that determination by processing multiple frames of initial images, so the amount of data the electronic device 20 must process can be reduced, and the power consumption of the electronic device 20 can be reduced accordingly.
Note that the predetermined amplitude range does not include the value 0. Specifically, assuming the predetermined amplitude range is [A1, A2], A1 is greater than 0. Thus, when the shake amplitude A satisfies A1 ≤ A ≤ A2 (i.e., the shake amplitude A is within the predetermined amplitude range), the processor 21 performs the repairing process on the portrait area of the image to be repaired; when the shake amplitude A satisfies 0 ≤ A < A1 or A > A2 (i.e., the shake amplitude A is outside the predetermined amplitude range), the processor 21 does not perform the repairing process on the portrait area of the image to be repaired. It can be understood that when the shake amplitude A satisfies 0 ≤ A < A1, the relative movement between the target user and the electronic device 20 during the imaging period of the image to be repaired is small and the definition of the image to be repaired is usually high, so the processor 21 need not perform the repairing process on the portrait area. Therefore, when the shake amplitude of the electronic device 20 is small, the portrait area of the image to be repaired does not need to be repaired, which helps save the power consumption of the electronic device 20 and improve the battery endurance of the electronic device 20.
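A minimal sketch of this per-axis check, assuming illustrative amplitude values and thresholds (the patent does not disclose concrete numbers):

```python
def should_repair_from_shake(amplitudes, a1, a2):
    """Repair only if the shake amplitude on every axis lies within the
    predetermined range [A1, A2], with A1 > 0 (see the text above)."""
    return all(a1 <= a <= a2 for a in amplitudes)

# Illustrative numbers only: per-axis shake amplitudes computed from the
# gyroscope readings over the imaging period of the image to be repaired.
shake_xyz = (0.02, 0.05, 0.03)
print(should_repair_from_shake(shake_xyz, a1=0.01, a2=0.10))  # True
```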
Referring to fig. 10 and fig. 11, in some embodiments, the step 03 of performing the repairing process on the portrait area of the image to be repaired includes:
033: detecting the portrait area in the image to be repaired;
034: performing convolution on the portrait area multiple times to obtain a plurality of feature images;
035: performing upsampling multiple times and deconvolution at least once on the feature image output by the last convolution to obtain a residual image; and
036: fusing the residual image and the portrait area to obtain a repaired image.
The step 035 of performing upsampling multiple times and deconvolution at least once on the feature image output by the last convolution to obtain a residual image includes:
0351: in the first upsampling process, performing upsampling and deconvolution on the feature image output by the last convolution;
0352: in the second and subsequent upsampling processes, fusing the image obtained by the previous upsampling, the feature image corresponding to the size of the image obtained by the previous upsampling, and the image obtained by the previous N times of deconvolution, and performing upsampling, or upsampling and deconvolution, on the fused image; and
0353: fusing the image obtained by the last upsampling and the image obtained by the previous N times of deconvolution to obtain a residual image, where N ≥ 1 and N ∈ ℕ+.
Referring to fig. 12 and fig. 13, in some embodiments, the repair module 13 includes a second detection unit 133, a first processing unit 134, a second processing unit 135, and a fusion unit 136. The second processing unit 135 includes a first processing subunit 1351, a second processing subunit 1352, and a fusion subunit 1353. Step 033 may be implemented by the second detection unit 133. Step 034 may be implemented by the first processing unit 134. Step 035 may be implemented by the second processing unit 135. Step 036 may be implemented by the fusion unit 136. Step 0351 may be implemented by the first processing subunit 1351. Step 0352 may be implemented by the second processing subunit 1352. Step 0353 may be implemented by the fusion subunit 1353. That is, the second detection unit 133 may be used to detect the portrait area in the image to be repaired. The first processing unit 134 may be configured to perform convolution on the portrait area multiple times to obtain a plurality of feature images. The second processing unit 135 may be configured to perform upsampling multiple times and deconvolution at least once on the feature image output by the last convolution to obtain a residual image. The fusion unit 136 may be configured to fuse the residual image and the portrait area to obtain the repaired image. The first processing subunit 1351 may be configured to perform upsampling and deconvolution on the feature image output by the last convolution in the first upsampling process. The second processing subunit 1352 may be configured to, in the second and subsequent upsampling processes, fuse the image obtained by the previous upsampling, the feature image corresponding to the size of the image obtained by the previous upsampling, and the image obtained by the previous N times of deconvolution, and perform upsampling, or upsampling and deconvolution, on the fused image. The fusion subunit 1353 may be configured to fuse the image obtained by the last upsampling and the image obtained by the previous N times of deconvolution to obtain the residual image, where N ≥ 1 and N ∈ ℕ+.
Referring back to fig. 3, in some embodiments, step 033, step 034, step 035, step 0351, step 0352, step 0353, and step 036 may be implemented by the processor 21. That is, the processor 21 may also be configured to detect the portrait area in the image to be repaired and perform convolution on the portrait area multiple times to obtain a plurality of feature images. The processor 21 may be further configured to perform upsampling multiple times and deconvolution at least once on the feature image output by the last convolution to obtain a residual image. The processor 21 may also be configured to fuse the residual image and the portrait area to obtain the repaired image. When performing upsampling multiple times and deconvolution at least once on the feature image output by the last convolution to obtain the residual image, the processor 21 is specifically configured to: in the first upsampling process, perform upsampling and deconvolution on the feature image output by the last convolution; in the second and subsequent upsampling processes, fuse the image obtained by the previous upsampling, the feature image corresponding to the size of the image obtained by the previous upsampling, and the image obtained by the previous N times of deconvolution, and perform upsampling, or upsampling and deconvolution, on the fused image; and fuse the image obtained by the last upsampling and the image obtained by the previous N times of deconvolution to obtain the residual image, where N ≥ 1 and N ∈ ℕ+.
Specifically, referring to fig. 3, fig. 7 and fig. 14, the processor 21 may detect the portrait area in the image to be repaired with the face detection model shown in fig. 7. The processor 21 then inputs the portrait area into a face restoration model for restoration. In one example, the face restoration model may be the one shown in fig. 14. As shown in fig. 14, after the portrait area is input into the face restoration model, the processor 21 first performs a first convolution on the portrait area and then a first pooling on the feature image after the first convolution. Subsequently, the processor 21 performs a second convolution on the first-pooled feature image and then a second pooling on the second-convolved feature image. Subsequently, the processor 21 performs a third convolution on the second-pooled feature image and then a third pooling on the third-convolved feature image. Subsequently, the processor 21 performs a fourth convolution on the third-pooled feature image and then a fourth pooling on the fourth-convolved feature image. Subsequently, the processor 21 performs a fifth convolution on the fourth-pooled feature image. Subsequently, the processor 21 performs a first upsampling and a first deconvolution on the fifth-convolved feature image, where the first deconvolution actually comprises two deconvolution operations: one outputs an image of a first size, and the other outputs an image of a second size, the second size being larger than the first size. Subsequently, the processor 21 fuses the image obtained by the first upsampling with the feature image corresponding to its size (i.e., the feature image after the fourth convolution; in fig. 14 the convolution layer of the fourth convolution is linked to the upsampling layer of the second upsampling), and performs a second upsampling and a second deconvolution on the fused image, where the second deconvolution likewise comprises two deconvolution operations: one outputs an image of the second size, and the other outputs an image of a third size, the third size being larger than the second size. Subsequently, the processor 21 fuses the image obtained by the second upsampling, the feature image corresponding to its size (i.e., the feature image after the third convolution; in fig. 14 the convolution layer of the third convolution is linked to the upsampling layer of the third upsampling), and the image obtained two deconvolutions earlier (i.e., the image of the first size obtained by the first deconvolution), and performs a third upsampling on the fused image. Subsequently, the processor 21 fuses the image obtained by the third upsampling, the feature image corresponding to its size (i.e., the feature image after the second convolution; in fig. 14 the convolution layer of the second convolution is linked to the upsampling layer of the fourth upsampling), the image obtained two deconvolutions earlier (i.e., the image of the second size obtained by the first deconvolution), and the image obtained by the previous deconvolution (i.e., the image of the second size obtained by the second deconvolution), and performs a fourth upsampling on the fused image. Subsequently, the processor 21 fuses the image obtained by the fourth upsampling with the image obtained by the previous deconvolution (i.e., the image of the third size obtained by the second deconvolution) to obtain the residual image. Finally, the processor 21 fuses the residual image with the portrait area that was input into the face restoration model to obtain the repaired image. The face in the repaired image has rich detail and high definition.
The image processing method, the image processing apparatus 10, and the electronic device 20 according to the embodiments of the present application extract the features of the portrait area through multiple convolutions and multiple pooling operations, amplify the extracted features through multiple upsampling operations, and, in part of the upsampling operations, process images into which deconvolution results have been fused, so as to achieve feature transfer and image size expansion. In addition, part of the upsampling operations process an image fused with the feature image whose size corresponds to the image obtained by the previous upsampling, i.e., the feature extraction layer of the corresponding level is connected, so that high-level semantic features can be transferred more fully during upsampling, the details of portrait restoration become more obvious, and the restoration of portrait details becomes more refined. Furthermore, in the face restoration model shown in fig. 14, the image obtained by the first deconvolution is not passed directly to the second upsampling operation to be fused with the image to be upsampled there, but is instead passed to the third and fourth upsampling operations; similarly, the image obtained by the second deconvolution is not passed directly to the third upsampling operation but to the fourth. In this way high-level features and low-level features can be combined, the features become richer, and the details are restored more accurately. Finally, in the face restoration model shown in fig. 14, the images obtained by the second and subsequent upsampling operations are no longer deconvolved, which avoids the blocking artifacts that deconvolution of low-level features would cause.
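To make the shape of this architecture concrete, here is a deliberately simplified PyTorch sketch. It keeps the ideas of a convolution/pooling encoder, upsampling steps fused with same-size encoder features, and a residual image added back onto the input portrait area, but the channel counts, depth, and the plain skip scheme (rather than the multi-size deconvolution branches and cross-stage forwarding of fig. 14) are our own assumptions, not the patent's model:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FaceRestorationSketch(nn.Module):
    """Simplified stand-in for the FIG. 14 network: conv/pool encoder,
    upsampling fused with same-size encoder features, residual output."""
    def __init__(self, ch=32):
        super().__init__()
        self.enc1 = nn.Conv2d(3, ch, 3, padding=1)            # 1st convolution
        self.enc2 = nn.Conv2d(ch, ch * 2, 3, padding=1)       # 2nd convolution
        self.enc3 = nn.Conv2d(ch * 2, ch * 4, 3, padding=1)   # bottleneck convolution
        self.dec2 = nn.Conv2d(ch * 4 + ch * 2, ch * 2, 3, padding=1)
        self.dec1 = nn.Conv2d(ch * 2 + ch, ch, 3, padding=1)
        self.out = nn.Conv2d(ch, 3, 3, padding=1)             # produces the residual image

    def forward(self, x):
        f1 = F.relu(self.enc1(x))                             # full resolution
        f2 = F.relu(self.enc2(F.max_pool2d(f1, 2)))           # 1/2 resolution (pooling)
        f3 = F.relu(self.enc3(F.max_pool2d(f2, 2)))           # 1/4 resolution (pooling)
        u2 = F.interpolate(f3, scale_factor=2.0)              # 1st upsampling
        u2 = F.relu(self.dec2(torch.cat([u2, f2], dim=1)))    # fuse same-size encoder feature
        u1 = F.interpolate(u2, scale_factor=2.0)              # 2nd upsampling
        u1 = F.relu(self.dec1(torch.cat([u1, f1], dim=1)))    # fuse same-size encoder feature
        residual = self.out(u1)
        return x + residual                                   # step 036: fuse residual with portrait

portrait = torch.rand(1, 3, 128, 128)    # illustrative portrait area
restored = FaceRestorationSketch()(portrait)
```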
Referring to fig. 15, in some embodiments, the step 03 of performing a repairing process on a portrait area of an image to be repaired includes:
037: acquiring a reference image, wherein the definition of the reference image is higher than the preset definition;
038: and carrying out portrait hyper-resolution algorithm processing on the image to be restored according to the reference image to obtain a restored image.
Referring to fig. 16, in some embodiments, the repair module 13 includes a third obtaining unit 137 and a third processing unit 138. Step 037 may be implemented by the third obtaining unit 137. Step 038 may be implemented by the third processing unit 138. That is, the third obtaining unit 137 may be configured to acquire a reference image whose definition is higher than the predetermined definition. The third processing unit 138 may be configured to perform the portrait hyper-resolution algorithm processing on the image to be repaired according to the reference image to obtain the repaired image.
Referring back to fig. 3, in some embodiments, step 037 and step 038 can be implemented by processor 21. That is, the processor 21 may be configured to obtain a reference image, wherein the sharpness of the reference image is higher than a predetermined sharpness. The processor 21 may also be configured to perform a portrait hyper-resolution algorithm process on the image to be repaired according to the reference image to obtain a repaired image.
Specifically, the reference image may include a preset user portrait or a preset standard portrait. Taking the electronic device 20 being a mobile phone as an example, the preset user portrait may be a portrait of the user captured in advance on the electronic device 20; it should be noted that the preset user portrait may be an ID photograph in the user's album or another image containing a portrait with relatively high definition. When no user portrait is preset in the electronic device 20, a preset standard portrait may be obtained; the standard portrait may be any high-definition portrait from the same region as the user downloaded from the network, such as a high-definition poster. The definition of the preset user portrait and that of the preset standard portrait are both higher than the predetermined definition; the predetermined definition may be set in advance, and only an image whose definition exceeds it can serve as the reference image (the preset user portrait or the preset standard portrait), so as to achieve a better image processing effect.
Referring to fig. 17, in some embodiments, the step 037 of acquiring a reference image includes:
0371: carrying out face detection on a portrait area of an image to be restored and a preset user portrait;
0372: judging whether the similarity between the face of the image to be restored and the face of a preset user is greater than or equal to a first preset similarity or not;
0373: when the similarity between the face of the image to be restored and the face of a preset user is greater than or equal to a first preset similarity, taking the portrait of the preset user as a reference image;
0374: and when the similarity between the face of the image to be restored and the face of the preset user is smaller than a first preset similarity, acquiring a preset standard portrait as a reference image.
Referring to fig. 18, in some embodiments, the third obtaining unit 137 includes a detecting subunit 1371, a judging subunit 1372, a determining subunit 1373, and a first obtaining subunit 1374. Step 0371 may be implemented by the detecting subunit 1371. Step 0372 may be implemented by the judging subunit 1372. Step 0373 may be implemented by the determining subunit 1373. Step 0374 may be implemented by the first obtaining subunit 1374. That is, the detecting subunit 1371 may be configured to perform face detection on the portrait area of the image to be restored and the preset user portrait. The judging subunit 1372 may be configured to judge whether the similarity between the face of the image to be restored and the face of the preset user is greater than or equal to the first preset similarity. The determining subunit 1373 may be configured to take the preset user portrait as the reference image when the similarity between the face of the image to be restored and the face of the preset user is greater than or equal to the first preset similarity. The first obtaining subunit 1374 may be configured to obtain the preset standard portrait as the reference image when the similarity between the face of the image to be restored and the face of the preset user is smaller than the first preset similarity.
Referring back to fig. 3, in some embodiments, step 0371, step 0372, step 0373 and step 0374 may be implemented by the processor 21. That is, the processor 21 may be configured to perform face detection on the portrait area of the image to be repaired and the portrait of the preset user, and determine whether the similarity between the face of the image to be repaired and the face of the preset user is greater than or equal to the first preset similarity. The processor 21 may be further configured to use the preset user portrait as a reference image when the similarity between the face of the image to be restored and the face of the preset user is greater than or equal to a first preset similarity. The processor 21 may further be configured to obtain a preset standard portrait as a reference image when the similarity between the face of the image to be restored and the face of the preset user is smaller than a first preset similarity.
Specifically, the processor 21 may perform face detection on the image to be restored and the portrait of the preset user, for example, may perform face detection on the image to be restored and the portrait of the preset user by using a face detection model shown in fig. 7. Subsequently, the processor 21 may further detect the face feature points in the image to be restored and the face feature points in the preset user figure. The processor 21 then compares the facial feature points of the two images. If the similarity of the face feature points of the two images is greater than the first preset similarity, it indicates that the portrait area of the image to be restored and the preset user portrait are the same person (i.e., the target user and the preset user are the same person), and at this time, the processor 21 may perform portrait hyper-resolution algorithm (i.e., ultra-clear portrait algorithm) processing on the portrait area of the image to be restored according to the preset user portrait to obtain the restored image. The two images of the same person are used for processing, so that the portrait in the obtained restored image is more similar to the target user, the portrait is more natural, and the user experience is better. If the similarity of the face feature points of the two images is lower than the first preset similarity, it is indicated that the portrait area of the image to be restored is not the same person as the portrait of the preset user (that is, the target user is not the same person as the preset user), and at this time, the standard portrait is used as the reference image to perform the hyper-resolution algorithm processing, so that the obtained effect is better. Therefore, the processor 21 may perform the portrait hyper-resolution algorithm processing on the portrait area of the image to be repaired according to the preset standard portrait to obtain the repaired image.
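The selection logic of steps 0371-0374 can be sketched as follows; using cosine similarity over face embeddings is our assumption, since the patent only requires comparing the similarity of face feature points against the first preset similarity:

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def choose_reference(query_feat, user_feat, user_img, standard_img, first_sim=0.8):
    """If the face in the image to be repaired matches the preset user's
    face (similarity >= the first preset similarity), use the preset user
    portrait as the reference image; otherwise fall back to the preset
    standard portrait (steps 0371-0374)."""
    if cosine(query_feat, user_feat) >= first_sim:
        return user_img
    return standard_img

# Illustrative 128-dimensional embeddings; a real system would derive them
# from the face feature points found after FIG. 7 style face detection.
rng = np.random.default_rng(0)
q, u = rng.random(128), rng.random(128)
ref = choose_reference(q, u, user_img="user.jpg", standard_img="standard.jpg")
```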
Referring to fig. 19, in some embodiments, the step 038 of performing the portrait hyper-resolution algorithm processing on the image to be repaired according to the reference image to obtain the repaired image includes:
0381: acquiring a first characteristic diagram of an image to be repaired after upsampling;
0382: acquiring a second characteristic diagram of the reference image after up-sampling and down-sampling;
0383: acquiring a third feature map of the reference image without up-sampling and down-sampling;
0384: acquiring, as a reference feature, the feature in the second feature map whose similarity with the first feature map exceeds a second preset similarity;
0385: acquiring the feature in the second feature map whose similarity with the reference feature exceeds a third preset similarity to obtain an exchange feature map;
0386: merging the exchange feature map and the first feature map to obtain a fourth feature map;
0387: amplifying the fourth feature map by a preset multiple to obtain a fifth feature map; and
0388: taking the fifth feature map as the image to be repaired and performing the above steps in a loop until the obtained fifth feature map reaches the target magnification; the fifth feature map at the target magnification is taken as the repaired image.
Referring to fig. 20, in some embodiments, the third processing unit 138 includes a second obtaining sub-unit 1381, a third obtaining sub-unit 1382, a fourth obtaining sub-unit 1383, a fifth obtaining sub-unit 1384, a sixth obtaining sub-unit 1385, a merging sub-unit 1386, an amplifying sub-unit 1387, and a third processing sub-unit 1388. Step 0381 may be implemented by the second obtaining sub-unit 1381. Step 0382 may be implemented by the third obtaining sub-unit 1382. Step 0383 may be implemented by the fourth obtaining sub-unit 1383. Step 0384 may be implemented by the fifth obtaining sub-unit 1384. Step 0385 may be implemented by the sixth obtaining sub-unit 1385. Step 0386 may be implemented by the merging sub-unit 1386. Step 0387 may be implemented by the amplifying sub-unit 1387. Step 0388 may be implemented by the third processing sub-unit 1388. That is, the second obtaining sub-unit 1381 may be configured to obtain the first feature map of the image to be repaired after upsampling. The third obtaining sub-unit 1382 may be configured to obtain the second feature map of the reference image after up-sampling and down-sampling. The fourth obtaining sub-unit 1383 may be configured to obtain the third feature map of the reference image without up-sampling and down-sampling. The fifth obtaining sub-unit 1384 may be configured to obtain, as the reference feature, the feature in the second feature map whose similarity with the first feature map exceeds the second preset similarity. The sixth obtaining sub-unit 1385 may be configured to obtain the feature in the second feature map whose similarity with the reference feature exceeds the third preset similarity to obtain the exchange feature map. The merging sub-unit 1386 may be configured to merge the exchange feature map with the first feature map to obtain the fourth feature map. The amplifying sub-unit 1387 may be configured to amplify the fourth feature map by the preset multiple to obtain the fifth feature map. The third processing sub-unit 1388 may be configured to take the fifth feature map as the image to be repaired and perform the above steps in a loop until the obtained fifth feature map reaches the target magnification; the fifth feature map at the target magnification is the repaired image.
Referring back to fig. 3, in some embodiments, steps 0381, 0382, 0383, 0384, 0385, 0386, 0387, and 0388 may be implemented by the processor 21. That is, the processor 21 may be configured to acquire the first feature map of the up-sampled image to be repaired, acquire the second feature map of the reference image after up-sampling and down-sampling, and acquire the third feature map of the reference image without up-sampling and down-sampling. The processor 21 may be further configured to acquire, as a reference feature, a feature in the second feature map whose similarity with the first feature map exceeds the second preset similarity, and to acquire, from the third feature map, the feature whose similarity with the reference feature exceeds the third preset similarity, so as to obtain the exchange feature map. The processor 21 may be further configured to merge the exchange feature map and the first feature map to obtain the fourth feature map, and to amplify the fourth feature map by a preset multiple to obtain the fifth feature map. The processor 21 may be further configured to take the fifth feature map as the image to be repaired and perform the above steps in a loop until the magnification of the obtained fifth feature map reaches the target magnification, the fifth feature map at the target magnification being the repaired image.
Specifically, the up-sampling may be understood as performing an enlargement process on the image to be restored or the reference image, and the down-sampling may be understood as performing a reduction process on the reference image.
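For illustration, the two operations can be sketched in Python with PyTorch interpolation as follows; the bilinear mode and the 2x factor are assumptions chosen for the example, since the embodiments do not fix a scale factor or an interpolation method:

    import torch
    import torch.nn.functional as F

    def upsample(img: torch.Tensor, factor: float = 2.0) -> torch.Tensor:
        # Enlarge an NCHW image tensor (the "enlargement process" above).
        return F.interpolate(img, scale_factor=factor, mode="bilinear", align_corners=False)

    def downsample(img: torch.Tensor, factor: float = 2.0) -> torch.Tensor:
        # Reduce an NCHW image tensor (the "reduction process" above).
        return F.interpolate(img, scale_factor=1.0 / factor, mode="bilinear", align_corners=False)

    # Blurring the reference image as described below: down-sample, then up-sample back.
    reference = torch.rand(1, 3, 128, 128)             # stand-in reference image
    blurred_reference = upsample(downsample(reference))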
More specifically, referring to fig. 21, in some embodiments, the step 0381 of acquiring a first feature map of the image to be repaired after up-sampling includes:
03811: up-sampling the image to be repaired; and
03812: inputting the up-sampled image to be repaired into a convolutional neural network for feature extraction to obtain the first feature map.
The step 0382 of acquiring a second feature map of the reference image after up-sampling and down-sampling includes:
03821: down-sampling the reference image;
03822: up-sampling the down-sampled reference image; and
03823: inputting the up-sampled reference image into the convolutional neural network for feature extraction to obtain the second feature map.
The step 0383 of acquiring a third feature map of the reference image without up-sampling and down-sampling includes:
03831: inputting the reference image into the convolutional neural network for feature extraction to obtain the third feature map.
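As an illustration of steps 03811 to 03831, the sketch below derives the three feature maps. The two-layer convolutional extractor is purely a stand-in assumption: the embodiments only require a deep-learned convolutional neural network, and in practice a pretrained backbone would take its place.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Stand-in for the deep-learned convolutional neural network.
    extractor = nn.Sequential(
        nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
    )

    def up(img, factor=2.0):
        return F.interpolate(img, scale_factor=factor, mode="bilinear", align_corners=False)

    def first_feature_map(image_to_repair):
        # Steps 03811-03812: up-sample the image to be repaired, then extract features.
        return extractor(up(image_to_repair))

    def second_feature_map(reference):
        # Steps 03821-03823: down-sample, up-sample back (blurring), then extract features.
        return extractor(up(up(reference, 0.5)))

    def third_feature_map(reference):
        # Step 03831: extract features from the reference image as-is.
        return extractor(reference)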
The processor 21 may up-sample (enlarge) the image to be repaired and then input the up-sampled image into the convolutional neural network for feature extraction, so as to obtain the first feature map. The first feature map can be understood as an enlarged view of the portrait area in the image to be repaired, and it contains the various features of the portrait, such as the facial features, skin, hair, and contours. Because the first feature map is obtained from the enlarged image to be repaired, its definition is low, while the definition of the reference image is relatively high; the processor 21 therefore also down-samples (reduces) the reference image and then up-samples the result, thereby blurring the reference image and improving the similarity between the second feature map and the first feature map. The second feature map likewise contains features such as the facial features, skin, hair, and contours. In addition, the processor 21 inputs the reference image into the convolutional neural network for feature extraction to obtain the third feature map. It should be noted that the convolutional neural network has been trained by deep learning and can extract features from an input image with high accuracy.
Subsequently, the processor 21 compares the features in the second feature map with the features in the first feature map, determines the similarity between them, and compares that similarity with the second preset similarity. If the similarity is greater than or equal to the second preset similarity, which indicates that the feature in the second feature map is similar to the corresponding feature in the first feature map, the processor 21 takes that feature of the second feature map as a reference feature. The processor 21 then compares the third feature map with the reference feature, determines the similarity between them, and compares that similarity with the third preset similarity; if it is greater than or equal to the third preset similarity, the corresponding exchange feature map is obtained. Subsequently, the processor 21 merges the exchange feature map with the first feature map to obtain the fourth feature map, and then enlarges the fourth feature map by a preset multiple to obtain the fifth feature map. The processor 21 then determines the magnification of the fifth feature map. If the magnification equals the target magnification, the processor 21 takes the fifth feature map as the repaired image; otherwise, the fifth feature map is taken as a new image to be repaired and the above steps are repeated. It should be noted that the second preset similarity and the third preset similarity may be the same as the first preset similarity.
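This comparison-and-exchange procedure resembles reference-based super-resolution with patch-wise feature swapping. The following is a minimal sketch of one pass of steps 0381 to 0388, building on the up and feature-map helpers sketched above; the 3x3 patch size, cosine similarity, thresholds, concatenation-based merging, and the small decoder are all illustrative assumptions rather than the patent's specified implementation, and the extractor and decoder weights would in practice come from training:

    import torch
    import torch.nn.functional as F

    def swap_features(f1, f2, f3, patch=3, thresh=0.0):
        # f1: first feature map (up-sampled image to be repaired); f2: second
        # feature map (blurred reference); f3: third feature map (original
        # reference). Cosine similarity over patches stands in for the
        # unspecified similarity measure; thresh plays the role of the second
        # and third preset similarities.
        pad = patch // 2
        ref = F.unfold(f2, patch, padding=pad)            # (1, C*p*p, L) patches
        ref = F.normalize(ref.permute(0, 2, 1), dim=2)    # (1, L, C*p*p)
        lr = F.normalize(F.unfold(f1, patch, padding=pad), dim=1)  # (1, C*p*p, L')
        sim = torch.bmm(ref, lr)                          # (1, L, L') similarities
        best_sim, best_idx = sim.max(dim=1)               # best reference patch per position
        hr = F.unfold(f3, patch, padding=pad)             # sharp reference patches
        swapped = hr[:, :, best_idx[0]]                   # exchange feature map patches
        out = F.fold(swapped, f1.shape[-2:], patch, padding=pad)
        cnt = F.fold(torch.ones_like(swapped), f1.shape[-2:], patch, padding=pad)
        out = out / cnt                                   # average overlapping patches
        keep = (best_sim.view(1, 1, *f1.shape[-2:]) > thresh).float()
        return out * keep + f1 * (1.0 - keep)             # fall back to f1 below threshold

    decoder = torch.nn.Conv2d(128, 3, kernel_size=3, padding=1)  # stand-in decoder

    def portrait_super_resolve(image, reference, target_scale=4, step=2):
        x, scale = image, 1
        while scale < target_scale:                    # step 0388: loop to target magnification
            f1 = first_feature_map(x)                  # step 0381 (doubles the size of x)
            f2 = second_feature_map(reference)         # step 0382 (loop-invariant)
            f3 = third_feature_map(reference)          # step 0383 (loop-invariant)
            swapped = swap_features(f1, f2, f3)        # steps 0384-0385
            fourth = torch.cat([swapped, f1], dim=1)   # step 0386: merge feature maps
            x = decoder(fourth)                        # step 0387: fifth feature map as new image
            scale *= step
        return x                                       # repaired image at target magnification

In this sketch the preset multiple of step 0387 is realised by the 2x up-sampling inside first_feature_map, so with target_scale = 4 the loop runs twice, doubling the magnification on each pass.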
Referring to fig. 22, the present application further provides a non-volatile computer readable storage medium 30. The non-volatile computer readable storage medium 30 contains computer readable instructions. The computer readable instructions, when executed by the processor 21, cause the processor 21 to perform the image processing method of any one of the above embodiments.
For example, referring to fig. 1 and 22 in conjunction, the computer readable instructions, when executed by the processor 21, cause the processor 21 to perform the steps of the following image processing method:
01: acquiring an image to be restored with a portrait of a target user;
02: detecting the relative movement condition between the target user and the electronic device 20; and
03: when the relative movement condition meets a preset condition, performing repairing processing on the portrait area of the image to be restored.
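For illustration, the gating in steps 01 to 03 can be sketched as follows; the function names and the speed-range bounds are placeholders, since the concrete values of the preset condition are left to the implementation:

    def process_image(image, moving_speed, repair_fn, speed_range=(0.0, 50.0)):
        # Steps 01-03: repair the portrait area only when the relative movement
        # condition (here reduced to a moving speed) falls within the preset
        # range; otherwise return the image unchanged to save power.
        low, high = speed_range
        if low <= moving_speed <= high:
            return repair_fn(image)
        return image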
As another example, referring to fig. 4 and 22, when executed by the processor 21, the computer readable instructions cause the processor 21 to perform the steps of the following image processing method:
021: acquiring two frames of initial images with portrait, wherein the time interval between the shooting moments of the two frames of initial images is less than a preset time length;
022: detecting coordinate information of a preset characteristic point of a human face in each frame of initial image;
023: calculating the relative displacement between the target user and the electronic device 20 according to the two pieces of coordinate information; and
024: calculating the moving speed of the relative movement between the target user and the electronic device 20 according to the relative displacement and the time interval.
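As an illustration of steps 021 to 024, the sketch below derives the moving speed from the coordinates of the same preset facial feature point in two frames; measuring displacement in pixels, the example values, and obtaining the coordinates from an external face detector are all assumptions of the sketch:

    import math

    def relative_moving_speed(coord1, coord2, time_interval):
        # Steps 021-024: given the pixel coordinates of the same preset facial
        # feature point (e.g. the nose tip) in two initial frames captured
        # time_interval seconds apart, derive displacement and speed.
        dx = coord2[0] - coord1[0]
        dy = coord2[1] - coord1[1]
        displacement = math.hypot(dx, dy)     # step 023: relative displacement
        return displacement / time_interval   # step 024: moving speed

    # Illustrative values: the feature point moved from (412, 300) to (418, 308)
    # between two frames captured 33 ms apart.
    speed = relative_moving_speed((412, 300), (418, 308), 0.033)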
The nonvolatile computer readable storage medium 30 may be disposed in the image processing apparatus 10 (shown in fig. 2) or the electronic device 20 (shown in fig. 3), or may be disposed in a cloud server. When the nonvolatile computer readable storage medium 30 is disposed in the cloud server, the image processing apparatus 10 or the electronic device 20 can communicate with the cloud server to obtain the corresponding computer readable instructions.
It will be understood that the computer readable instructions comprise computer program code. The computer program code may be in the form of source code, object code, an executable file, some intermediate form, or the like. The nonvolatile computer readable storage medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), a software distribution medium, and the like.
The processor 21 may also be referred to as a driver board. The driver board may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, or the like.
In the description herein, references to the description of the terms "one embodiment," "certain embodiments," "an illustrative embodiment," "an example," "a specific example" or "some examples" or the like mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code that include one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functionality involved, as would be understood by those skilled in the art of the present application.
Although embodiments of the present application have been shown and described above, it is to be understood that the above embodiments are exemplary and not to be construed as limiting the present application, and that changes, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (14)

1. An image processing method for an electronic device, the image processing method comprising:
acquiring an image to be restored with a portrait of a target user;
detecting a relative movement condition between the target user and the electronic equipment; and
when the relative movement condition meets a preset condition, performing repairing processing on the portrait area of the image to be restored.
2. The image processing method according to claim 1, wherein the detecting of the relative movement condition between the target user and the electronic device comprises:
acquiring two frames of initial images with the portrait, wherein the time interval between the shooting moments of the two frames of initial images is less than a preset time length;
detecting coordinate information of preset feature points of the human face in each frame of the initial image;
calculating the relative displacement between the target user and the electronic equipment according to the two pieces of coordinate information; and
calculating the moving speed of the relative movement between the target user and the electronic equipment according to the relative displacement and the time interval.
3. The image processing method according to claim 2, wherein two frames of the initial image are two frames of images continuously captured by a camera of the electronic device, and one frame of the initial image is the image to be restored.
4. The image processing method according to claim 2, wherein, when the relative movement condition meets a preset condition, the performing of repairing processing on the portrait area of the image to be restored comprises:
when the moving speed is within a preset speed range, performing repairing processing on the portrait area of the image to be restored;
the image processing method further comprises:
when the moving speed is outside the preset speed range, not performing repairing processing on the image to be restored.
5. The image processing method according to claim 1, wherein the detecting of the relative movement condition between the target user and the electronic device comprises:
acquiring the shaking condition of the electronic equipment through a sensor.
6. The image processing method according to claim 5, wherein the shaking condition is a shaking amplitude of the electronic device within a preset time period, and, when the relative movement condition meets a preset condition, the performing of repairing processing on the portrait area of the image to be restored comprises:
when the shaking amplitude is within a preset amplitude range, performing repairing processing on the portrait area of the image to be restored;
the image processing method further comprises:
when the shaking amplitude is outside the preset amplitude range, not performing repairing processing on the portrait area of the image to be restored.
7. The image processing method according to claim 1, wherein the performing of repairing processing on the portrait area of the image to be restored comprises:
detecting the portrait area in the image to be restored;
performing a plurality of convolutions on the portrait area to obtain a plurality of feature images;
performing up-sampling a plurality of times and deconvolution at least once on the feature image output by the last convolution to obtain a residual image; and
fusing the residual image and the portrait area to obtain a repaired image.
8. The method according to claim 7, wherein the performing of up-sampling a plurality of times and deconvolution at least once on the feature image output by the last convolution to obtain a residual image comprises:
in the first up-sampling process, performing up-sampling and deconvolution on the feature image output by the last convolution;
in the second and subsequent up-sampling processes, fusing the image obtained by the previous up-sampling, the feature image corresponding in size to the image obtained by the previous up-sampling, and the images obtained by the previous N deconvolutions, and performing up-sampling, or up-sampling and deconvolution, on the fused image; and
fusing the image obtained by the last up-sampling and the images obtained by the previous N deconvolutions to obtain the residual image, wherein N ≥ 1 and N ∈ N+.
9. The image processing method according to claim 1, wherein the performing of repairing processing on the portrait area of the image to be restored comprises:
acquiring a reference image, wherein the definition of the reference image is higher than a preset definition; and
performing portrait hyper-resolution algorithm processing on the image to be restored according to the reference image to obtain a repaired image.
10. The image processing method according to claim 9, wherein the acquiring a reference image comprises:
performing face detection on the portrait area of the image to be restored and on a preset user portrait;
when the similarity between the face in the image to be restored and the face of the preset user is greater than or equal to a first preset similarity, taking the preset user portrait as the reference image; and
when the similarity between the face in the image to be restored and the face of the preset user is less than the first preset similarity, acquiring a preset standard portrait as the reference image.
11. The image processing method according to claim 9, wherein the performing of portrait hyper-resolution algorithm processing on the image to be restored according to the reference image to obtain a repaired image comprises:
acquiring a first feature map of the up-sampled image to be restored;
acquiring a second feature map of the reference image after up-sampling and down-sampling;
acquiring a third feature map of the reference image;
acquiring, as a reference feature, a feature in the second feature map whose similarity with the first feature map exceeds a second preset similarity;
acquiring, from the third feature map, the feature whose similarity with the reference feature exceeds a third preset similarity, to obtain an exchange feature map;
merging the exchange feature map and the first feature map to obtain a fourth feature map;
amplifying the fourth feature map by a preset multiple to obtain a fifth feature map; and
taking the fifth feature map as the image to be restored and performing the above steps in a loop until the magnification of the obtained fifth feature map reaches the target magnification, and taking the fifth feature map at the target magnification as the repaired image.
12. An image processing apparatus for an electronic device, comprising:
an acquisition module, configured to acquire an image to be restored having a portrait of a target user;
a detection module, configured to detect a relative movement condition between the target user and the electronic device; and
a repairing module, configured to perform repairing processing on the portrait area of the image to be restored when the relative movement condition meets a preset condition.
13. An electronic device, comprising:
a housing; and
a processor mounted on the housing, the processor being configured to implement the image processing method of any of claims 1-11.
14. A non-transitory computer readable storage medium containing computer readable instructions which, when executed by a processor, cause the processor to perform the image processing method of any one of claims 1 to 11.
CN201911241409.1A 2019-12-06 2019-12-06 Image processing method and device, electronic equipment and computer readable storage medium Active CN111127345B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911241409.1A CN111127345B (en) 2019-12-06 2019-12-06 Image processing method and device, electronic equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911241409.1A CN111127345B (en) 2019-12-06 2019-12-06 Image processing method and device, electronic equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111127345A true CN111127345A (en) 2020-05-08
CN111127345B CN111127345B (en) 2024-02-02

Family

ID=70496293

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911241409.1A Active CN111127345B (en) 2019-12-06 2019-12-06 Image processing method and device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111127345B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109190520A (en) * 2018-08-16 2019-01-11 广州视源电子科技股份有限公司 A kind of super-resolution rebuilding facial image method and device
CN110008817A (en) * 2019-01-29 2019-07-12 北京奇艺世纪科技有限公司 Model training, image processing method, device, electronic equipment and computer readable storage medium
CN110070487A (en) * 2019-04-02 2019-07-30 清华大学 Semantics Reconstruction face oversubscription method and device based on deeply study
CN110222668A (en) * 2019-06-17 2019-09-10 苏州大学 Based on the multi-pose human facial expression recognition method for generating confrontation network
CN110310247A (en) * 2019-07-05 2019-10-08 Oppo广东移动通信有限公司 Image processing method, device, terminal and computer readable storage medium
CN110475067A (en) * 2019-08-26 2019-11-19 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment, computer readable storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
DUAN, Lijuan et al.: "Image super-resolution algorithm based on a deep residual network in the wavelet domain" (in Chinese) *
WANG, Bingfu: "Automatic eyeglasses removal based on a skip-connection deconvolutional neural network" (in Chinese) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111724329A (en) * 2020-07-03 2020-09-29 北京字节跳动网络技术有限公司 Image processing method and device and electronic equipment
CN111724329B (en) * 2020-07-03 2022-03-01 北京字节跳动网络技术有限公司 Image processing method and device and electronic equipment

Also Published As

Publication number Publication date
CN111127345B (en) 2024-02-02

Similar Documents

Publication Publication Date Title
JP6411505B2 (en) Method and apparatus for generating an omnifocal image
KR102463101B1 (en) Image processing method and apparatus, electronic device and storage medium
US11488293B1 (en) Method for processing images and electronic device
CN111062981B (en) Image processing method, device and storage medium
CN112602088B (en) Method, system and computer readable medium for improving quality of low light images
CN111833461B (en) Method and device for realizing special effect of image, electronic equipment and storage medium
WO2011077659A1 (en) Image processing device, imaging device, and image processing method
CN110858316A (en) Classifying time series image data
WO2024021742A9 (en) Fixation point estimation method and related device
US20200082197A1 (en) Information processing method, information processing device, and recording medium
CN111127345B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN112714263B (en) Video generation method, device, equipment and storage medium
CN113658065A (en) Image noise reduction method and device, computer readable medium and electronic equipment
CN115908120B (en) Image processing method and electronic device
CN115115552B (en) Image correction model training method, image correction device and computer equipment
KR20130098675A (en) Face detection processing circuit and image pick-up device including the same
CN112099712B (en) Face image display method and device, electronic equipment and storage medium
CN115049572A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN111080543B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN111126568B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN108109107B (en) Video data processing method and device and computing equipment
JP6063680B2 (en) Image generation apparatus, image generation method, imaging apparatus, and imaging method
CN115082496A (en) Image segmentation method and device
CN107977644B (en) Image data processing method and device based on image acquisition equipment and computing equipment
CN112106352A (en) Image processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant