CN111031241B - Image processing method and device, terminal and computer readable storage medium

Image processing method and device, terminal and computer readable storage medium

Info

Publication number: CN111031241B
Application number: CN201911252850.XA
Authority: CN (China)
Prior art keywords: image, shot, striped, face, camera
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN111031241A (en)
Inventor: 欧阳丹
Current Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201911252850.XA
Publication of CN111031241A
Application granted
Publication of CN111031241B

Classifications

    • H ELECTRICITY > H04 ELECTRIC COMMUNICATION TECHNIQUE > H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/62 Control of parameters via user interfaces
    • H04N 23/61 Control of cameras or camera modules based on recognised objects
    • H04N 23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H04N 23/80 Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses an image processing method, an image processing apparatus, a terminal and a computer-readable storage medium. The image processing method comprises: acquiring an original image shot by a first camera, the first camera being arranged below a screen; carrying out de-striping processing on the original image to obtain a de-striped image; when a face exists in the de-striped image, finding out, from a face image library, a shot image whose similarity with the face in the de-striped image is greater than a first preset similarity as a reference image, wherein the shot image is shot by a second camera and the definition of the shot image is higher than that of the de-striped image; and processing the de-striped image according to the reference image to obtain a repaired image. The image processing method, the image processing apparatus, the terminal and the computer-readable storage medium disclosed by the application process the de-striped image through the reference image to obtain the repaired image, so that the definition of the repaired image is improved.

Description

Image processing method and device, terminal and computer readable storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, a terminal, and a computer-readable storage medium.
Background
To increase the screen-to-body ratio, some terminals arrange a camera below the screen. The amount of light entering an under-screen camera is smaller than that entering an on-screen camera, and the display content of the screen and/or the circuit structure on the screen affect the captured image, so the quality of the finally captured image is not high.
Disclosure of Invention
Embodiments of the present application provide an image processing method, an image processing apparatus, a terminal, and a computer-readable storage medium.
The image processing method of the embodiment of the application comprises the following steps: acquiring an original image shot by a first camera, wherein the first camera is arranged below a screen; carrying out de-striping processing on the original image to obtain a de-striped image; when a face exists in the de-striped image, finding out a shot image with the similarity greater than a first preset similarity with the face in the de-striped image from a face image library as a reference image, wherein the shot image is shot by a second camera, and the definition of the shot image is higher than that of the de-striped image; and processing the de-striped image according to the reference image to obtain a repaired image.
An image processing apparatus according to an embodiment of the present application includes a first acquisition module, a first processing module, a determination module and a second processing module. The first acquisition module is used for acquiring an original image shot by a first camera, the first camera being arranged below the screen. The first processing module is used for carrying out de-striping processing on the original image to obtain a de-striped image. The determining module is used for finding out, from a face image library, a shot image whose similarity with the face in the de-striped image is greater than a first preset similarity as a reference image when a face exists in the de-striped image. The shot image is shot by a second camera, and the definition of the shot image is higher than that of the de-striped image. The second processing module is used for processing the de-striped image according to the reference image to obtain a repaired image.
The terminal of the embodiment of the application comprises a housing and a processor mounted on the housing. The processor is configured to: acquire an original image shot by a first camera, the first camera being arranged below a screen; carry out de-striping processing on the original image to obtain a de-striped image; when a face exists in the de-striped image, find out, from a face image library, a shot image whose similarity with the face in the de-striped image is greater than a first preset similarity as a reference image, wherein the shot image is shot by a second camera and the definition of the shot image is higher than that of the de-striped image; and process the de-striped image according to the reference image to obtain a repaired image.
A computer-readable storage medium of an embodiment of the present application has stored thereon a computer program that, when executed by a processor, implements: acquiring an original image shot by a first camera disposed below a screen; carrying out de-striping processing on the original image to obtain a de-striped image; when a face exists in the de-striped image, finding out, from a face image library, a shot image whose similarity with the face in the de-striped image is greater than a first preset similarity as a reference image, wherein the shot image is shot by a second camera and the definition of the shot image is higher than that of the de-striped image; and processing the de-striped image according to the reference image to obtain a repaired image.
According to the image processing method, the image processing apparatus, the terminal and the computer-readable storage medium, the original image shot by the first camera is subjected to de-striping processing to obtain the de-striped image, so that the influence of the display stripes of the terminal screen on the image is removed; then, an image of the same person as the face in the de-striped image is acquired from the face image library as a reference image, and the de-striped image is processed through the reference image to obtain a repaired image.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application.
FIG. 2 is a schematic diagram of an image processing apparatus according to some embodiments of the present application.
Fig. 3 is a front view and a side view of a terminal according to some embodiments of the present application.
FIG. 4 is a flow chart illustrating an image processing method according to some embodiments of the present application.
FIG. 5 is a schematic diagram of an image processing apparatus according to some embodiments of the present application.
Fig. 6 and 7 are schematic flow charts of image processing methods according to some embodiments of the present disclosure.
FIG. 8 is a schematic diagram of a second processing module of an image processing apparatus according to some embodiments of the present application.
Fig. 9 and 10 are schematic flow diagrams of image processing methods according to some embodiments of the present application.
FIG. 11 is a schematic diagram of an image processing apparatus according to some embodiments of the present application.
FIG. 12 is a flow chart illustrating an image processing method according to some embodiments of the present application.
FIG. 13 is a schematic diagram of a setup module of an image processing apparatus according to some embodiments of the present application.
FIG. 14 is a flow chart illustrating an image processing method according to some embodiments of the present application.
FIG. 15 is a schematic diagram of a seventh acquisition unit of the setup module of certain embodiments of the present application.
FIG. 16 is a flow chart illustrating an image processing method according to some embodiments of the present application.
FIG. 17 is a schematic diagram of an image processing system according to some embodiments of the present application.
Fig. 18 is a schematic diagram of a cloud in accordance with some embodiments of the present application.
FIG. 19 is a schematic diagram of an image processing apparatus according to some embodiments of the present application.
FIG. 20 is a flow chart illustrating an image processing method according to some embodiments of the present application.
FIG. 21 is a schematic diagram of an image processing apparatus according to some embodiments of the present application.
FIG. 22 is a flow chart illustrating an image processing method according to some embodiments of the present application.
FIG. 23 is a schematic diagram of an image processing apparatus according to some embodiments of the present application.
FIG. 24 is a flow chart illustrating an image processing method according to some embodiments of the present application.
FIG. 25 is a schematic diagram of an image processing apparatus according to some embodiments of the present application.
FIG. 26 is a flow chart illustrating an image processing method according to some embodiments of the present application.
FIG. 27 is a schematic view of a second processing unit of a third processing module of certain embodiments of the present application.
FIG. 28 is a scene schematic of a de-striped image according to some embodiments of the present application.
FIG. 29 is a schematic view of a scene of an image in a face image library according to some embodiments of the present application.
FIG. 30 is a schematic view of a scene of a first image, a second image, and a stripe image according to some embodiments of the present disclosure.
FIG. 31 is a schematic view of a scene of an original image, a stripe image, and a de-striped image according to some embodiments of the present disclosure.
FIG. 32 is a schematic view of a scene of a captured image according to some embodiments of the present application.
FIG. 33 is a schematic view of a scene of an image in a face image library and a captured image according to some embodiments of the present application.
Fig. 34 and 35 are schematic views of a scene of a reference image according to some embodiments of the present application.
Fig. 36 is a schematic diagram of a connection between a computer-readable storage medium and a terminal according to some embodiments of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
Referring to fig. 1 and fig. 3 together, an image processing method according to an embodiment of the present disclosure includes:
01, acquiring an original image shot by a first camera 221, wherein the first camera 221 is arranged below a screen 240;
02, carrying out de-striping processing on the original image to obtain a de-striped image;
03, judging whether a face exists in the de-striped image;
04, when a face exists in the de-striped image, finding out, from the face image library, a shot image whose similarity with the face in the de-striped image is greater than or equal to a first preset similarity as a reference image, wherein the shot image is shot by a second camera 222 and the definition of the shot image is higher than that of the de-striped image; and
05, processing the de-striped image according to the reference image to obtain a repaired image.
Referring to fig. 2, an image processing apparatus 100 according to an embodiment of the present disclosure includes a first obtaining module 11, a first processing module 12, a first judging module 13, a determining module 14, and a second processing module 15. The image processing apparatus 100 may be configured to implement the image processing method according to the embodiment of the present disclosure: step 01 may be performed by the first obtaining module 11, step 02 by the first processing module 12, step 03 by the first judging module 13, step 04 by the determining module 14, and step 05 by the second processing module 15. That is, the first obtaining module 11 may be configured to obtain an original image captured by the first camera 221, where the first camera 221 is disposed below the screen 240; the first processing module 12 may be configured to perform de-striping processing on the original image to obtain a de-striped image; the first judging module 13 is configured to judge whether a face exists in the de-striped image; when a face exists in the de-striped image, the determining module 14 finds out, from the face image library, a shot image whose similarity with the face in the de-striped image is greater than or equal to a first preset similarity as a reference image, the shot image being shot by the second camera 222 and having a definition higher than that of the de-striped image; and the second processing module 15 is configured to process the de-striped image according to the reference image to obtain a repaired image.
Referring to fig. 1 and 3 together, a terminal 200 according to an embodiment of the present disclosure includes a housing 210, an imaging device 220, and a processor 230, wherein the imaging device 220 and the processor 230 are mounted on the housing 210. The imaging device 220 includes a first camera 221 and a second camera 222. The processor 230 may be configured to implement the image processing method according to the embodiment of the present application, and steps 01, 02, 03, 04, and 05 may all be implemented by the processor 230. That is, the processor 230 may be configured to: acquire an original image shot by the first camera 221, the first camera 221 being arranged below the screen 240; perform de-striping processing on the original image to obtain a de-striped image; judge whether a face exists in the de-striped image; when a face exists in the de-striped image, find out, from the face image library, a shot image whose similarity with the face in the de-striped image is greater than or equal to a first preset similarity as a reference image, the shot image being shot by the second camera 222 and having a definition higher than that of the de-striped image; and process the de-striped image according to the reference image to obtain a repaired image.
According to the image processing method, the image processing apparatus 100 and the terminal 200 of the embodiment of the application, the original image shot by the first camera 221 is subjected to de-striping processing to obtain the de-striped image, so that the influence of the display stripes of the screen 240 of the terminal 200 on the image is removed; then, an image of the same person as the face in the de-striped image is acquired from the face image library as a reference image, and the de-striped image is processed through the reference image to obtain a repaired image.
Specifically, acquiring the original image captured by the first camera 221 may include: acquiring, from an album, an image captured by the first camera 221 and stored in the album; the subsequent processing of the original image to obtain a restored image is then a post-processing of the image in the album, with the aim of improving the quality of the images stored in the album. Acquiring the original image captured by the first camera 221 may further include: acquiring the image currently captured by the first camera 221; the subsequent processing of the original image is then a processing of the image during capturing by the first camera 221, and the output of the capture is the processed restored image.
The terminal 200 may further include a screen 240 (display screen) and a rear cover 250. The first camera 221 is hidden under the screen 240 and is not perceived by the user, so that a full-screen effect of the screen 240 is realized. The second camera 222 may be provided on the side where the rear cover 250 is located. That is, the first camera 221 may be a front-facing under-screen camera, and the second camera 222 a rear-mounted on-screen camera. Generally, a circuit structure is arranged on the screen, and a display pattern may be shown on the screen during normal use of the terminal; when the under-screen camera is used for shooting, the display pattern and the circuit structure appear as stripes on the original image, so that the quality of the shot image is low. In the image processing method, the image processing apparatus 100 and the terminal 200 of the present application, the original image captured by the first camera 221 is therefore first subjected to de-striping processing, so that the influence of the display pattern and/or the circuit structure of the screen 240 on the imaging is eliminated to a certain extent.
The images shot by the second camera 222 are stored in the face image library. Because the second camera 222 is a rear-mounted on-screen camera, its hardware configuration is usually much higher than that of the front-facing first camera 221, and its shot images are not affected by the screen 240, so their definition is far higher than that of the original image and higher than that of the de-striped image. Face detection is performed on the de-striped image to judge whether a face exists in it; when a face exists, a shot image whose similarity with the face in the de-striped image is greater than or equal to the first preset similarity is searched for in the face image library and used as the reference image. That is, an image of the same person as the person in the de-striped image is found in the face image library, and this image is used as the reference image. For example, the face in the de-striped image and the face in one image of the face image library are detected and their face features are compared, where the face features include at least one of facial features, skin features and hair features. If the similarity between the face in the de-striped image and the face in the library image is greater than or equal to the first preset similarity, it may be determined that the person in the library image is the same person as the person in the de-striped image, and the library image may be used as a reference image. If the similarity is smaller than the first preset similarity, it is determined that the two are not the same person, and the library image is discarded. The larger the value of the first preset similarity, the more similar the face in the reference image and the face in the de-striped image are, and the more reliably the two can be taken to be the same person. The smaller the value of the first preset similarity, the smaller the comparison workload and the higher the comparison speed. In this embodiment, the first preset similarity may range from 70% to 100%; for example, it may be 70%, 71%, 75%, 80%, 85%, 89%, 90%, 91%, 92%, 94%, 95%, 98%, 99%, 100%, and so on. When the first preset similarity is within this range, the comparison accuracy can be guaranteed while the comparison speed remains high, which further increases the overall speed of image processing.
When multiple reference images are acquired, the de-striped image can be processed according to the reference image with the highest similarity to the de-striped image among them, so as to obtain the repaired image. Taking the first preset similarity as 85% and a face image library containing 3 images as an example, please refer to fig. 28 and 29 together: fig. 28 is the de-striped image; the similarity between the face in the first image (left image in fig. 29) and the face in the de-striped image is 70%, the similarity for the second image (middle image in fig. 29) is 89%, and the similarity for the third image (right image in fig. 29) is 98%. The second and third images are taken as reference images. When the de-striped image is processed according to a reference image, the reference image with the highest similarity (i.e. the third image) can be selected to process the de-striped image to obtain the repaired image.
When only one reference image is acquired, the de-striped image is processed directly according to that reference image to obtain the repaired image.
In some embodiments, if the similarity between the face in every image in the face image library and the face in the de-striped image is smaller than the first preset similarity, that is, the number of reference images is zero, the de-striped image is not processed. In other embodiments, when the number of reference images is zero, a preset standard portrait may be acquired to process the de-striped image to obtain the repaired image. The preset standard portrait can be a high-definition poster or the like.
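To make the selection logic concrete, the following Python sketch (not part of the patent disclosure; the face-feature vectors, the cosine-similarity stand-in for face comparison, and all names are assumptions) illustrates how a reference image could be chosen from the face image library, including the fallbacks for zero reference images:

    import numpy as np

    FIRST_PRESET_SIMILARITY = 0.85  # any value in the 70%-100% range

    def face_similarity(feat_a, feat_b):
        # Stand-in for a real comparison of face features (facial
        # features, skin features, hair features): cosine similarity
        # of two hypothetical face-feature vectors.
        a = np.asarray(feat_a, dtype=float)
        b = np.asarray(feat_b, dtype=float)
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def select_reference(face_feat, library_feats, standard_portrait=None):
        # Score every library image against the face in the de-striped image.
        scored = [(face_similarity(face_feat, f), i)
                  for i, f in enumerate(library_feats)]
        refs = [(s, i) for s, i in scored if s >= FIRST_PRESET_SIMILARITY]
        if not refs:
            # Zero reference images: skip processing, or fall back to a
            # preset standard portrait (e.g. a high-definition poster).
            return standard_portrait
        # Several reference images: return the index of the library image
        # with the highest similarity to the face in the de-striped image.
        return max(refs)[1]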
In other embodiments, the first camera 221 may be a rear-mounted under-screen camera, and in this case the second camera 222 may be a rear-mounted on-screen camera. Alternatively, the first camera 221 may be a front-facing under-screen camera, in which case the second camera 222 is a front-facing on-screen camera. Alternatively, the first camera 221 may be a rear-mounted under-screen camera while the second camera 222 is a front-facing on-screen camera, and so on; these combinations are not listed exhaustively here.
Referring to fig. 3 and 4 together, in some embodiments, the terminal 200 stores therein calibration data including a stripe image, and the step 02 includes:
021, obtaining a de-striped image according to the original image and the stripe image.
Referring to fig. 4 and 5, in some embodiments, the first processing module 12 can include a first processing unit 121, wherein step 021 can be executed by the first processing unit 121. That is, the first processing unit 121 is configured to obtain a de-striped image according to the original image and the stripe image.
Referring to fig. 3 and fig. 4, in some embodiments, step 021 may be implemented by the processor 230; that is, the processor 230 may be configured to obtain a de-striped image according to the original image and the stripe image.
Specifically, the calibration data is obtained through a calibration operation performed on the camera during production of the terminal 200. Since the first camera 221 is an under-screen camera, when the first camera 221 is calibrated, the calibration data includes a stripe image formed by the display pattern on the screen of the terminal 200.
More specifically, referring to fig. 3 and 6 together, in some embodiments, the specific manner of obtaining the calibration data (stripe image) includes:
001, when light directly enters the first camera 221, acquiring a first image captured by the first camera 221;
002, when light enters the first camera 221 after passing through the screen 240, acquiring a second image captured by the first camera 221;
003, obtaining a stripe image corresponding to the screen 240 according to the first image and the second image, and storing the stripe image as calibration data; and
004, subtracting the stripe image from the original image to obtain a de-striped image.
Step 001, step 002, step 003 and step 004 are all operations performed on the terminal 200 during production. In a manufacturing process of the terminal 200 (for example, the assembling process of the first camera 221), after the first camera 221 is mounted on the housing 210 of the terminal 200 but before the screen 240 is mounted, a first image photographed by the first camera 221 in this state is acquired; at this time, light directly enters the first camera 221, so the first image is a stripe-free image. When the screen 240 is then mounted on the housing 210 without changing other hardware conditions, the first camera 221 is located below the screen 240, and light needs to pass through the screen 240 before entering the first camera 221 to obtain a second image; the second image therefore includes display stripes reflecting the display pattern and/or the circuit structure of the screen 240. It should be noted that the shooting scenes of the first image and the second image are the same, so as to achieve a better calibration effect. The first image is subtracted from the second image to obtain a stripe image, which can be stored in the terminal 200 as calibration data.
Specifically, referring to fig. 30, fig. 30 shows, in sequence, a first image (left), a second image (middle) and a stripe image (right). As can be seen, the first image includes only the portrait, while the second image includes both the portrait and the display stripes reflecting the display pattern and/or the circuit structure of the screen 240. Since the scene (i.e., the portrait) in the second image is identical to that in the first image, subtracting the first image from the second image leaves only the stripe information, and the resulting stripe image is stored in the storage unit of the terminal 200 or in the image processing apparatus 100.
More specifically, referring to fig. 3 and 31 together, fig. 31 includes an original image (left of fig. 31) captured by the first camera 221 and the stripe image (middle of fig. 31). The stripe image is subtracted from the original image to obtain the de-striped image (right of fig. 31).
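As a rough numerical illustration of this calibration and subtraction (assuming 8-bit grayscale images held as NumPy arrays; the function names are illustrative, not from the patent):

    import numpy as np

    def calibrate_stripe_image(first_image, second_image):
        # Same scene shot once without and once through the screen;
        # subtracting the first image from the second leaves only the
        # stripe information, which is stored as calibration data.
        diff = second_image.astype(np.int16) - first_image.astype(np.int16)
        return np.clip(diff, 0, 255).astype(np.uint8)

    def de_stripe(original_image, stripe_image):
        # Subtract the stored stripe image from the original image
        # captured by the under-screen camera to get the de-striped image.
        diff = original_image.astype(np.int16) - stripe_image.astype(np.int16)
        return np.clip(diff, 0, 255).astype(np.uint8)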
In some embodiments, processing the de-striped image based on the reference image may include applying a super-resolution algorithm to the de-striped image. Specifically, referring to fig. 7, step 05 includes:
051, acquiring a first feature map of the de-striped image after upsampling;
052, acquiring a second feature map of the reference image after upsampling and downsampling;
053, acquiring a third feature map of the reference image without upsampling and downsampling;
054, acquiring, as reference features, features in the second feature map whose similarity with the first feature map exceeds a second preset similarity;
055, acquiring features in the third feature map whose similarity with the reference features exceeds a third preset similarity to obtain an exchange feature map;
056, merging the exchange feature map and the first feature map to obtain a fourth feature map;
057, amplifying the fourth feature map by a preset multiple to obtain a fifth feature map; and
058, taking the fifth feature map as the de-striped image and executing the above steps in a loop until the obtained fifth feature map reaches the target magnification, the fifth feature map with the target magnification being the repaired image.
Referring to fig. 7 and 8, in some embodiments, the second processing module 15 includes: a first acquisition unit 151, a second acquisition unit 152, a third acquisition unit 153, a fourth acquisition unit 154, a fifth acquisition unit 155, a merging unit 156, an amplification unit 157, and a second processing unit 158. Step 051 may be performed by the first acquisition unit 151; step 052 by the second acquisition unit 152; step 053 by the third acquisition unit 153; step 054 by the fourth acquisition unit 154; step 055 by the fifth acquisition unit 155; step 056 by the merging unit 156; step 057 by the amplification unit 157; and step 058 by the second processing unit 158. That is, the first acquisition unit 151 may be configured to acquire a first feature map of the de-striped image after upsampling; the second acquisition unit 152 may be configured to acquire a second feature map of the reference image after upsampling and downsampling; the third acquisition unit 153 may be configured to acquire a third feature map of the reference image without upsampling and downsampling; the fourth acquisition unit 154 may be configured to acquire, as reference features, features in the second feature map whose similarity with the first feature map exceeds a second preset similarity; the fifth acquisition unit 155 may be configured to acquire features in the third feature map whose similarity with the reference features exceeds a third preset similarity, so as to obtain an exchange feature map; the merging unit 156 may be configured to merge the exchange feature map and the first feature map to obtain a fourth feature map; the amplification unit 157 may be configured to amplify the fourth feature map by a preset multiple to obtain a fifth feature map; and the second processing unit 158 is configured to take the fifth feature map as the de-striped image and execute the above steps in a loop until the obtained fifth feature map reaches the target magnification, the fifth feature map with the target magnification being the repaired image.
Referring to fig. 3 and 7, in some embodiments, steps 051 to 058 may be implemented by the processor 230; that is, the processor 230 may be configured to: acquire a first feature map of the de-striped image after upsampling; acquire a second feature map of the reference image after upsampling and downsampling; acquire a third feature map of the reference image without upsampling and downsampling; acquire, as reference features, features in the second feature map whose similarity with the first feature map exceeds a second preset similarity; acquire features in the third feature map whose similarity with the reference features exceeds a third preset similarity to obtain an exchange feature map; merge the exchange feature map and the first feature map to obtain a fourth feature map; amplify the fourth feature map by a preset multiple to obtain a fifth feature map; and take the fifth feature map as the de-striped image and execute the above steps in a loop until the obtained fifth feature map reaches the target magnification, the fifth feature map with the target magnification being the repaired image.
Specifically, the up-sampling may be understood as performing an enlargement process on the image to be restored or the reference image, and the down-sampling may be understood as performing a reduction process on the reference image.
More specifically, referring to fig. 9, step 051 includes:
0511, upsampling the de-striped image;
0512, inputting the upsampled de-striped image into a convolutional neural network for feature extraction to obtain the first feature map;
step 052 includes:
0521, downsampling the reference image;
0522, upsampling the downsampled reference image;
0523, inputting the upsampled reference image into the convolutional neural network for feature extraction to obtain the second feature map;
step 053 includes:
0531, inputting the reference image into the convolutional neural network for feature extraction to obtain the third feature map.
The de-striped image is first upsampled (enlarged), and the upsampled de-striped image is input into a convolutional neural network for feature extraction to obtain the first feature map. The first feature map can be understood as an image obtained by enlarging the portrait area in the de-striped image, and it includes the various features of the portrait, such as the five sense organs, skin, hair, contour and the like. Because the first feature map is a low-definition feature map obtained by directly enlarging the de-striped image, while the definition of the reference image is relatively high, the reference image needs to be downsampled (reduced) first and the downsampled image then upsampled, so as to blur the reference image and thereby improve the similarity between the second feature map and the first feature map. Features such as the five sense organs, skin, hair and contour may also be included in the second feature map. The reference image is directly input into the convolutional neural network for feature extraction to obtain the third feature map. It should be noted that the convolutional neural network is a network trained by deep learning and can perform feature extraction with high accuracy on the input image.
More specifically, the features in the second feature map are compared with the features in the first feature map and their similarity is determined; the similarity is compared with the second preset similarity, and if it is greater than or equal to the second preset similarity, the feature in the second feature map is similar to the corresponding feature in the first feature map, so the feature in the second feature map can be used as a reference feature. The third feature map is then compared with the reference features, their similarity is determined and compared with the third preset similarity, and if it is greater than or equal to the third preset similarity, the corresponding exchange feature map is obtained. The exchange feature map and the first feature map are merged to obtain the fourth feature map, and the fourth feature map is amplified by the preset multiple to obtain the fifth feature map. The magnification of the fifth feature map is then judged, and if it is equal to the target magnification, the fifth feature map is used as the repaired image. It should be noted that the second preset similarity and the third preset similarity may be set in the same way as the first preset similarity, which is not repeated herein.
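The patent performs this matching on deep convolutional feature maps; the toy Python sketch below (an assumption-laden illustration, not the patented implementation) replaces CNN features with raw pixel patches and matches only co-located patches, but follows the same step order: upsample, blur the reference into the same low-definition domain, swap in sufficiently similar reference features, merge, and repeat until the target magnification is reached:

    import cv2
    import numpy as np

    K = 8  # patch size standing in for a feature-map cell

    def cos_sim(a, b):
        a = a.ravel().astype(float)
        b = b.ravel().astype(float)
        d = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / d) if d else 0.0

    def one_round(de_striped, reference, step=2, sim2=0.9, sim3=0.9):
        # Step 051: upsampled de-striped image as the "first feature map".
        first = cv2.resize(de_striped, None, fx=step, fy=step,
                           interpolation=cv2.INTER_CUBIC)
        ref = cv2.resize(reference, (first.shape[1], first.shape[0]))
        # Step 052: reference downsampled then upsampled (blurred) so it
        # matches the low-definition domain of `first` -> second map.
        second = cv2.resize(cv2.resize(ref, None, fx=0.5, fy=0.5),
                            (first.shape[1], first.shape[0]))
        third = ref  # step 053: reference without down/upsampling
        swapped = first.copy()
        h, w = first.shape[:2]
        for y in range(0, h - K + 1, K):
            for x in range(0, w - K + 1, K):
                p1 = first[y:y+K, x:x+K]
                p2 = second[y:y+K, x:x+K]
                p3 = third[y:y+K, x:x+K]
                # Steps 054/055: take the sharp detail from the third map
                # where the blurred reference matches the first map.
                if cos_sim(p1, p2) >= sim2 and cos_sim(p2, p3) >= sim3:
                    swapped[y:y+K, x:x+K] = p3
        # Steps 056/057: merge the exchange map with the first map; the
        # result of this round is already magnified by `step`.
        merged = (first.astype(np.float32) + swapped.astype(np.float32)) / 2
        return merged.astype(first.dtype)

    def repair(de_striped, reference, target_scale=4, step=2):
        # Step 058: loop until the target magnification is reached.
        out, scale = de_striped, 1
        while scale < target_scale:
            out = one_round(out, reference, step)
            scale *= step
        return out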
Referring to fig. 10, in some embodiments, the image processing method further includes:
06, establishing a face image library.
Referring to fig. 10 and fig. 11, in some embodiments, the image processing apparatus 100 further includes a building module 16, and step 06 can be performed by the building module 16. That is, the building module 16 may be used to build a face image library.
Referring to fig. 3 and 10, in some embodiments, step 06 may be implemented by the processor 230; that is, the processor 230 may be configured to establish a face image library.
Specifically, the face image library may include a plurality of images, each of which includes a face. The images in the face image library may include face images of a plurality of users. Taking the terminal 200 as a mobile phone as an example, the owner may store a face image of himself or herself in the face image library, or store face images of the owner's family members or friends in the face image library. For each person, either a single image or a plurality of images may be stored in the face image library.
Referring to fig. 3 and 12, in some embodiments, step 06 includes:
061, acquiring a plurality of shot images shot by the second camera 222;
062, obtaining the definition of each shot image;
063, storing the shot image with the definition greater than or equal to the preset definition threshold value to generate a face image library.
Referring to fig. 3, 12 and 13, in some embodiments, the establishing module 16 includes a sixth obtaining unit 161, a seventh obtaining unit 162 and a storage unit 163, wherein step 061 may be executed by the sixth obtaining unit 161; step 062 may be performed by the seventh obtaining unit 162; step 063 can be performed by storage unit 163. That is, the sixth acquiring unit 161 may be configured to acquire a plurality of captured images captured by the second camera 222; the seventh acquiring unit 162 may be configured to acquire the sharpness of each captured image; the storage unit 163 may be configured to store the captured image with the sharpness greater than or equal to a preset sharpness threshold to generate a face image library.
In some embodiments, step 061, step 062, and step 063 can be implemented by the processor 230; that is, the processor 230 can be configured to: acquire a plurality of captured images captured by the second camera 222; acquire the definition of each captured image; and store the captured images whose definition is greater than or equal to a preset definition threshold to generate the face image library.
Specifically, referring to fig. 14, step 062 may include:
0621, performing shaping low-pass filtering on the captured image to obtain a filtered image;
0622, acquiring high-frequency information in the shot image according to the shot image and the filtered image, wherein the high-frequency information refers to a part far away from zero frequency in the discrete cosine transform coefficient, and the part is used for describing detail information of the shot image;
0623, the sharpness of the photographed image is obtained based on the number of pixels of the high frequency information and the number of all pixels of the photographed image.
Referring to fig. 14 and 15, in some embodiments, the seventh obtaining unit 162 includes a first obtaining subunit 1621, a second obtaining subunit 1622, and a third obtaining subunit 1623. Wherein step 0621 may be performed by the first acquisition subunit 1621; step 0622 may be performed by the second acquisition subunit 1622; step 0623 may be performed by the third acquisition subunit 1623. That is, the first acquisition subunit 1621 may be configured to perform shaping low-pass filtering on the captured image to acquire a filtered image; the second obtaining subunit 1622 may be configured to obtain high-frequency information in the captured image according to the captured image and the filtered image, where the high-frequency information is a part of the discrete cosine transform coefficient far from zero frequency, and the part is used to describe detail information of the captured image; the third acquiring subunit 1623 may be configured to acquire the sharpness of the captured image based on the number of pixels of the high-frequency information and the number of all pixels of the captured image.
Referring to fig. 3 and 14, in some embodiments, step 0621, step 0622, and step 0623 may be implemented by the processor 230; that is, the processor 230 may be configured to: perform shaping low-pass filtering on the captured image to obtain a filtered image; acquire high-frequency information in the captured image according to the captured image and the filtered image, where the high-frequency information is the part of the discrete cosine transform coefficients far from zero frequency, which describes the detail information of the captured image; and acquire the sharpness of the captured image based on the number of pixels of the high-frequency information and the number of all pixels of the captured image.
Specifically, after a shot image is obtained, shaping low-pass filtering processing is performed on the shot image to obtain a filtered image, and then the filtered image is subtracted from the shot image to obtain high-frequency information in the shot image, wherein the high-frequency information refers to a part far away from zero frequency in a discrete cosine transform coefficient and is used for describing detail information of the shot image; after the high-frequency information is obtained, the number of pixels of the high-frequency information can be counted, and the clearer the shot image is, the larger the number of pixels of the high-frequency information is.
The sharpness (definition) of an image can be characterized by the proportion of high-frequency-information pixels among all pixels of the image: the higher the proportion, the higher the sharpness. For example, if the number of high-frequency-information pixels in a captured image is 20% of the number of all its pixels, the sharpness of that captured image is represented as 20%. Each sharpness value thus corresponds to a number of high-frequency-information pixels.
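A minimal sketch of this sharpness measure in Python (assuming a grayscale NumPy image; the Gaussian blur standing in for the shaping low-pass filter and the high-frequency pixel threshold are assumptions):

    import cv2
    import numpy as np

    def sharpness(captured, hf_threshold=10):
        # Low-pass filter the captured image to get the filtered image.
        filtered = cv2.GaussianBlur(captured, (9, 9), 0)
        # High-frequency information = captured image minus filtered image;
        # it describes the detail information of the captured image.
        high_freq = cv2.absdiff(captured, filtered)
        # Sharpness = pixels carrying high-frequency information divided
        # by all pixels; e.g. 0.20 corresponds to the 20% example above.
        return np.count_nonzero(high_freq > hf_threshold) / high_freq.size

An image would then be admitted into the face image library when this ratio reaches the preset sharpness threshold (20% in the example that follows).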
Referring to fig. 32, fig. 32 shows 8 captured images. Taking the preset sharpness threshold as 20% as an example, the sharpness values of the 8 captured images obtained in the above manner are: 23% for the 1st captured image, 30% for the 2nd, 24% for the 3rd, 27% for the 4th, 18% for the 5th, 14% for the 6th, 6% for the 7th, and 25% for the 8th. Because the sharpness of the 1st, 2nd, 3rd, 4th and 8th captured images is greater than the preset sharpness threshold, these captured images are stored to generate the face image library. Since the face image library stores only images of higher sharpness, the sharpness of a reference image subsequently obtained by face recognition from the face image library is higher, which improves the image processing effect.
In some embodiments, only one face image per person is stored in the face image library. When the second camera 222 first shoots user A and obtains a first captured image (containing A's portrait) whose sharpness is greater than the preset sharpness threshold, the first captured image can be stored in the face image library. When the second camera 222 later shoots user A again and obtains a second captured image (also containing A's portrait), the sharpness of the first and second captured images is compared: if the sharpness of the second captured image is higher than that of the first, the second captured image is stored in the face image library and the first is deleted; if the sharpness of the second captured image is lower than that of the first, the first captured image remains stored in the face image library. Taking the owner as an example, please refer to fig. 33: the face image library stores the first image (left in fig. 33), a face image of the owner, in advance; the second image (right in fig. 33) is another face image of the owner captured in a subsequent shooting. Comparing the sharpness of the two images (for example, 21% for the first image and 26% for the second, obtained in the manner described above), the second image is sharper and therefore replaces the first image in the face image library. By comparing newly captured images with the face images in the face image library, the sharpness of the face images in the library always remains the highest available, which improves the image processing effect. Moreover, since only one image per person is pre-stored in the face image library, the time required for acquiring the reference image is shorter while storage space is saved, further increasing the overall speed of image processing. Of course, the images stored in the face image library may also be directly specified by the user; for example, the user may prefer the first image to be stored even though the second image has a higher sharpness.
In some embodiments, a plurality of face images per person may be stored in the face image library; in the subsequent shooting process, every face image whose sharpness is greater than or equal to the preset sharpness threshold is stored. A face image library generated in this manner stores more face images, so a reference image obtained from it can have a higher similarity with the de-striped image, which can improve the subsequent processing effect on the de-striped image.
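Both storage policies can be sketched together as follows (a hypothetical illustration; person identification, the dict-based library and all names are assumptions, not the patent's data structures):

    def update_face_library(library, person_id, image, sharpness_value,
                            threshold=0.20, keep_all=False):
        # library: dict mapping a person id (resolved beforehand by face
        # recognition) to a list of (sharpness, image) entries.
        if sharpness_value < threshold:
            return  # below the preset sharpness threshold: not stored
        entries = library.setdefault(person_id, [])
        if keep_all:
            # Multi-image mode: keep every sufficiently sharp image.
            entries.append((sharpness_value, image))
        elif not entries or sharpness_value > entries[0][0]:
            # Single-image mode: the new image replaces the stored one
            # only when it is sharper (unless the user pins an image).
            library[person_id] = [(sharpness_value, image)]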
Referring to fig. 16 and 17, in some embodiments, the image processing method further includes:
07, sending the reference image and the de-striped image to the cloud 300; and
08, the cloud 300 sends the repaired image to the terminal 200.
Step 05 comprises:
059, the cloud 300 processes the de-striped image according to the reference image to obtain the repaired image.
referring to fig. 16, 17 and 19, in some embodiments, the image processing apparatus 100 further includes a first communication module 17, wherein step 07 can be performed by the first communication module 17, that is, the first communication module 17 can be configured to send the reference image and the de-striped image to the cloud 300.
Referring to fig. 3, 16, 17 and 18, an image processing system 1000 according to an embodiment of the present disclosure includes a terminal 200 and a cloud 300. The terminal 200 further includes a communication component 260, such as an antenna assembly; the cloud 300 includes a processing chip 310 and a communication unit 320, and the communication component 260 and the communication unit 320 communicate so that the terminal 200 and the cloud 300 are communicatively connected to transmit data. Step 07 may be performed by the communication component 260, step 059 by the processing chip 310, and step 08 by the communication unit 320. That is, the communication component 260 may be configured to send the reference image and the de-striped image to the cloud 300; the processing chip 310 may be configured to process the de-striped image according to the reference image to obtain the repaired image; and the communication unit 320 may be used to transmit the repaired image to the terminal 200.
Specifically, after obtaining the de-striped image and the reference image, the terminal 200 directly sends the two images to the cloud 300, and the cloud 300 processes the de-striped image according to the reference image; the processing may be super-resolution algorithm processing. Since super-resolution processing occupies a certain amount of memory during operation, having the cloud 300 perform it after the reference image and the de-striped image are sent there means that the memory of the terminal 200 is not preempted and the user's normal use is not affected.
More specifically, the acquired de-striped image and reference image may be sent to the cloud 300 for processing during shooting by the first camera 221, and a de-striped image and reference image acquired from the album may also be sent to the cloud 300 for processing. Both processing modes prevent the image processing process from affecting the user's normal use and improve the user experience.
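The terminal-to-cloud exchange could look roughly like the following sketch (purely illustrative: the endpoint URL, field names and HTTP transport are assumptions; the patent only specifies that the communication component 260 and the communication unit 320 exchange the images):

    import requests

    CLOUD_URL = "https://cloud.example.com/restore"  # hypothetical endpoint

    def restore_via_cloud(de_striped_png: bytes, reference_png: bytes) -> bytes:
        # Step 07: send the de-striped image and the reference image.
        resp = requests.post(CLOUD_URL, files={
            "de_striped": de_striped_png,
            "reference": reference_png,
        })
        resp.raise_for_status()
        # Steps 059/08: the cloud runs the super-resolution processing and
        # returns the repaired image, so the terminal's memory is not
        # preempted by the algorithm.
        return resp.content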
Referring to fig. 17 and 20 together, in some embodiments, the image processing method further includes:
09, sending the face image library and the de-striped image to the cloud 300;
step 03 comprises:
031, the cloud 300 judges whether a face exists in the de-striped image;
step 04 comprises:
041, when a face exists in the de-striped image, the cloud 300 finds out, from the face image library, a shot image whose similarity with the face in the de-striped image is greater than or equal to the first preset similarity as the reference image.
referring to fig. 17, 20 and 21, in some embodiments, the image processing apparatus 100 further includes a second communication module 19, wherein step 09 can be implemented by the second communication module 19. That is, the second communication module 19 can be configured to send the face image library and the de-striped image to the cloud 300.
Referring to fig. 3, 17 and 21, in some embodiments, step 09 may be implemented by the communication component 260, and both step 031 and step 041 may be implemented by the processing chip 310. That is, the communication component 260 can be configured to send the face image library and the de-striped image to the cloud 300; and the processing chip 310 may be configured to: judge whether a face exists in the de-striped image; and when a face exists in the de-striped image, find out, from the face image library, a shot image whose similarity with the face in the de-striped image is greater than or equal to the first preset similarity as the reference image.
Specifically, after the de-striped image is obtained, the de-striped image and the face image library are directly sent to the cloud 300 at the same time. The processing chip 310 in the cloud 300 performs face detection on the de-striped image, finds out from the face image library an image of the same person as in the de-striped image as the reference image, and processes the de-striped image according to the reference image to obtain the repaired image; finally, the communication unit 320 may send the repaired image to the terminal 200. This avoids performing face detection on the de-striped image and similarity comparison against the face image library in the terminal 200, which would occupy the memory of the terminal 200 and affect the user experience.
Referring to fig. 22, in some embodiments, the image processing method further includes:
010, acquiring restoration parameters of the reference image;
011, correcting a preset restoration algorithm according to the restoration parameters of the reference image to obtain a customized restoration algorithm; and
012, processing the portrait area in the restored image according to the customized restoration algorithm to obtain a target image.
Referring to fig. 22 and 23, in some embodiments, the image processing apparatus 100 may further include a second obtaining module 110, a third processing module 111, and a fourth processing module 112, wherein step 010 may be performed by the second obtaining module 110; step 011 by the third processing module 111; and step 012 by the fourth processing module 112. That is, the second obtaining module 110 may be configured to acquire the restoration parameters of the reference image; the third processing module 111 may be configured to correct the preset restoration algorithm according to the restoration parameters of the reference image to obtain a customized restoration algorithm; and the fourth processing module 112 is configured to process the portrait area in the restored image according to the customized restoration algorithm to obtain the target image.
Referring to fig. 3 and fig. 22, in some embodiments, step 010, step 011, and step 012 can be implemented by the processor 230; that is, the processor 230 can be configured to: acquire the restoration parameters of the reference image; correct the preset restoration algorithm according to the restoration parameters of the reference image to obtain a customized restoration algorithm; and process the portrait area in the restored image according to the customized restoration algorithm to obtain the target image.
Specifically, the customized restoration algorithm can include a buffing (skin-smoothing) parameter, which can be characterized by a buffing strength. The buffing strength is generally represented by a number such as 1, 1.5, 2, 2.1, 3, 4, 5, 5.2, 6, 7, 8, 9, 9.5, 9.9, 10 and the like, where a larger number means a larger buffing strength and a smaller number a smaller buffing strength.
The preset restoration algorithm can include a buffing parameter. Regard the preset restoration algorithm as a function F(ax), where x is the buffing parameter and a is the buffing coefficient corresponding to the buffing parameter; a can be corrected according to the buffing parameter of the reference image to obtain a corrected buffing coefficient a', which finally yields the customized restoration algorithm F(a'x). The restored image is then subjected to secondary restoration according to the customized restoration algorithm F(a'x) to obtain a target image that accords with the user's preference.
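A minimal sketch of this correction (the form of F and the proportional correction rule are illustrative assumptions; the patent only states that the coefficient a is corrected to a' according to the reference image's buffing parameter):

    def F(ax):
        # Stand-in for the preset restoration (buffing) function F(ax);
        # its actual form is not specified in the patent.
        return ax

    def customized_restoration(a_preset, ref_buffing_param, preset_param=5.0):
        # Correct the buffing coefficient a to a' according to the buffing
        # parameter of the reference image (proportional rule assumed).
        a_prime = a_preset * ref_buffing_param / preset_param
        return lambda x: F(a_prime * x)  # the customized algorithm F(a'x)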
Referring to fig. 24, in some embodiments, step 010 includes:
0101, acquiring a buffing parameter of a reference image;
step 011 includes:
0111, modifying the preset buffing algorithm according to the buffing parameters to obtain a customized buffing algorithm;
step 012 includes:
0121, processing the portrait area in the restored image according to the customized buffing algorithm to obtain the target image.
Referring to fig. 24 and fig. 25, in some embodiments, the second obtaining module 110 may include an eighth obtaining unit 1101, the third processing module 111 may include a second processing unit 1111, and the fourth processing module 112 may include a third processing unit 1121, wherein step 0101 may be performed by the eighth obtaining unit 1101, step 0111 by the second processing unit 1111, and step 0121 by the third processing unit 1121. That is, the eighth obtaining unit 1101 may be configured to acquire the buffing parameter of the reference image; the second processing unit 1111 may be configured to correct the preset buffing algorithm according to the buffing parameter to obtain the customized buffing algorithm; and the third processing unit 1121 may be configured to process the portrait area in the restored image according to the customized buffing algorithm to obtain the target image.
Referring to fig. 3 and fig. 24, in some embodiments, step 0101, step 0111 and step 0121 may be implemented by the processor 230; that is, the processor 230 may be configured to: acquire the buffing parameter of the reference image; correct the preset buffing algorithm according to the buffing parameter to obtain the customized buffing algorithm; and process the portrait area in the restored image according to the customized buffing algorithm to obtain the target image.
For example, referring to fig. 34, fig. 34 includes 8 reference images whose buffing parameters are obtained as 4, 4, 5, 5, 5, 6, 8, and 8. Counting the frequency of occurrence of each buffing parameter gives a frequency of use of 0.25 for buffing parameter 4, 0.375 for buffing parameter 5, 0.125 for buffing parameter 6 and 0.25 for buffing parameter 8. The buffing coefficient in the preset buffing algorithm is then modified according to buffing parameter 5, the parameter with the highest frequency of use, to obtain the customized buffing algorithm, and portrait buffing is applied to the restored image according to the customized buffing algorithm to obtain a target image that matches the user's preference.
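A minimal Python sketch of this frequency count, with the values of the fig. 34 example hard-coded, might read as follows; the variable names are illustrative only.

    from collections import Counter

    params = [4, 4, 5, 5, 5, 6, 8, 8]    # buffing parameters of the 8 reference images
    freq = {p: n / len(params) for p, n in Counter(params).items()}
    # freq == {4: 0.25, 5: 0.375, 6: 0.125, 8: 0.25}
    most_used = max(freq, key=freq.get)  # -> 5, the most-used buffing parameter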
In other embodiments, when there are a plurality of reference images, the buffing parameters of the plurality of reference images are obtained, the frequency of use of each buffing parameter is counted, the several buffing parameters with the highest frequency of use are weighted to obtain a weighted buffing parameter, and the preset buffing algorithm is corrected according to the weighted buffing parameter to obtain the customized buffing algorithm; finally, the portrait area in the restored image is processed according to the customized buffing algorithm to obtain the target image. For example, referring to fig. 35, fig. 35 includes 8 reference images whose buffing parameters are obtained as 2, 5, 5, 5, 6, 6, 7, and 8. Counting the frequency of occurrence of each buffing parameter gives a frequency of use of 0.125 for buffing parameter 2, 0.375 for buffing parameter 5, 0.25 for buffing parameter 6, 0.125 for buffing parameter 7 and 0.125 for buffing parameter 8. The two buffing parameters with the highest frequency of use, 5 and 6, are then weighted to obtain a weighted buffing parameter, and the preset buffing algorithm is modified according to the weighted buffing parameter to obtain the customized buffing algorithm. Portrait buffing is then applied to the restored image according to the customized buffing algorithm to obtain a target image that matches the user's preference. Of course, the weighting may also use the three buffing parameters with the highest frequency of use, or more than three; these cases are not listed here one by one.
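Continuing the sketch with the fig. 35 values, a frequency-weighted combination of the two most-used parameters could look as follows; the exact weighting formula is an assumption made here, since the text fixes no particular formula.

    from collections import Counter

    params = [2, 5, 5, 5, 6, 6, 7, 8]        # buffing parameters of the fig. 35 example
    top2 = Counter(params).most_common(2)    # [(5, 3), (6, 2)]
    # Weight each of the two most-used parameters by its count of occurrences.
    weighted = sum(p * n for p, n in top2) / sum(n for _, n in top2)
    # (5*3 + 6*2) / (3 + 2) = 5.4, the weighted buffing parameter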
Referring to fig. 26, in certain embodiments, step 0121 comprises:
01211, performing skin color partition on the portrait area in the restored image to obtain a skin area in the portrait area of the restored image;
01212, performing skin-type partition on the skin area to obtain a defect area in the skin area;
01213, processing the defect area with a buffing algorithm according to the buffing parameter of the reference image to obtain a buffed image; and
01214, performing texture recovery on the buffed image to obtain the target image.
Referring to fig. 26 and fig. 27, in some embodiments, the third processing unit 1121 may include a first partition subunit 11211, a second partition subunit 11212, a first processing subunit 11213 and a second processing subunit 11214, wherein step 01211 may be performed by the first partition subunit 11211, step 01212 by the second partition subunit 11212, step 01213 by the first processing subunit 11213, and step 01214 by the second processing subunit 11214. That is, the first partition subunit 11211 may be configured to perform skin color partition on the portrait area in the restored image to obtain the skin area in the portrait area of the restored image; the second partition subunit 11212 may be configured to perform skin-type partition on the skin area to obtain the defect area in the skin area; the first processing subunit 11213 may be configured to process the defect area with the buffing algorithm according to the buffing parameter of the reference image to obtain a buffed image; and the second processing subunit 11214 may be configured to perform texture recovery on the buffed image to obtain the target image.
Referring to fig. 3 and fig. 26, in some embodiments, step 01211, step 01212, step 01213 and step 01214 may all be implemented by the processor 230; that is, the processor 230 may be configured to: perform skin color partition on the portrait area in the restored image to obtain the skin area in the portrait area of the restored image; perform skin-type partition on the skin area to obtain the defect area in the skin area; process the defect area with the buffing algorithm according to the buffing parameter of the reference image to obtain the buffed image; and perform texture recovery on the buffed image to obtain the target image.
Specifically, the skin color partition may extract the skin by inputting the repaired image into a skin extraction model, which may be a YCrCb model or a skin-color ellipse model. For example, inputting the repaired image into the YCrCb model converts its RGB representation into YCrCb, and the Cr and Cb components can be thresholded according to different skin colors to extract a better skin region; for yellow (Asian) skin tones, for example, the Cr component lies approximately between 133 and 173 and the Cb component between 77 and 127.
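As an illustrative sketch of such a YCrCb skin partition (using OpenCV, with the Cr and Cb bounds quoted above; the bounds would need retuning for other skin tones):

    import cv2
    import numpy as np

    def skin_mask_ycrcb(bgr):
        # Convert the BGR image to YCrCb and keep pixels whose Cr component
        # lies in [133, 173] and whose Cb component lies in [77, 127].
        ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
        lower = np.array([0, 133, 77], dtype=np.uint8)     # Y, Cr, Cb lower bounds
        upper = np.array([255, 173, 127], dtype=np.uint8)  # Y, Cr, Cb upper bounds
        return cv2.inRange(ycrcb, lower, upper)            # 255 marks a skin pixel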
The skin-type partition can extract the corresponding defect area in a high-contrast manner, where the defect area refers to an uneven region within the skin area; for example, pockmarks, scars and pores on the face may all be defect areas. The high-contrast manner may refer to collecting the pixels in the skin region whose values differ greatly from those of the surrounding pixels; the region formed by these pixels is the defect area. After the defect area is obtained, it is processed according to the customized buffing algorithm to obtain a buffed image.
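One assumed way to realize this high-contrast zoning is to compare each pixel with a local neighbourhood mean, as in the sketch below; the blur window and the threshold are illustrative choices, not values taken from this application.

    import cv2

    def defect_mask(gray, skin_mask, thresh=12):
        # Pixels that differ strongly from their neighbourhood mean are treated
        # as defects (pockmarks, scars, pores); restrict the result to skin.
        local_mean = cv2.GaussianBlur(gray, (15, 15), 0)
        diff = cv2.absdiff(gray, local_mean)
        _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
        return cv2.bitwise_and(mask, skin_mask)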
Texture recovery refers to restoring texture to the buffed image. It can be divided into levels 1 to 10, with the texture recovery strength corresponding to the buffing strength. For example, when buffing strength 5 is applied to the defect area, the texture is also recovered at strength 5. Texture recovery increases the sharpening of the target image and thus prevents the unrealistic appearance caused by excessive buffing.
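A plausible stand-in for this texture recovery is unsharp masking whose amount tracks the 1..10 level, as sketched here; the application does not specify the sharpening method, so this is an assumption.

    import cv2

    def recover_texture(buffed, level):
        # Re-sharpen the buffed image with a strength matching the buffing
        # level (1..10): out = buffed + amount * (buffed - blurred).
        blurred = cv2.GaussianBlur(buffed, (0, 0), sigmaX=3)
        amount = level / 10.0
        return cv2.addWeighted(buffed, 1.0 + amount, blurred, -amount, 0)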
Referring to fig. 1, fig. 3 and fig. 36, the present application further provides a computer-readable storage medium 2000 on which a computer program 2100 is stored; when the computer program 2100 is executed by the processor 230, the steps of the image processing method of any one of the above embodiments are implemented.
For example, when the program is executed by the processor 230, the steps of the following image processing method are implemented:
01, acquiring an original image shot by a first camera 221, wherein the first camera 221 is arranged below a screen;
02, performing de-striping processing on the original image to obtain a de-striped image;
03, judging whether a human face exists in the de-striped image;
04, when a human face exists in the de-striped image, finding from the face image library a shot image whose similarity to the face in the de-striped image is greater than or equal to a first preset similarity to serve as a reference image, wherein the shot image is shot by a second camera 222 and the definition of the shot image is higher than that of the de-striped image; and
05, processing the de-striped image according to the reference image to obtain a repaired image.
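Purely as an illustration of how these five steps fit together, the following Python sketch strings them into one routine; the Haar-cascade face detector, the grayscale-histogram similarity and the closing blend are naive stand-ins assumed here, not the similarity measure or repair method of this application.

    import cv2

    def repair_pipeline(original, stripe, library, first_sim=0.8):
        destriped = cv2.subtract(original, stripe)              # step 02: de-striping
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        gray = cv2.cvtColor(destriped, cv2.COLOR_BGR2GRAY)
        if len(cascade.detectMultiScale(gray, 1.1, 5)) == 0:    # step 03: face check
            return destriped

        def hist(img):
            g = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            return cv2.calcHist([g], [0], None, [64], [0, 256])

        # step 04: first library shot similar enough to the de-striped image
        reference = next((shot for shot in library
                          if cv2.compareHist(hist(shot), hist(destriped),
                                             cv2.HISTCMP_CORREL) > first_sim), None)
        if reference is None:
            return destriped
        reference = cv2.resize(reference, destriped.shape[1::-1])
        return cv2.addWeighted(destriped, 0.7, reference, 0.3, 0)  # step 05: crude repair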
The computer-readable storage medium 2000 may be disposed in the image processing apparatus 100 (shown in fig. 2) or in the terminal 200, or may be disposed in a cloud server, in which case the image processing apparatus 100 or the terminal 200 communicates with the cloud server to obtain the corresponding computer program 2100.
It will be appreciated that the computer program 2100 comprises computer program code. The computer program code may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable storage medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), a software distribution medium, and the like.
The processor 230 may be referred to as a driver board. The driver board may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
In the description herein, reference to the terms "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, schematic uses of these terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine the different embodiments or examples, and the features of the different embodiments or examples, described in this specification, provided they do not contradict each other.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method description in a flow chart or otherwise described herein may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functionality involved, as would be understood by those reasonably skilled in the art to which the present application pertains.
Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (11)

1. An image processing method, comprising:
acquiring an original image shot by a first camera, wherein the first camera is arranged below a screen;
obtaining a stripe image corresponding to the screen, and subtracting the stripe image from the original image to obtain a de-striped image;
when a human face exists in the de-striped image, finding out a shot image with the similarity of the human face in the de-striped image being greater than a first preset similarity from a human face image library as a reference image, wherein the shot image is shot by a second camera which is arranged above a screen, and the definition of the shot image is higher than that of the de-striped image; and
processing the de-striped image according to the reference image to obtain a repaired image.
2. The image processing method according to claim 1, further comprising:
and establishing the face image library.
3. The image processing method according to claim 2, wherein the establishing of the face image library comprises:
acquiring a plurality of shot images shot by the second camera;
acquiring the definition of each shot image;
and storing the shot image with the definition greater than a preset definition threshold value to generate the human face image library.
4. The image processing method according to claim 3, wherein the acquiring the definition of each shot image comprises:
performing shaping low-pass filtering on the shot image to obtain a filtered image;
acquiring high-frequency information of the shot image according to the shot image and the filtered image, wherein the high-frequency information is the part of the discrete cosine transform coefficients far from zero frequency, which describes the detail information of the shot image; and
acquiring the definition of the shot image according to the number of pixels of the high-frequency information and the number of all pixels of the shot image.
5. The method according to claim 1, wherein the processing the de-striped image according to the reference image to obtain a repaired image comprises:
acquiring a first feature map of the de-striped image after up-sampling;
acquiring a second feature map of the reference image after up-sampling and down-sampling;
acquiring a third feature map of the reference image;
acquiring, as a reference feature, a feature of the second feature map whose similarity to the first feature map exceeds a second preset similarity;
acquiring a feature of the third feature map whose similarity to the reference feature exceeds a third preset similarity to obtain an exchange feature map;
merging the exchange feature map and the first feature map to obtain a fourth feature map;
amplifying the fourth feature map by a preset multiple to obtain a fifth feature map; and
taking the fifth feature map as the de-striped image and repeating the above steps until the fifth feature map reaches a target magnification, the fifth feature map at the target magnification being taken as the repaired image.
6. The image processing method according to claim 1, further comprising:
sending the reference image and the de-striped image to a cloud;
the processing the de-striped image according to the reference image to obtain a repaired image includes:
the cloud processes the de-striped image according to the reference image to obtain the repaired image;
the image processing method further comprises the following steps:
the cloud sends the repaired image to a terminal.
7. The image processing method according to claim 2, further comprising:
sending the face image library and the de-striped image to a cloud;
when the de-striped image has a face, finding out a shot image with a similarity greater than a first preset similarity with the face in the de-striped image from a face image library as a reference image, comprising:
the cloud finds, from the face image library, a shot image whose similarity to the face in the de-striped image is greater than the first preset similarity to serve as the reference image;
the processing the de-striped image according to the reference image to obtain a repaired image includes:
the cloud processes the de-striped image according to the reference image to obtain the repaired image;
the image processing method further comprises the following steps:
the cloud sends the repaired image to a terminal.
8. The image processing method according to claim 1, wherein calibration data is stored in a terminal, the calibration data includes the stripe image, and the de-striping processing performed on the original image to obtain the de-striped image comprises:
obtaining the de-striped image according to the original image and the stripe image.
9. An image processing apparatus characterized by comprising:
the first acquisition module is used for acquiring an original image shot by a first camera, and the first camera is arranged below the screen;
the first processing module is used for obtaining a stripe image corresponding to the screen and subtracting the stripe image from the original image to obtain a de-striped image;
a determining module, configured to find, when a face exists in the de-striped image, a shot image with a similarity to the face in the de-striped image greater than a first preset similarity from a face image library as a reference image, where the shot image is shot by a second camera, the second camera is disposed above a screen, and a definition of the shot image is higher than a definition of the de-striped image;
and the second processing module is used for processing the de-striped image according to the reference image to obtain a repaired image.
10. A terminal, characterized in that the terminal comprises a housing and a processor mounted on the housing, the processor being configured to implement the image processing method of any one of claims 1-8.
11. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the image processing method of any one of claims 1 to 8.
CN201911252850.XA 2019-12-09 2019-12-09 Image processing method and device, terminal and computer readable storage medium Active CN111031241B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911252850.XA CN111031241B (en) 2019-12-09 2019-12-09 Image processing method and device, terminal and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911252850.XA CN111031241B (en) 2019-12-09 2019-12-09 Image processing method and device, terminal and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111031241A CN111031241A (en) 2020-04-17
CN111031241B true CN111031241B (en) 2021-08-27

Family

ID=70205037

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911252850.XA Active CN111031241B (en) 2019-12-09 2019-12-09 Image processing method and device, terminal and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111031241B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111784617B (en) * 2020-06-09 2023-08-15 国家卫星气象中心(国家空间天气监测预警中心) Image processing method and device
CN113225451B (en) * 2021-04-28 2023-06-27 维沃移动通信(杭州)有限公司 Image processing method and device and electronic equipment
CN114268703A (en) * 2021-12-27 2022-04-01 安徽淘云科技股份有限公司 Imaging adjusting method and device during screen scanning, storage medium and equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106778567A (en) * 2016-12-05 2017-05-31 望墨科技(武汉)有限公司 A kind of method that iris recognition is carried out by neutral net
CN108346149A (en) * 2018-03-02 2018-07-31 北京郁金香伙伴科技有限公司 image detection, processing method, device and terminal
CN209071332U (en) * 2018-10-31 2019-07-05 云谷(固安)科技有限公司 Display panel, display screen and display terminal
CN110223231A (en) * 2019-06-06 2019-09-10 天津工业大学 A kind of rapid super-resolution algorithm for reconstructing of noisy image
CN110266994A (en) * 2019-06-26 2019-09-20 广东小天才科技有限公司 A kind of video call method, video conversation apparatus and terminal

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7016532B2 (en) * 2000-11-06 2006-03-21 Evryx Technologies Image capture and identification system and process
CN110493390A (en) * 2019-07-23 2019-11-22 珠海格力电器股份有限公司 CCD camera assembly, mobile phone and mobile device under a kind of screen for mobile device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106778567A (en) * 2016-12-05 2017-05-31 望墨科技(武汉)有限公司 A kind of method that iris recognition is carried out by neutral net
CN108346149A (en) * 2018-03-02 2018-07-31 北京郁金香伙伴科技有限公司 image detection, processing method, device and terminal
CN209071332U (en) * 2018-10-31 2019-07-05 云谷(固安)科技有限公司 Display panel, display screen and display terminal
CN110223231A (en) * 2019-06-06 2019-09-10 天津工业大学 A kind of rapid super-resolution algorithm for reconstructing of noisy image
CN110266994A (en) * 2019-06-26 2019-09-20 广东小天才科技有限公司 A kind of video call method, video conversation apparatus and terminal

Also Published As

Publication number Publication date
CN111031241A (en) 2020-04-17

Similar Documents

Publication Publication Date Title
Galdran Image dehazing by artificial multiple-exposure image fusion
EP1800259B1 (en) Image segmentation method and system
US7953251B1 (en) Method and apparatus for detection and correction of flash-induced eye defects within digital images using preview or other reference images
CN111031241B (en) Image processing method and device, terminal and computer readable storage medium
US20200311981A1 (en) Image processing method, image processing apparatus, image processing system, and learnt model manufacturing method
CN110910330B (en) Image processing method, image processing apparatus, electronic device, and readable storage medium
CN107172354B (en) Video processing method and device, electronic equipment and storage medium
CN111028170B (en) Image processing method, image processing apparatus, electronic device, and readable storage medium
CN112602088B (en) Method, system and computer readable medium for improving quality of low light images
CN111031239B (en) Image processing method and apparatus, electronic device, and computer-readable storage medium
CN110910331B (en) Image processing method, image processing apparatus, electronic device, and computer-readable storage medium
CN115063331B (en) Multi-scale block LBP operator-based ghost-free multi-exposure image fusion method
CN111105370B (en) Image processing method, image processing apparatus, electronic device, and readable storage medium
CN111083359B (en) Image processing method and apparatus, electronic device, and computer-readable storage medium
CN111062904B (en) Image processing method, image processing apparatus, electronic device, and readable storage medium
CN111105369A (en) Image processing method, image processing apparatus, electronic device, and readable storage medium
CN110992283A (en) Image processing method, image processing apparatus, electronic device, and readable storage medium
CN110992284A (en) Image processing method, image processing apparatus, electronic device, and computer-readable storage medium
CN110930338B (en) Image processing method, image processing apparatus, electronic device, and computer-readable storage medium
RU2383924C2 (en) Method for adaptive increase of sharpness of digital photographs during printing
CN111010509B (en) Image processing method, terminal, image processing system, and computer-readable storage medium
CN117710250B (en) Method for eliminating honeycomb structure imaged by fiberscope
CN115601409A (en) Image processing method, image processing device, storage medium and electronic equipment
CN112422825A (en) Intelligent photographing method, device, equipment and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant