CN111105369A - Image processing method, image processing apparatus, electronic device, and readable storage medium - Google Patents


Info

Publication number
CN111105369A
Authority
CN
China
Prior art keywords
image
restored
face
feature
repaired
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911253780.XA
Other languages
Chinese (zh)
Inventor
贾玉虎
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201911253780.XA
Publication of CN111105369A

Classifications

    • G06T5/77
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/22: Matching criteria, e.g. proximity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172: Classification, e.g. identification
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30196: Human being; Person
    • G06T2207/30201: Face

Abstract

The application discloses an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium. The image processing method comprises: acquiring an image to be restored, the image to be restored comprising a human face; finding, in an album, an image whose sharpness is greater than a first threshold and whose face is most similar to the face in the image to be restored, to serve as a reference image; and processing the image to be restored according to the reference image to obtain a restored image. By screening from the album a reference image whose face is most similar to the face in the image to be restored and whose sharpness is greater than the first threshold, and restoring the image to be restored with that reference image, the image quality of the restored image is improved, and so is the restoration of the facial features in the restored image.

Description

Image processing method, image processing apparatus, electronic device, and readable storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium.
Background
During shooting, multiple frames of preview images buffered while shooting can be synthesized into a single output frame of high sharpness. In practice, however, problems such as camera shake or insufficient or excessive light on the subject leave the buffered preview frames blurred, and an image synthesized from blurred frames is inevitably blurred as well, so the image presented to the user is of low quality and the user experience is poor.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device, an electronic device and a computer readable storage medium.
The image processing method comprises the following steps: acquiring an image to be restored, wherein the image to be restored comprises a human face; finding an image whose sharpness is greater than a first threshold and whose face is most similar to the face in the image to be restored, to serve as a reference image; and processing the image to be restored according to the reference image to obtain a restored image.
The image processing apparatus comprises an acquisition module, a screening module, and a processing module. The acquisition module is configured to acquire an image to be restored, the image to be restored comprising a human face. The screening module is configured to find, in the album, an image whose sharpness is greater than a first threshold and whose face is most similar to the face in the image to be restored, as a reference image. The processing module is configured to process the image to be restored according to the reference image to obtain a restored image.
The electronic device comprises a housing, an imaging apparatus, and a processor, the imaging apparatus and the processor both being mounted on the housing. The imaging apparatus is configured to capture images, and the processor is configured to: acquire an image to be restored, wherein the image to be restored comprises a human face; find an image whose sharpness is greater than a first threshold and whose face is most similar to the face in the image to be restored, to serve as a reference image; and process the image to be restored according to the reference image to obtain a restored image.
The present application provides a computer-readable storage medium. The computer-readable storage medium stores a computer program that is executable by a processor to: acquire an image to be restored, wherein the image to be restored comprises a human face; find an image whose sharpness is greater than a first threshold and whose face is most similar to the face in the image to be restored, to serve as a reference image; and process the image to be restored according to the reference image to obtain a restored image.
According to the image processing method, the image processing apparatus, the electronic device, and the computer-readable storage medium, an image whose face is most similar to the face in the image to be restored and whose sharpness is greater than a first threshold is screened from the album as a reference image, and the image to be restored is restored using the reference image to obtain a restored image. First, compared with synthesizing an output image from blurred preview frames, the restored image obtained by restoring the image to be restored with the reference image has higher sharpness, that is, higher image quality. Second, because the face in the reference image is the one most similar to the face in the image to be restored, the facial features in the restored image are more realistic, and the restoration effect is best. Third, restoring the image to be restored with a reference image is not limited to the shooting process and may also be performed during post-editing.
Additional aspects and advantages of embodiments of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application;
FIG. 2 is a schematic diagram of an image processing apparatus according to some embodiments of the present application;
FIG. 3 is a schematic structural diagram of an electronic device according to some embodiments of the present application;
FIG. 4 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application;
FIG. 5 is a schematic diagram of an acquisition module in an image processing apparatus according to some embodiments of the present application;
FIG. 6 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application;
FIG. 7 is a schematic diagram of an acquisition module in an image processing apparatus according to some embodiments of the present application;
FIG. 8 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application;
FIG. 9 is a schematic diagram of an acquisition module in an image processing apparatus according to some embodiments of the present application;
FIG. 10 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application;
FIG. 11 is a schematic diagram of a second acquisition unit in an image processing apparatus according to some embodiments of the present application;
FIG. 12 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application;
FIG. 13 is a schematic diagram of a screening module in an image processing apparatus according to some embodiments of the present application;
FIG. 14 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application;
FIG. 15 is a schematic diagram of a detection unit in an image processing apparatus according to some embodiments of the present application;
FIG. 16 is a schematic diagram of a model for extracting face feature vectors according to some embodiments of the present application;
FIG. 17 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application;
FIG. 18 is a schematic diagram of a processing module in an image processing apparatus according to some embodiments of the present application;
FIG. 19 is a schematic diagram of generating content features of certain embodiments of the present application;
FIG. 20 is a schematic diagram of generating texture features according to some embodiments of the present application;
FIG. 21 is a schematic illustration of texture feature mapping to content features of certain embodiments of the present application;
FIG. 22 is a schematic diagram of the interaction of a computer-readable storage medium and a processor of certain embodiments of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below by referring to the drawings are exemplary only for the purpose of explaining the embodiments of the present application, and are not to be construed as limiting the embodiments of the present application.
Referring to fig. 1, the present application provides an image processing method, including:
01: acquiring an image to be restored, wherein the image to be restored comprises a human face;
02: finding, in the album, an image whose sharpness is greater than a first threshold and whose face is most similar to the face in the image to be restored, as a reference image; and
03: processing the image to be restored according to the reference image to obtain a restored image.
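The three steps above can be sketched as a top-level routine. This is a minimal sketch under stated assumptions: `find_reference` and `repair` are hypothetical callables standing in for the screening (step 02) and restoration (step 03) procedures that the description details later, not names from the patent.

```python
def image_processing_method(image_to_restore, album, find_reference, repair):
    # Step 01 has already produced `image_to_restore` (an image
    # containing a human face). `find_reference` implements step 02
    # (screening the album for a reference image) and `repair`
    # implements step 03; both are hypothetical stand-ins.
    reference = find_reference(album, image_to_restore)
    if reference is None:
        return image_to_restore  # no usable reference image in the album
    return repair(image_to_restore, reference)
```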
Referring to fig. 1 and fig. 2, the present application further provides an image processing apparatus 100, wherein the image processing apparatus 100 includes an obtaining module 11, a screening module 12, and a processing module 13. The image processing apparatus 100 may be configured to implement the image processing method provided in the present application: step 01 may be performed by the obtaining module 11, step 02 by the screening module 12, and step 03 by the processing module 13. That is, the obtaining module 11 may be configured to obtain an image to be restored, where the image to be restored includes a human face; the screening module 12 may be configured to find, in the album, an image whose sharpness is greater than a first threshold and whose face is most similar to the face in the image to be restored, as a reference image; and the processing module 13 may be configured to process the image to be restored according to the reference image to obtain a restored image.
Referring to fig. 1 and fig. 3, the present application further provides an electronic device 200, where the electronic device 200 includes a housing 210, an imaging apparatus 220, and a processor 230. The imaging apparatus 220 and the processor 230 are both mounted on the housing 210, and the imaging apparatus 220 is used for capturing images. The processor 230 can also implement the image processing method provided by the present application, and steps 01, 02, and 03 can all be implemented by the processor 230. That is, the processor 230 may be configured to: acquire an image to be restored, wherein the image to be restored comprises a human face; find, in the album, an image whose sharpness is greater than a first threshold and whose face is most similar to the face in the image to be restored, as a reference image; and process the image to be restored according to the reference image to obtain a restored image.
In the image processing method, the image processing apparatus 100, the electronic device 200, and the computer-readable storage medium of the embodiments of the present application, an image whose face is most similar to the face in the image to be restored and whose sharpness is greater than a first threshold is selected from the album as a reference image, and the image to be restored is restored using the reference image to obtain a restored image. First, compared with synthesizing an output image from blurred preview frames, the restored image obtained this way has higher sharpness, that is, higher image quality. Second, because the face in the reference image is the one most similar to the face in the image to be restored, the facial features in the restored image are more realistic, and the restoration effect is best. Third, restoration with a reference image is not limited to the shooting process and may also be performed during post-editing.
The album is the area of the electronic device 200 in which images are stored. A number of photos (images) are stored in the album, such as landscape photos, photos containing human faces, and animal photos; the album here includes at least one image that contains a human face and whose sharpness is greater than the first threshold.
Referring to fig. 1, 4 and 5, step 01 includes:
011: acquiring an original image with a portrait;
012: acquiring the sharpness of the original image; and
013: determining an original image whose sharpness is smaller than a second threshold as the image to be restored, wherein the second threshold is smaller than the first threshold.
In some embodiments, the obtaining module 11 may further include a first obtaining unit 111, a second obtaining unit 112, and a determining unit 113, wherein step 011 may be performed by the first obtaining unit 111, step 012 by the second obtaining unit 112, and step 013 by the determining unit 113. That is, the first obtaining unit 111 may be used to acquire an original image having a portrait; the second obtaining unit 112 may be configured to acquire the sharpness of the original image; and the determining unit 113 may be configured to determine an original image whose sharpness is smaller than the second threshold as the image to be restored, where the second threshold is smaller than the first threshold.
Referring to fig. 3, in some embodiments, steps 011, 012, and 013 can be implemented by the processor 230; that is, the processor 230 can be configured to: acquire an original image with a portrait; acquire the sharpness of the original image; and determine an original image whose sharpness is smaller than a second threshold as the image to be restored, wherein the second threshold is smaller than the first threshold.
Specifically, an original image may be an image saved in the album or an image directly captured by the camera 221, and there may be one or more original images. The sharpness of each original image is obtained first and compared with the second threshold. When the sharpness is smaller than the second threshold, the original image is blurred and needs restoration, so it is determined to be an image to be restored; when the sharpness is greater than the second threshold, the original image is sharp enough and does not need restoration; when the sharpness equals the second threshold, the original image may be treated either way. By comparing the sharpness of each original image with the second threshold, only the blurrier originals (those below the second threshold) are restored, which reduces the restoration workload and speeds up image processing overall.
Referring to fig. 4, 6 and 7, step 011 includes:
0111: acquiring an original image with a portrait from the album at a predetermined time and/or in a preset scene.
In some embodiments, the first acquisition unit 111 may comprise a first acquisition subunit 1111, wherein step 0111 may be performed by the first acquisition subunit 1111. That is, the first acquisition subunit 1111 may be configured to acquire the original image with a portrait from the album at a predetermined time and/or in a preset scene.
Referring to fig. 3, in some embodiments, step 0111 may be implemented by the processor 230; that is, the processor 230 may be configured to acquire an original image with a portrait from the album at a predetermined time and/or in a preset scene.
When the original image with a portrait is acquired from the album at a predetermined time, the predetermined time may be a time when the user is not using the phone. Specifically, the predetermined time may include rest time, for example night sleep time (such as, but not limited to, 22:00-5:00) or lunch break (such as, but not limited to, 12:30-2:00); it may also include working hours (such as, but not limited to, 8:00-12:00 and 14:00-18:00) during which the user does not use the phone; and it may further include class time (such as, but not limited to, at least one of 8:00-8:40, 9:00-9:45, 10:00-10:45, 11:00-11:45, and so on). Scanning the album for original images with portraits occupies a certain amount of running memory. During sleep, work, or class the user generally does not use the phone and the image processing apparatus 100 or the electronic device 200 is otherwise idle, so acquiring the images at these times does not compete for memory the way it would while the device is in active use. The predetermined time can be one or more time periods preset by the system, or it can be set by the user as needed.
When the original image with a portrait is acquired from the album in a preset scene, the preset scene may include a charging state, a standby state, a low-power-consumption running state, and the like. Because scanning the album for original images with portraits takes a relatively long time and occupies a certain amount of running memory, executing the acquisition step only in a preset scene avoids preempting memory as much as possible. The low-power-consumption running state may mean that the electronic device 200 is only running software with small memory requirements, such as reading or news applications.
It should be noted that acquiring the original image with a portrait from the album may be performed only at the predetermined time, only in the preset scene, or both at the predetermined time and in the preset scene. This minimizes the impact of the album scan on the user's normal use of the device and improves the user experience.
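The gating above can be sketched as a small scheduler check. This is an illustrative sketch: the idle windows below echo the example periods in the text (the user may configure their own), and the function and parameter names are hypothetical, not from the patent.

```python
from datetime import datetime, time

# Illustrative idle windows (night sleep and a lunch break); the patent
# lists such periods only as non-limiting examples.
IDLE_WINDOWS = [(time(22, 0), time(5, 0)), (time(12, 30), time(14, 0))]

def in_idle_window(now=None):
    """True if `now` falls inside a predetermined time period."""
    t = (now or datetime.now()).time()
    for start, end in IDLE_WINDOWS:
        if start <= end:
            if start <= t <= end:
                return True
        elif t >= start or t <= end:  # window wraps past midnight
            return True
    return False

def may_scan_album(charging=False, standby=False, now=None):
    # Step 0111: scan the album only at a predetermined time and/or in
    # a preset scene (charging, standby, low-power use), so the scan
    # does not preempt memory from foreground use.
    return in_idle_window(now) or charging or standby
```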
Referring to fig. 4, 8 and 9, step 011 further includes:
0112: in the shooting process of the camera 221, an original image with a portrait shot by the camera 221 is acquired.
In some embodiments, the image processing apparatus 100 may be applied to an imaging apparatus 220, and the imaging apparatus 220 may capture an original image through a camera 221. The first acquisition unit 111 may comprise a second acquisition sub-unit 1112, wherein step 0112 may be performed by the second acquisition sub-unit 1112; that is, the second acquiring subunit 1112 may be configured to acquire an original image with a portrait captured by the camera 221 during the capturing process of the camera 221.
Referring to fig. 3, in some embodiments, the electronic device 200 may include an imaging apparatus 220, and the imaging apparatus 220 includes a camera 221. Step 0112 may be implemented by processor 230, that is, processor 230 may be configured to: in the shooting process of the camera 221, an original image with a portrait shot by the camera 221 is acquired.
Specifically, when the camera 221 of the imaging apparatus 220 is working, the captured original image with a portrait can be obtained in real time, and the originals that meet the conditions can then be restored to obtain the target image. In this way, when the user shoots with the imaging apparatus 220 or the electronic device 200, the resulting image (which can be presented to the user directly) is of higher quality, improving the user experience.
Referring to fig. 4, 10 and 11, step 012 includes:
0121: performing shaping low-pass filtering on the original image to obtain a first filtered image;
0122: acquiring first high-frequency information of the original image from the original image and the first filtered image, wherein the first high-frequency information is the part of the discrete cosine transform coefficients far from zero frequency, which describes the detail information of the original image; and
0123: acquiring the sharpness of the original image from the number of pixels of the first high-frequency information and the total number of pixels of the original image.
In some embodiments, the second obtaining unit 112 may include a third obtaining subunit 1121, a fourth obtaining subunit 1122, and a fifth obtaining subunit 1123; step 0121 may be performed by the third obtaining subunit 1121, step 0122 by the fourth obtaining subunit 1122, and step 0123 by the fifth obtaining subunit 1123. That is, the third obtaining subunit 1121 may be configured to perform shaping low-pass filtering on the original image to obtain a first filtered image; the fourth obtaining subunit 1122 may be configured to acquire the first high-frequency information of the original image from the original image and the first filtered image, the first high-frequency information being the part of the discrete cosine transform coefficients far from zero frequency that describes the detail information of the original image; and the fifth obtaining subunit 1123 may be configured to acquire the sharpness of the original image from the number of pixels of the first high-frequency information and the total number of pixels of the original image.
Referring to fig. 3, in some embodiments, steps 0121, 0122, and 0123 can all be implemented by the processor 230; that is, the processor 230 can be configured to: perform shaping low-pass filtering on the original image to obtain a first filtered image; acquire the first high-frequency information of the original image from the original image and the first filtered image, the first high-frequency information being the part of the discrete cosine transform coefficients far from zero frequency that describes the detail information of the original image; and acquire the sharpness of the original image from the number of pixels of the first high-frequency information and the total number of pixels of the original image.
Specifically, the original image may be one acquired from the album at a predetermined time and/or in a preset scene, or one captured by the camera 221 during shooting. After the original image is obtained, shaping low-pass filtering is applied to it to obtain a first filtered image, and the first filtered image is subtracted from the original image to obtain the first high-frequency information of the original image, i.e. the part of the discrete cosine transform coefficients far from zero frequency that describes the detail information of the original image. Once the first high-frequency information is obtained, its pixels can be counted: the more pixels of first high-frequency information, the sharper the original image.
The sharpness of an image can be characterized by the proportion of pixels carrying high-frequency information among all pixels of the image: the higher the proportion, the sharper the image. For example, if the pixels of first high-frequency information in an original image amount to 20% of all its pixels, the sharpness of that original image is expressed as 20%. Each sharpness value thus corresponds to a number of pixels of first high-frequency information.
The second threshold is the critical value used to decide whether an original image needs restoration. If the number of pixels of first high-frequency information corresponding to the second threshold is called the first preset number, then the second threshold is the ratio of the first preset number to the total number of pixels of the original image. For example, if the number of pixels of first high-frequency information in an original image is smaller than the first preset number, the sharpness of that original image is smaller than the second threshold and the image needs restoration, that is, it can serve as an image to be restored.
The first preset number and the second threshold correspond to each other, and both may be known values obtained from experiments and stored in a storage element of the image processing apparatus 100 or the electronic device 200. Of course, several different first preset numbers may be preset in the image processing apparatus 100 or the electronic device 200, each automatically associated with its corresponding second threshold, so that the user can select different second thresholds for different requirements.
As an example, take a second threshold of 15%, an original image with 16 million pixels in total, and a first preset number of 2.4 million: when the number of pixels of first high-frequency information is found to be less than 2.4 million, the sharpness of the original image is determined to be less than 15%, and the original image is taken as an image to be restored.
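Steps 0121-0123 and the second-threshold test of step 013 can be sketched as follows. The sketch makes assumptions the patent leaves open: a simple box blur stands in for the "shaping low-pass filter", and `hf_eps` (the magnitude above which a pixel counts as carrying high-frequency information) is an illustrative choice, as is the 15% default threshold taken from the example above.

```python
import numpy as np

def sharpness(image, kernel=5, hf_eps=1e-3):
    """Sharpness as the fraction of pixels carrying first high-frequency
    information (steps 0121-0123). Kernel size, the box-blur low-pass
    filter, and hf_eps are illustrative assumptions."""
    img = np.asarray(image, dtype=np.float64)
    pad = kernel // 2
    padded = np.pad(img, pad, mode="edge")
    # Step 0121: low-pass filter the original image (box blur here).
    low = np.zeros_like(img)
    for dy in range(kernel):
        for dx in range(kernel):
            low += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    low /= kernel * kernel
    # Step 0122: first high-frequency information = original - filtered.
    high = img - low
    # Step 0123: ratio of high-frequency pixels to all pixels.
    return np.count_nonzero(np.abs(high) > hf_eps) / img.size

def needs_restoration(image, second_threshold=0.15):
    # Step 013: sharpness below the second threshold marks the image
    # as an image to be restored (15% echoes the example above).
    return sharpness(image) < second_threshold
```

A perfectly flat image has no high-frequency pixels at all (sharpness 0), while a noisy, detail-rich image scores close to 1; the first preset number of the text is simply `second_threshold * img.size`.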
Referring to fig. 4, 12 and 13, step 02 includes:
021: screening out images containing human faces from the album to serve as primary-screened images;
022: screening out images whose sharpness is greater than the first threshold from the primary-screened images to serve as secondary-screened images; and
023: detecting the similarity between the face in each secondary-screened image and the face in the image to be restored, and taking the secondary-screened image with the highest similarity as the reference image.
In some embodiments, the screening module 12 further includes a first screening unit 121, a second screening unit 122, and a detection unit 123. Step 021 may be performed by the first screening unit 121, step 022 by the second screening unit 122, and step 023 by the detection unit 123. That is, the first screening unit 121 may be configured to screen out images containing human faces from the album as primary-screened images; the second screening unit 122 may be configured to screen out images whose sharpness is greater than the first threshold from the primary-screened images as secondary-screened images; and the detection unit 123 may be configured to detect the similarity between the face in each secondary-screened image and the face in the image to be restored, taking the secondary-screened image with the highest similarity as the reference image.
Referring to fig. 3, in some embodiments, steps 021, 022, and 023 may all be implemented by the processor 230; that is, the processor 230 may further be configured to: screen out images containing human faces from the album as primary-screened images; screen out images whose sharpness is greater than the first threshold from the primary-screened images as secondary-screened images; and detect the similarity between the face in each secondary-screened image and the face in the image to be restored, taking the secondary-screened image with the highest similarity as the reference image.
Specifically, because facial skin colors are relatively concentrated in color space, whether an image contains a face can be judged by checking whether the pixel colors of each image in the album cluster in that skin-color region, and the images containing faces are taken as primary screening images. Alternatively, after a standard face template is designed in advance, the matching degree between each image in the album and the template is calculated, and whether a face exists in the image is judged by whether the matching degree reaches a certain threshold; the images containing faces are taken as primary screening images. Of course, other methods may also be used to detect whether a human face exists in an image, and no limitation is made here.
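As an illustrative sketch (in Python with NumPy) of the skin-color check described above: an image is kept as a primary screening image when enough of its pixels fall inside a fixed skin-tone region of the YCrCb color space. The conversion coefficients below are the standard BT.601 ones, but the Cr/Cb bounds and the 2% pixel-ratio threshold are assumptions for illustration, not values from this disclosure.

```python
import numpy as np

def rgb_to_ycrcb(img):
    """Convert an HxWx3 RGB array to YCrCb using BT.601 coefficients."""
    img = img.astype(np.float64)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cr = (r - y) * 0.713 + 128.0
    cb = (b - y) * 0.564 + 128.0
    return np.stack([y, cr, cb], axis=-1)

def may_contain_face(img, min_skin_ratio=0.02):
    """Primary screen: keep the image when the share of skin-toned pixels
    (a commonly used Cr/Cb box; bounds are illustrative) reaches the ratio."""
    ycrcb = rgb_to_ycrcb(img)
    cr, cb = ycrcb[..., 1], ycrcb[..., 2]
    skin = (cr > 133) & (cr < 173) & (cb > 77) & (cb < 127)
    return bool(skin.mean() >= min_skin_ratio)
```

The template-matching alternative mentioned above would replace the color test with a correlation score against a standard face template.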
The definition of each primary screening image (that is, each image containing a human face) is acquired and compared with the first threshold. When the definition is less than the first threshold, the primary screening image is too blurry to be used for restoring the image to be restored; when the definition is greater than the first threshold, the primary screening image is clear enough to be used for restoration, and is therefore taken as a secondary screening image. When the definition of a primary screening image exactly equals the first threshold, it may be treated either as a secondary screening image or not. It should be noted that the first threshold is greater than the second threshold, that is, the definition of any secondary screening image is greater than that of the image to be restored.
Specifically, acquiring the definition of a primary screening image includes: performing shaping low-pass filtering on the primary screening image to obtain a second filtered image, and subtracting the second filtered image from the primary screening image to obtain second high-frequency information of the primary screening image. The second high-frequency information is the part of the discrete cosine transform coefficients far from zero frequency, which describes the detail information of the primary screening image. After the second high-frequency information is obtained, the number of its pixels can be counted: the larger the number of pixels of the second high-frequency information, the clearer the primary screening image.
The first threshold is a critical value for judging whether a primary screening image can serve as a reference image, and equals the ratio of a second preset number to the total number of pixels of the primary screening image. For example, if the number of pixels of the second high-frequency information in a primary screening image is less than the second preset number, the definition of that image is less than the first threshold, so it cannot serve as a reference image and should be excluded.
The second preset number corresponds to the first threshold; both may be known values obtained in advance through experiments and stored in a storage element of the image processing apparatus 100 or the electronic device 200. Alternatively, a plurality of different second preset numbers may be preset in the image processing apparatus 100 or the electronic device 200, each automatically associated with its corresponding first threshold, so that the user can select different first thresholds according to different requirements.
Taking as an example a first threshold of 20% and a primary screening image with 16 million pixels in total, so that the second preset number is 3.2 million: when the number of pixels of the second high-frequency information is less than 3.2 million, the definition of the primary screening image is determined to be less than 20% and the image is excluded; when the number of pixels of the second high-frequency information is greater than 3.2 million, the definition is determined to be greater than 20% and the image is taken as a secondary screening image.
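A minimal NumPy sketch of this definition measure, with a box filter standing in for the shaping low-pass filtering and an assumed amplitude threshold of 10 gray levels for deciding which residual pixels count as high-frequency information; with first_threshold=0.2 it mirrors the 20% worked example above.

```python
import numpy as np

def box_lowpass(img, k=5):
    """Box low-pass filter (a stand-in for the shaping low-pass filtering;
    the kernel size k is an assumption)."""
    img = img.astype(np.float64)
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):          # sum the k*k shifted copies, then average
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def definition(img, amp_thresh=10.0):
    """Subtract the low-pass image to get the high-frequency information,
    then return the fraction of pixels whose residual magnitude exceeds
    amp_thresh -- high-frequency pixel count over total pixel count."""
    high = img.astype(np.float64) - box_lowpass(img)
    return np.count_nonzero(np.abs(high) > amp_thresh) / img.size

def is_secondary_screening(img, first_threshold=0.2):
    """Keep a primary screening image when its definition exceeds the
    first threshold (20% in the worked example above)."""
    return definition(img) > first_threshold
```

A flat image has definition 0 (no high-frequency residual), while a high-contrast checkerboard scores near 1.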
It should be noted that, compared with synthesizing an output image from blurred preview frames, restoring the image to be restored with a screened high-definition image yields a restored image with higher definition, that is, higher image quality.
There may be one or more secondary screening images. When there are multiple secondary screening images, that is, when multiple primary screening images have a definition greater than the first threshold, step 023 is executed: the similarity between the face in each secondary screening image and the face in the image to be restored is detected, and the secondary screening image with the highest similarity is taken as the reference image.
When there is only one secondary screening image, that is, only one primary screening image has a definition greater than the first threshold, in some embodiments step 023 is still executed as above. In other embodiments, step 023 is skipped and the single secondary screening image is directly used as the reference image, which improves image processing efficiency and speed.
Referring to fig. 14 and 15, step 023 includes:
0231: respectively carrying out image preprocessing on the secondary screening image and the image to be restored;
0232: respectively performing face feature extraction on the preprocessed secondary screening image and the image to be restored by using the convolution layer and the pooling layer to obtain a first feature image corresponding to the secondary screening image and a second feature image corresponding to the image to be restored;
0233: classifying each feature in the first feature image and each feature in the second feature image by using a full-connection layer, and performing vectorization representation respectively;
0234: calculating the difference between the feature vector of each category in the first feature image and the feature vector of the corresponding category in the second feature image to obtain a plurality of differences corresponding to a plurality of categories; and
0235: calculating a comprehensive difference between each secondary screening image and the image to be restored according to the plurality of differences corresponding to the plurality of categories, and taking the secondary screening image with the smallest comprehensive difference as the reference image.
In some embodiments, the detection unit 123 further includes a first processing subunit 1231, a second processing subunit 1232, a third processing subunit 1233, a fourth processing subunit 1234, and a fifth processing subunit 1235. Steps 0231 to 0235 may be performed by the first to fifth processing subunits, respectively. That is, the first processing subunit 1231 may be configured to perform image preprocessing on the secondary screening images and the image to be restored, respectively. The second processing subunit 1232 may be configured to perform face feature extraction on the preprocessed secondary screening images and image to be restored using the convolution layers and pooling layers, so as to obtain a first feature image corresponding to each secondary screening image and a second feature image corresponding to the image to be restored. The third processing subunit 1233 may be configured to classify each feature in the first feature image and each feature in the second feature image using the full-connection layer and represent them as vectors. The fourth processing subunit 1234 may be configured to calculate the difference between the feature vector of each category in the first feature image and the feature vector of the corresponding category in the second feature image, obtaining a plurality of differences corresponding to the plurality of categories. The fifth processing subunit 1235 may be configured to calculate a comprehensive difference between each secondary screening image and the image to be restored according to the plurality of differences, and take the secondary screening image with the smallest comprehensive difference as the reference image.
Referring to fig. 3, in some embodiments, steps 0231 to 0235 may all be executed by the processor 230, that is, the processor 230 may further be configured to: perform image preprocessing on the secondary screening images and the image to be restored, respectively; perform face feature extraction on the preprocessed secondary screening images and image to be restored using the convolution layer and the pooling layer to obtain a first feature image corresponding to each secondary screening image and a second feature image corresponding to the image to be restored; classify each feature in the first feature image and each feature in the second feature image using the full-connection layer and represent them as vectors; calculate the difference between the feature vector of each category in the first feature image and the feature vector of the corresponding category in the second feature image to obtain a plurality of differences corresponding to a plurality of categories; and calculate a comprehensive difference between each secondary screening image and the image to be restored according to the plurality of differences, taking the secondary screening image with the smallest comprehensive difference as the reference image.
Specifically, all the obtained secondary screening images and the image to be restored are first preprocessed: Gaussian noise is filtered out of each image by a Gaussian filter, making the images smoother and preventing noise specks and burrs from interfering with subsequent processing. Face feature extraction is then performed on the preprocessed secondary screening images and image to be restored to obtain a first feature image corresponding to each secondary screening image and a second feature image corresponding to the image to be restored, and each feature in the first feature image and in the second feature image is classified and represented as a vector. Specifically, as shown in fig. 16, the preprocessed secondary screening image passes through multiple convolution and pooling operations; the convolution layers and pooling layers extract the facial features of the secondary screening image to obtain the first feature image. The last convolution layer performs a final convolution on the feature image output by the preceding convolution and pooling layers and outputs the resulting first feature image to the full-connection layer, which classifies each feature in the first feature image and represents it as a vector. The feature vectors of the image to be restored are extracted in the same way, and details are not repeated here.
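The convolution → pooling → full-connection flow just described can be sketched in plain NumPy. The weights below are random and untrained, so the resulting 8-dimensional vector only illustrates the data flow (two convolution/pooling stages feeding a full-connection projection), not a usable face descriptor; the layer counts and kernel sizes are assumptions.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive single-channel 'valid' 2-D convolution."""
    kh, kw = kernel.shape
    h = img.shape[0] - kh + 1
    w = img.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def maxpool2(x):
    """2x2 max pooling with stride 2 (odd remainders are trimmed)."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    x = x[:h, :w]
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def extract_feature_vector(img, rng, out_dim=8):
    """Two convolution+pooling stages, then a 'full-connection layer'
    projecting the flattened feature image to a feature vector."""
    x = img.astype(np.float64) / 255.0
    for _ in range(2):
        kernel = rng.standard_normal((3, 3))
        x = np.maximum(conv2d_valid(x, kernel), 0.0)  # convolution + ReLU
        x = maxpool2(x)                               # pooling layer
    flat = x.ravel()
    w_fc = rng.standard_normal((out_dim, flat.size))  # full-connection weights
    return w_fc @ flat
```

The same extractor is applied to the secondary screening image and to the image to be restored, so the two resulting vectors are directly comparable.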
After the feature vectors of the first feature image corresponding to each secondary screening image and of the second feature image corresponding to the image to be restored are obtained, the difference between the feature vector of each category in each first feature image and the feature vector of the corresponding category in the second feature image is calculated. For example, the feature vector representing the width of the eyes in the first feature image and the one representing the width of the eyes in the second feature image are selected, and the difference between the two vectors is calculated; or the feature vector representing the height of the nose bridge in the first feature image and the one in the second feature image are selected, and their difference is calculated.
A comprehensive difference between each secondary screening image and the image to be restored is then calculated from the plurality of differences corresponding to the plurality of categories, and the secondary screening image with the smallest comprehensive difference is taken as the reference image. In some embodiments, the Euclidean distance may be used to calculate the comprehensive difference. For example, suppose the categories of feature vectors include eyes, nose, mouth, and ears; the feature vector representing the eyes is A in the first feature image and A0 in the second feature image; the nose, B and B0; the mouth, C and C0; and the ears, D and D0. The comprehensive difference L is then the arithmetic square root of the sum of the squared differences between feature vectors of the same category on the first and second feature images, expressed mathematically as:

L = √((A − A0)² + (B − B0)² + (C − C0)² + (D − D0)²)
The smaller the calculated Euclidean distance, the smaller the comprehensive difference, that is, the more similar the face in the secondary screening image is to the face in the image to be restored; the secondary screening image with the smallest Euclidean distance is therefore selected as the reference image. Of course, the cosine distance, the Mahalanobis distance, or the Pearson correlation coefficient may also be used to calculate the comprehensive difference, and no limitation is made here.
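The selection rule can be written down directly: per-category feature vectors (the eyes/nose/mouth/ears categories from the example above) are compared with the Euclidean formula, and the candidate with the smallest comprehensive difference wins. The feature dictionaries in the test are toy hand-made values, not output of a real network.

```python
import numpy as np

def comprehensive_difference(feats_a, feats_b):
    """Euclidean comprehensive difference: the square root of the summed
    squared per-category feature-vector differences."""
    total = 0.0
    for category, vec_a in feats_a.items():
        d = np.asarray(vec_a, dtype=float) - np.asarray(feats_b[category], dtype=float)
        total += float(d @ d)
    return total ** 0.5

def pick_reference(candidates, target):
    """Return the candidate (a per-category feature dict for one secondary
    screening image) with the smallest comprehensive difference to the
    target (the image to be restored)."""
    return min(candidates, key=lambda c: comprehensive_difference(c, target))
```

Swapping in the cosine distance or another metric mentioned above would only change `comprehensive_difference`; the argmin selection stays the same.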
It should be noted that, in some embodiments, the image preprocessing may be skipped: face feature extraction is performed directly on all the secondary screening images and the image to be restored to obtain the first feature images and the second feature image, and the subsequent steps are the same as in the embodiments above. Skipping preprocessing increases the overall speed of image processing and improves user experience.
Referring to fig. 4, 17 and 18, step 03 includes:
031: performing content generation processing on the image to be restored through a content generation network so as to retain content features of the image to be restored;
032: extracting texture features in the reference image by using a texture generation network; and
033: and mapping the texture features to the content features in the image to be repaired to obtain a repaired image.
In some embodiments, the processing module 13 further includes a first generating unit 131, a second generating unit 132, and a repairing unit 133. Step 031 may be performed by the first generating unit 131, step 032 by the second generating unit 132, and step 033 by the repairing unit 133. That is, the first generating unit 131 may be configured to perform content generation processing on the image to be restored through the content generation network so as to retain the content features of the image to be restored; the second generating unit 132 may be configured to extract texture features from the reference image using the texture generation network; and the repairing unit 133 may be configured to map the texture features onto the content features of the image to be restored to obtain the restored image.
Referring to fig. 3, in some embodiments, steps 031, 032 and 033 may all be implemented by the processor 230, that is, the processor 230 may further be configured to: perform content generation processing on the image to be restored through a content generation network so as to retain the content features of the image to be restored; extract texture features from the reference image using a texture generation network; and map the texture features onto the content features of the image to be restored to obtain a restored image.
Specifically, content generation processing is performed on the image to be restored through the content generation network to retain the content features of the image to be restored. For example, fig. 19 shows the image to be restored acquiring content features through the content generation network: the image is convolved four times to obtain a plurality of first feature images, the fourth (i.e., last) convolution layer performs a final convolution on the first feature image output by the third convolution layer and passes the result to the full-connection layer, the full-connection layer produces the feature vector of the image to be restored, and the feature vector is then deconvolved four times to obtain a content image carrying the content features of the image to be restored. The content image contains all content features of the image to be restored, such as the positions of the eyes and the positions of the eyebrows, but the contour features of the facial features in it are blurred, such as the shape of the eye sockets and the thickness of the eyebrows. The number of convolutions and the number of deconvolutions may each be any natural number greater than or equal to 1, for example 3, 5, 7, or 8.
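The behavior described here — coarse content such as feature positions survives the encode/decode round trip while fine contours blur — can be imitated with a weight-free toy: four stride-2 average "convolutions" followed by four nearest-neighbour "deconvolutions". A real content generation network uses learned kernels; this NumPy sketch only illustrates why the output keeps positions but loses detail.

```python
import numpy as np

def downsample2(x):
    """Stride-2 average pooling: a crude stand-in for one convolution layer."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    x = x[:h, :w]
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2(x):
    """Nearest-neighbour upsampling: a crude stand-in for one deconvolution."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def content_pass(img, depth=4):
    """Four downsamplings then four upsamplings, mirroring the four
    convolutions and four deconvolutions of fig. 19: coarse content
    (where the bright/dark regions sit) survives, fine texture does not."""
    x = img.astype(np.float64)
    for _ in range(depth):
        x = downsample2(x)
    for _ in range(depth):
        x = upsample2(x)
    return x
```

Feeding the same image plus a fine checkerboard pattern returns the same output shifted by the pattern's mean — the high-frequency detail is averaged away, which is exactly the "blurred contours" effect described above.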
Texture generation processing is performed on the reference image through the texture generation network to obtain the texture feature information of the reference image. For example, fig. 20 shows the reference image acquiring texture features through the texture generation network: the reference image is convolved six times to obtain a plurality of second feature images, the sixth (i.e., last) convolution layer performs a final convolution on the second feature image output by the fifth convolution layer and passes the result to the full-connection layer, the full-connection layer produces the feature vector of the reference image, and the texture features of the reference image are obtained from this feature vector. The texture features include contour information of the facial features of the reference image, such as the contour of the eye sockets and the contour of the eyebrows. The number of convolutions may be any natural number greater than or equal to 1, for example 3, 5, 7, or 8.
Referring to fig. 21, all texture features are mapped onto the content features of the image to be restored to obtain a restored image with clear facial contours. For example, the texture information representing the eye-socket shape obtained from the reference image is mapped onto the corresponding content feature, the position of the eyes, so that the sharpness of the eye contour increases; the texture information representing the eyebrow shape is mapped onto the position of the eyebrows, so that the sharpness of the eyebrow contour increases. It should be noted that the content features of the restored image all come from the image to be restored; that is, no content feature absent from the image to be restored will appear in the restored image. For example, if a feature I1 exists in the reference image but not in the image to be restored, the restored image will not contain I1, because all of its content features come from the image to be restored.
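One classical way to illustrate "mapping texture features onto content features" is frequency splitting: add the reference image's high-frequency residual (its contour and texture detail) onto the blurred content image. This sketch assumes the two faces are already aligned and the same size, and stands in for the learned mapping the networks above perform; it is an intuition aid, not the patent's method.

```python
import numpy as np

def box_lowpass(img, k=5):
    """Box low-pass filter used to split an image into low and high bands."""
    img = img.astype(np.float64)
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def map_texture(content_img, reference_img, k=5):
    """Add the reference's high-frequency residual (its 'texture') onto the
    content image; assumes aligned, same-size grayscale inputs."""
    texture = reference_img.astype(np.float64) - box_lowpass(reference_img, k)
    restored = content_img.astype(np.float64) + texture
    return np.clip(restored, 0.0, 255.0)
```

With a flat reference there is no texture to transfer and the content image passes through unchanged; a detailed reference imprints its high-frequency pattern onto the result.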
By automatically selecting from the album, as the reference image, a high-definition face most similar to the blurred face to be restored, the reference-image-based restoration method can, compared with traditional image enhancement methods, reconstruct blurred facial contour features well, effectively improving the definition of the face image while enhancing the definition of the facial contours.
Referring to fig. 4 and fig. 22, the present application further provides a computer readable storage medium 300, on which a computer program 310 is stored, and when the computer program is executed by the processor 230, the steps of the image processing method according to any of the above embodiments are implemented.
For example, in the case where the program is executed by a processor, the steps of the following image processing method are implemented:
01: acquiring an image to be restored, wherein the image to be restored comprises a human face;
02: finding out an image with the definition greater than a first threshold value and the face most similar to the face in the image to be restored from the album as a reference image; and
03: and processing the image to be repaired according to the reference image to obtain a repaired image.
The computer-readable storage medium 300 may be disposed in the image processing apparatus 100 or the electronic device 200, or disposed in the cloud server, and at this time, the image processing apparatus 100 or the electronic device 200 can communicate with the cloud server to obtain the corresponding computer program 310.
It will be appreciated that the computer program 310 comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, and so on. The computer-readable storage medium may include any entity or device capable of carrying computer program code, such as a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), or a software distribution medium.
The processor 230 may also be referred to as a driver board. The driver board may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, etc.
In the description herein, reference to the description of the terms "one embodiment," "some embodiments," "an illustrative embodiment," "an example," "a specific example" or "some examples" or the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
Although embodiments of the present application have been shown and described above, it is to be understood that the above embodiments are exemplary and not to be construed as limiting the present application, and that changes, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (11)

1. An image processing method, characterized in that the image processing method comprises:
acquiring an image to be restored, wherein the image to be restored comprises a human face;
finding out an image with the definition being greater than a first threshold value and the face being most similar to the face in the image to be restored from the album as a reference image; and
and processing the image to be repaired according to the reference image to obtain a repaired image.
2. The image processing method according to claim 1, wherein the acquiring the image to be restored comprises:
acquiring an original image with a portrait;
acquiring the definition of the original image; and
and determining the original image with the definition smaller than a second threshold as an image to be repaired, wherein the second threshold is smaller than the first threshold.
3. The image processing method according to claim 2, wherein the acquiring an original image having a portrait includes:
and acquiring an original image with a portrait from the photo album in preset time and/or preset scenes.
4. The image processing method according to claim 2, wherein the acquiring an original image having a portrait includes:
in the shooting process of the camera, an original image with a portrait shot by the camera is obtained.
5. The image processing method according to any one of claims 2 to 4, wherein the obtaining of the sharpness of the original image comprises:
performing shaping low-pass filtering on the original image to obtain a filtered image;
acquiring high-frequency information in the original image according to the original image and the filtered image, wherein the high-frequency information is a part far away from zero frequency in a discrete cosine transform coefficient, and the part is used for describing detail information of the original image; and
and acquiring the definition of the original image according to the number of the pixels of the high-frequency information and the number of all the pixels of the original image.
6. The image processing method according to any one of claims 2 to 4, wherein finding out, from the album, an image in which the face is most similar to the face in the image to be restored and whose definition is greater than the first threshold as a reference image comprises:
screening out images containing human faces from the photo album to serve as primary screening images;
screening out images with the definition larger than the first threshold value from the primary screening images to serve as secondary screening images; and
and detecting the similarity between the face in each secondary screening image and the face in the image to be restored, and taking the secondary screening image with the highest similarity as the reference image.
7. The image processing method according to claim 6, wherein the detecting a similarity between a face in each of the secondary filtered images and a face in the image to be restored, and using the secondary filtered image with the highest similarity as the reference image comprises:
respectively carrying out image preprocessing on the secondary screening image and the image to be repaired;
respectively performing face feature extraction on the preprocessed secondary screening image and the image to be restored by utilizing a convolutional layer and a pooling layer to obtain a first feature image corresponding to the secondary screening image and a second feature image corresponding to the image to be restored;
classifying each feature in the first feature image and each feature in the second feature image by using a full-connection layer, and performing vectorization representation respectively;
calculating the difference between the feature vector of each category in the first feature image and the feature vector of the corresponding category in the second feature image to obtain a plurality of differences corresponding to a plurality of categories; and
and calculating a comprehensive difference between the secondary screening image and the image to be restored according to a plurality of differences corresponding to a plurality of categories, and taking the secondary screening image with the minimum comprehensive difference as the reference image.
8. The image processing method according to any one of claims 1 to 4, wherein the processing the image to be restored according to the reference image to obtain a restored image comprises:
performing content generation processing on the image to be restored through a content generation network so as to retain content features of the image to be restored;
extracting texture features in the reference image by using a texture generation network; and
and mapping the texture features to content features in the image to be repaired to obtain the repaired image.
9. An image processing apparatus characterized by comprising:
the system comprises an acquisition module, a restoration module and a restoration module, wherein the acquisition module is used for acquiring an image to be restored, and the image to be restored comprises a human face;
the screening module is used for finding out an image with the definition larger than a first threshold value and the face most similar to the face in the image to be restored in the photo album as a reference image; and
and the processing module is used for processing the image to be repaired according to the reference image so as to obtain a repaired image.
10. An electronic device, characterized in that the electronic device comprises a housing, an imaging device and a processor, wherein the imaging device and the processor are both mounted on the housing, the imaging device is used for taking images, and the processor is used for implementing the image processing method according to any one of claims 1-8.
11. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the image processing method of any one of claims 1 to 8.
CN201911253780.XA 2019-12-09 2019-12-09 Image processing method, image processing apparatus, electronic device, and readable storage medium Pending CN111105369A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911253780.XA CN111105369A (en) 2019-12-09 2019-12-09 Image processing method, image processing apparatus, electronic device, and readable storage medium

Publications (1)

Publication Number Publication Date
CN111105369A true CN111105369A (en) 2020-05-05

Family

ID=70422584

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911253780.XA Pending CN111105369A (en) 2019-12-09 2019-12-09 Image processing method, image processing apparatus, electronic device, and readable storage medium

Country Status (1)

Country Link
CN (1) CN111105369A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021109680A1 (en) * 2019-12-06 2021-06-10 中兴通讯股份有限公司 Facial image processing method and apparatus, computer device, and medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100165122A1 (en) * 2008-12-31 2010-07-01 Stmicroelectronics S.R.L. Method of merging images and relative method of generating an output image of enhanced quality
CN105389780A (en) * 2015-10-28 2016-03-09 维沃移动通信有限公司 Image processing method and mobile terminal
CN107944399A (en) * 2017-11-28 2018-04-20 广州大学 A kind of pedestrian's recognition methods again based on convolutional neural networks target's center model
CN109360170A (en) * 2018-10-24 2019-02-19 北京工商大学 Face restorative procedure based on advanced features
CN109919830A (en) * 2019-01-23 2019-06-21 复旦大学 It is a kind of based on aesthetic evaluation band refer to human eye image repair method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zheng Shaohua et al., "A no-reference real-time evaluation method for the sharpness of fundus imaging", Journal of Electronic Measurement and Instruments, 31 March 2013 (2013-03-31), pages 242-243 *

Similar Documents

Publication Publication Date Title
Jin et al. Unsupervised night image enhancement: When layer decomposition meets light-effects suppression
CN108335279B (en) Image fusion and HDR imaging
US7933454B2 (en) Class-based image enhancement system
Buades et al. A note on multi-image denoising
CN108241645B (en) Image processing method and device
US20110268359A1 (en) Foreground/Background Segmentation in Digital Images
US20130170755A1 (en) Smile detection systems and methods
CN110807759B (en) Method and device for evaluating photo quality, electronic equipment and readable storage medium
JP2002245471A (en) Photograph finishing service for double print accompanied by second print corrected according to subject contents
Pan et al. MIEGAN: Mobile image enhancement via a multi-module cascade neural network
CN111008935B (en) Face image enhancement method, device, system and storage medium
CN111028170B (en) Image processing method, image processing apparatus, electronic device, and readable storage medium
CN112602088B (en) Method, system and computer readable medium for improving quality of low light images
Yu et al. Identifying photorealistic computer graphics using convolutional neural networks
CN111031241B (en) Image processing method and device, terminal and computer readable storage medium
CN115063331B (en) Multi-scale block LBP operator-based ghost-free multi-exposure image fusion method
CN112036209A (en) Portrait photo processing method and terminal
CN112508815A (en) Model training method and device, electronic equipment and machine-readable storage medium
Florea et al. Directed color transfer for low-light image enhancement
CN111105368B (en) Image processing method and apparatus, electronic device, and computer-readable storage medium
CN110910331B (en) Image processing method, image processing apparatus, electronic device, and computer-readable storage medium
CN111105369A (en) Image processing method, image processing apparatus, electronic device, and readable storage medium
CN111105370B (en) Image processing method, image processing apparatus, electronic device, and readable storage medium
Kınlı et al. Modeling the lighting in scenes as style for auto white-balance correction
CN111062904B (en) Image processing method, image processing apparatus, electronic device, and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination