CN110363729B - Image processing method, terminal equipment and computer readable storage medium - Google Patents

Info

Publication number
CN110363729B
Authority
CN
China
Prior art keywords
image, sub-region, shadow, area
Legal status
Active
Application number
CN201910693817.4A
Other languages
Chinese (zh)
Other versions
CN110363729A (en)
Inventor
常慧
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201910693817.4A
Publication of CN110363729A
Application granted
Publication of CN110363729B

Classifications

    • G06T5/94
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 - Camera processing pipelines; Components thereof

Abstract

The invention relates to the technical field of image processing, and provides an image processing method, a terminal device and a computer-readable storage medium, which are used for solving the problem of poor image quality in the photographing process. The method comprises the following steps: acquiring a first image acquired by a camera according to a first acquisition parameter; identifying a first shadow region in the first image, the first shadow region corresponding to a first object; processing the first shadow region in the first image to obtain a second image; acquiring third images which are acquired by the camera according to N acquisition parameters and are matched with the second image; and determining a target image based on the N third images, wherein a second shadow region in the target image corresponds to the first object and is smaller in area than the first shadow region. Because the target image is matched with the second image, it accurately reflects the shot content, and because the area of the second shadow region is smaller than that of the first shadow region, the shadow area is reduced; that is, the method can reduce the shadow area while accurately reflecting the shot content.

Description

Image processing method, terminal equipment and computer readable storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method, a terminal device, and a computer-readable storage medium.
Background
With the rapid development of computer technology, terminal devices have become increasingly powerful and can provide services in many aspects, bringing convenience to people's life and work. For example, photographing is a common service provided by a terminal device, and a user can take pictures through the terminal device.
At present, in the process of photographing a target object through a terminal device, the terminal device is prone to cast a shadow on the target object, so that the image captured by the camera contains the shadow. Moreover, if the terminal device is moved to avoid the shadow, its photographing range changes, so that the captured image may no longer accurately represent the intended content.
Disclosure of Invention
The embodiment of the invention provides an image processing method, a terminal device and a computer-readable storage medium, which aim to solve the problem that, in the prior art, an image shot by a camera cannot simultaneously achieve a reduced shadow area and an accurate representation of the shot content.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides an image processing method, which is applied to a terminal device including a camera, and the method includes:
acquiring a first image acquired by the camera according to a first acquisition parameter;
identifying a first shadow region in a first image, the first shadow region corresponding to a first object;
processing a first shadow area in the first image to obtain a second image;
acquiring a third image which is acquired by the camera according to N acquisition parameters and is matched with the second image, wherein N is a positive integer;
determining a target image based on the N third images;
wherein an area of a second shadow region in the target image is smaller than an area of the first shadow region, the second shadow region corresponding to the first object.
In a second aspect, an embodiment of the present invention further provides a terminal device, which includes a camera, where the terminal device includes:
the first image acquisition module is used for acquiring a first image acquired by the camera according to first acquisition parameters;
a first shadow determination module to identify a first shadow region in a first image, the first shadow region corresponding to a first object;
the shadow processing module is used for processing a first shadow area in the first image to obtain a second image;
the third image acquisition module is used for acquiring a third image which is acquired by the camera according to N acquisition parameters and is matched with the second image, wherein N is a positive integer;
a target image determination module for determining a target image based on the N third images;
wherein an area of a second shadow region in the target image is smaller than an area of the first shadow region, the second shadow region corresponding to the first object.
In a third aspect, an embodiment of the present invention further provides a terminal device, including: the image processing method comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the steps in the image processing method provided by the embodiment of the invention when executing the computer program.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps in the image processing method provided in the embodiment of the present invention.
In the image processing method provided by the embodiment of the application, a first image acquired by a camera according to a first acquisition parameter is obtained, a first shadow region in the first image is identified, the first shadow region in the first image is processed to obtain a second image, and a target image is determined based on N third images which are acquired by the camera according to N acquisition parameters and matched with the second image. Because the target image is matched with the second image, it accurately reflects the shot content; and because the area of the second shadow region, which corresponds to the first object, is smaller than that of the first shadow region, the shadow area of the target image is reduced relative to the first image. That is, the target image obtained by the image processing method provided by the embodiment of the application both reduces the shadow area and accurately reflects the shot content.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive exercise.
FIG. 1 is a flow chart of an image processing method provided by an embodiment of the invention;
fig. 2 is a structural diagram of a terminal device provided in an embodiment of the present invention;
FIG. 3 is a second flowchart of an image processing method according to an embodiment of the present invention;
FIG. 4 is a diagram of an application scenario of an image processing method according to an embodiment of the present invention;
fig. 5 is one of schematic diagrams of a terminal device provided in an embodiment of the present invention;
fig. 6 is a schematic diagram of a hardware structure of a terminal device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, in one embodiment, there is provided an image processing method applied to a terminal device including a camera, the method including:
step 101: and acquiring a first image acquired by the camera according to the first acquisition parameter.
Under different acquisition parameters, the acquisition range of the camera and the acquired content differ. For example, in one example, the first acquisition parameter may include a first angle and a first position; with a different angle or/and a different position, the acquired image is different. It is understood that the first angle is the angle of the camera, and the first position is the position of the camera relative to the main body of the terminal device. In an embodiment, the camera is used to acquire an image under the first acquisition parameter in advance to determine the first image, and the first image is then obtained in the image processing process. It is understood that the first image is an expected image, that is, the content in the first image is the content that the user expects to shoot.
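As a minimal illustration, an acquisition parameter set can be modeled as a small value object; the field names and units below are assumptions, since the embodiment only states that a parameter set includes an angle and a position:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AcquisitionParams:
    """One acquisition parameter set: the camera angle and the camera's
    position relative to the terminal body (field names/units assumed)."""
    angle_deg: float      # rotation of the camera at the rotatable joint
    extension_mm: float   # how far the telescopic rod extends from the body

# The first acquisition parameter of step 101, with illustrative values.
first_params = AcquisitionParams(angle_deg=0.0, extension_mm=0.0)
```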
Step 102: a first shadow region in the first image is identified.
Because the first image is acquired under the first acquisition parameter so as to include the content that the user expects to shoot, shadows may appear in it. Therefore, shadow identification needs to be performed on the first image; that is, a shadow identification algorithm can be used to identify shadows in the first image and determine the first shadow region corresponding to the first object. For example, the first object may be the camera that takes the first image: when the camera captures an image of the target object, it casts a shadow on the target object, so the captured first image contains a shadow corresponding to the camera, and shadow identification can be performed on the first image to determine the first shadow region corresponding to the camera. For another example, the first object may be the terminal device that captures the first image: in the process of capturing an image of the target object through the camera of the terminal device, the terminal device casts a shadow on the target object, so the captured first image contains a shadow corresponding to the terminal device, and shadow identification can be performed on the first image to determine the first shadow region corresponding to the terminal device.
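The embodiment does not fix a particular shadow identification algorithm. As a hedged sketch, one common heuristic flags pixels that are much darker than the image's typical brightness; the luminance conversion and threshold ratio below are assumptions, not the patented method:

```python
import numpy as np

def identify_shadow_region(image: np.ndarray, dark_ratio: float = 0.6) -> np.ndarray:
    """Return a boolean mask marking the assumed first shadow region.

    Heuristic: a pixel belongs to the shadow if its luminance falls well
    below the median luminance of the image (dark_ratio is illustrative).
    """
    gray = image.mean(axis=2) if image.ndim == 3 else image.astype(np.float64)
    return gray < dark_ratio * np.median(gray)
```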
Step 103: and processing the first shadow area in the first image to obtain a second image.
After the first shadow region is determined, image processing can be performed on the first shadow region in the first image to obtain a second image, and the second image corresponding to the first image is then used in the image matching process.
Step 104: and acquiring a third image which is acquired by the camera according to the N acquisition parameters and is matched with the second image.
Wherein N is a positive integer. During shooting, the camera can be controlled to acquire third images under different acquisition parameters; the third images correspond to the acquisition parameters one to one, and the N third images acquired under the N acquisition parameters are all matched with the second image. In one example, the third image may be a preview image; that is, during shooting, the image can be previewed on the screen of the terminal device, and such an image is a preview image.
Step 105: the target image is determined based on the N third images.
The area of the second shadow area in the target image is smaller than that of the first shadow area, and the second shadow area corresponds to the first object.
The target image is one of the N third images; it is matched with the second image, and the area of the second shadow region in the target image is smaller than that of the first shadow region, that is, the shadow area is reduced.
In the image processing method provided by the embodiment of the application, a first image acquired by a camera according to a first acquisition parameter is obtained, a first shadow region in the first image is identified, the first shadow region in the first image is processed to obtain a second image, and a target image is determined based on N third images which are acquired by the camera according to N acquisition parameters and matched with the second image. Because the target image is matched with the second image, it accurately reflects the shot content; and because the area of the second shadow region, which corresponds to the first object, is smaller than that of the first shadow region, the shadow area of the target image is reduced relative to the first image. That is, the target image obtained by the image processing method provided by the embodiment of the application both reduces the shadow area and accurately reflects the shot content.
In the embodiment of the present invention, the terminal device may include, but is not limited to, a mobile phone, a tablet personal computer, a laptop computer, a personal digital assistant (PDA), a mobile Internet device (MID), a wearable device, or the like.
In one embodiment, the acquisition parameters include angle and position. The angle is the angle of the camera, and the position is the position of the camera relative to the main body of the terminal device (the terminal device includes the main body and the camera mounted to the main body). As shown in fig. 2, the terminal device includes a camera 201, a first rotatable joint 202, a second rotatable joint 203, a telescopic rod 204, and a main body 205; the camera 201 is mounted to one end of the telescopic rod 204 through the first rotatable joint 202, and the other end of the telescopic rod 204 is mounted to the main body 205 through the second rotatable joint 203. In this way, the camera 201 can be rotated by the first rotatable joint 202, thereby changing the angle. In addition, the telescopic rod 204 can move away from or toward the main body 205 to change the position of the camera 201, and under the action of the second rotatable joint 203, the telescopic rod 204 can rotate relative to the main body 205 to change the position of the camera 201. By adjusting the angle or position, the range acquired by the camera 201 changes, and the acquired images differ.
In one embodiment, the target image is the third image with the smallest area of the second shaded region in the N third images.
That is, the third image with the smallest area of the second shadow region is determined from the N third images as the target image. This ensures that the area of the second shadow region corresponding to the first object in the obtained target image is the smallest, further reducing the shadow of the first object and improving the quality of the target image.
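A sketch of this selection step, assuming the same shadow-identification routine sketched earlier is reused to measure the second shadow region in each candidate:

```python
def select_target_image(third_images, shadow_mask_fn=identify_shadow_region):
    """From the N matched third images, return the one whose second
    shadow region covers the fewest pixels (smallest area)."""
    return min(third_images, key=lambda img: int(shadow_mask_fn(img).sum()))
```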
In one embodiment, acquiring a third image acquired by the camera with the N acquisition parameters and matched with the second image comprises: respectively calculating the similarity of the second image and M preview images, wherein M is more than or equal to N, and the M preview images are preview images acquired by the camera according to M acquisition parameters; and obtaining N third images with the similarity larger than a preset similarity threshold in the M preview images. That is, in the present embodiment, please refer to fig. 3, which provides an image processing method applied to a terminal device including a camera, the method includes:
step 301: and acquiring a first image acquired by the camera according to the first acquisition parameter.
Step 302: a first shadow region in the first image is identified, the first shadow region corresponding to the first object.
Step 303: and processing the first shadow area in the first image to obtain a second image.
Steps 301 to 303 correspond to steps 101 to 103 one to one, and are not described herein again.
Step 304: and respectively calculating the similarity of the second image and the M preview images.
M is larger than or equal to N, and the M preview images are preview images acquired by the camera according to M acquisition parameters; the preview images correspond to the acquisition parameters one to one, that is, each preview image corresponds to one acquisition parameter. By controlling the camera to rotate, the camera can acquire and preview images under the M acquisition parameters respectively, so that M preview images can be obtained. Then the similarity between the second image and each of the M preview images can be calculated; since there are M preview images, M similarities are obtained. The similarity represents the degree of similarity between images: the higher the similarity, the closer the contents of the images. Since an image can be represented as a pixel value matrix, in one example the pixel value matrix of an image may be represented as a vector, and the similarity between two images is obtained by calculating the cosine distance between their vectors.
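A sketch of the vector-based similarity described above; it reports cosine similarity as the score (higher means more similar, i.e., a smaller cosine distance) and assumes the two images have the same dimensions:

```python
import numpy as np

def image_similarity(img_a: np.ndarray, img_b: np.ndarray) -> float:
    """Cosine similarity of two images of equal shape, with each pixel
    value matrix flattened into a vector."""
    a = img_a.astype(np.float64).ravel()
    b = img_b.astype(np.float64).ravel()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0
```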
Step 305: and obtaining N third images with the similarity larger than a preset similarity threshold in the M preview images.
After the similarities between the second image and the M preview images are calculated, the N third images can be determined from the M preview images according to the similarities. Specifically, a preview image, among the M preview images, whose similarity with the second image is greater than the preset similarity threshold is taken as a third image. This ensures that the similarity between each third image and the second image is greater than the preset similarity threshold, so the third images are close to the second image and reflect the shot content more accurately, which in turn provides a guarantee for subsequently determining the target image: the determined target image is closer to the second image, that is, the target image reflects the shot content more accurately.
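Built on the helper above, the screening of the N third images from the M preview images can then be sketched as a simple threshold filter (again assuming the previews share the second image's size):

```python
def select_third_images(second_image, preview_images, sim_threshold):
    """Keep the preview images whose similarity with the second image
    exceeds the preset similarity threshold."""
    return [p for p in preview_images
            if image_similarity(second_image, p) > sim_threshold]
```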
Step 306: the target image is determined based on the N third images.
The N third images are images acquired by the camera according to N acquisition parameters and matched with the second image, the area of a second shadow region in the target image is smaller than that of the first shadow region, N is a positive integer, and the second shadow region corresponds to the first object.
Step 306 corresponds to step 105, and is not described herein again.
In one embodiment, the second image comprises T sub-image regions, T being an integer greater than 1;
in this embodiment, the calculating the similarity between the second image and the M preview images includes:
for an I-th preview image acquired by a camera, determining a first sub-area corresponding to each sub-image area in the I-th preview image;
the distance between the position coordinate of the sub-image area in the second image and the position coordinate of the corresponding first sub-area in the I-th preview image is smaller than or equal to a preset distance, I is an integer and is not larger than M, and the I-th preview image corresponds to the I-th acquisition parameter;
calculating the similarity between each sub-image area and the corresponding first sub-area in the I-th preview image;
the similarity between the second image and the I-th preview image comprises the similarity between each sub-image area in the second image and the corresponding first sub-area in the I-th preview image.
In this embodiment, acquiring N third images of which the similarity is greater than a preset similarity threshold in the M preview images includes:
and acquiring N third images from the M preview images, wherein the similarity between each sub-image area in the second image and the corresponding first sub-area in the third image is greater than a preset similarity threshold.
The camera is controlled to rotate or move, so that the angle or position of the camera changes; that is, images are acquired under different acquisition parameters. The camera acquires preview images under the M acquisition parameters, one preview image for each acquisition parameter. In this embodiment, each time a preview image is acquired, the similarity between the second image and the currently acquired preview image can be calculated, until the M-th preview image is acquired and the similarity between the second image and the M-th preview image is calculated; the similarities between the second image and the M preview images are thereby obtained.
That is, in the process of calculating the similarities between the second image and the M preview images in this embodiment, when the I-th preview image is currently acquired by the camera, the similarity between the second image and the I-th preview image can be calculated. Specifically, since the second image includes T sub-image regions, each sub-image region is compared with the I-th preview image; that is, the similarity between each sub-image region and its corresponding first sub-region in the I-th preview image is first calculated. In this way, the similarity between each sub-image region and its corresponding first sub-region in each of the M preview images can be obtained, and then N third images whose similarity with each sub-image region in the second image is greater than the preset similarity threshold can be determined from the M preview images; that is, the similarity between each sub-image region in the second image and the corresponding first sub-region in a third image is greater than the preset similarity threshold.
In addition, the distance between the position coordinate of a sub-image region in the second image and the position coordinate of the corresponding first sub-region in the I-th preview image is smaller than or equal to the preset distance, which ensures that the similarity calculation for a sub-image region is performed against sub-regions of the I-th preview image near the position coordinate corresponding to that sub-image region, improving the accuracy of the similarity calculation. For example, the preset distance may be 0, in which case the position coordinate of the sub-image region in the second image is the same as the position coordinate of the corresponding first sub-region in the I-th preview image. For another example, the preset distance may be 2, in which case the distance between the position coordinate of the sub-image region in the second image and the position coordinate of the corresponding first sub-region in the I-th preview image is smaller than or equal to 2.
In an example, the distance between the position coordinate of the sub-image region in the second image and the position coordinate of the corresponding first sub-region in the I-th preview image being smaller than or equal to the preset distance can be understood as follows: the distance between the position coordinate, in the second image, of a first pixel point of the sub-image region and the position coordinate, in the I-th preview image, of a second pixel point of the corresponding first sub-region is smaller than or equal to the preset distance, where the first pixel point can be any pixel point in the sub-image region, and the position coordinate of the first pixel point within the sub-image region is the same as the position coordinate of the second pixel point within the first sub-region. For example, suppose the pixel value matrix of the I-th preview image is

$$A_I = \begin{pmatrix} AI_{11} & AI_{12} & AI_{13} & AI_{14} \\ AI_{21} & AI_{22} & AI_{23} & AI_{24} \\ AI_{31} & AI_{32} & AI_{33} & AI_{34} \\ AI_{41} & AI_{42} & AI_{43} & AI_{44} \end{pmatrix},$$

the pixel value matrix of the first image is

$$B = \begin{pmatrix} B_{11} & B_{12} & B_{13} & B_{14} \\ B_{21} & B_{22} & B_{23} & B_{24} \\ B_{31} & B_{32} & B_{33} & B_{34} \\ B_{41} & B_{42} & B_{43} & B_{44} \end{pmatrix},$$

the pixel value matrix $B_1$ of a sub-image region is

$$B_1 = \begin{pmatrix} B_{23} & B_{24} \\ B_{33} & B_{34} \end{pmatrix},$$

and the pixel value matrix of a first sub-region corresponding to this sub-image region is

$$\begin{pmatrix} AI_{23} & AI_{24} \\ AI_{33} & AI_{34} \end{pmatrix}.$$

If the first pixel point is $B_{23}$, its position coordinate within the sub-image region is (1, 1); the second pixel point is then $AI_{23}$, whose coordinate within the first sub-region is (1, 1), the same as the position coordinate of the first pixel point $B_{23}$ within the sub-image region. The distance between the position coordinate of the first pixel point $B_{23}$ in the first image, i.e., (2, 3), and the position coordinate of the second pixel point $AI_{23}$ in the I-th preview image, i.e., (2, 3), is 0. The pixel value matrix of another first sub-region corresponding to the sub-image region is

$$\begin{pmatrix} AI_{33} & AI_{34} \\ AI_{43} & AI_{44} \end{pmatrix}.$$

For this sub-region the second pixel point is $AI_{33}$, whose coordinate within the first sub-region is (1, 1), the same as the position coordinate of the first pixel point $B_{23}$ within the sub-image region. The distance between the position coordinate of the first pixel point $B_{23}$ in the first image, i.e., (2, 3), and the position coordinate of the second pixel point $AI_{33}$ in the I-th preview image, i.e., (3, 3), is 1.
In an example, the number of first sub-regions corresponding to each sub-image region may be at least two; that is, each sub-image region has at least two corresponding first sub-regions in the I-th preview image, and a similarity is calculated with each of them, so each sub-image region corresponds to at least two similarities, with the similarities corresponding to the first sub-regions one to one.
In one example, the first similarity of each sub-image region in the second image with the corresponding first sub-region in the third image is greater than the preset similarity threshold. The similarity between the second image and the I-th preview image comprises the first similarity corresponding to each sub-image region in the second image, where the first similarity is the maximum of the similarities between a sub-image region and its corresponding at least two first sub-regions. For example, if the sub-image region a corresponds to two similarities, a similarity S1 and a similarity S2, with S1 > S2, then the first similarity of the sub-image region a is S1, and the similarity between the second image and the I-th preview image includes the similarity S1. By performing the above process for each sub-image region of the second image, it can be determined that the similarity between the second image and the I-th preview image comprises the first similarity corresponding to each sub-image region in the second image.
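A sketch of the per-sub-region matching: for one sub-image region at a known position, every first sub-region whose position lies within the preset distance is scored, and the maximum score is returned as the first similarity. The use of Chebyshev distance for the coordinate constraint is an assumption (the worked example above is consistent with it):

```python
def best_subregion_similarity(preview, sub_region, top_left, preset_distance):
    """Return the first similarity of one sub-image region: the maximum
    similarity over all first sub-regions of the I-th preview whose
    top-left corner is within preset_distance of top_left = (row, col)."""
    h, w = sub_region.shape[:2]
    r0, c0 = top_left
    best = 0.0
    for dr in range(-preset_distance, preset_distance + 1):
        for dc in range(-preset_distance, preset_distance + 1):
            r, c = r0 + dr, c0 + dc
            if r < 0 or c < 0 or r + h > preview.shape[0] or c + w > preview.shape[1]:
                continue  # candidate sub-region would fall outside the preview
            best = max(best, image_similarity(sub_region, preview[r:r + h, c:c + w]))
    return best
```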
In this embodiment, when the first similarity corresponding to each sub-image region in the second image is greater than the preset similarity threshold, the second image is matched with the preview image, so that the N third images matched with the second image can be screened from the M preview images. Because similarity calculation is performed between each sub-image region and the I-th preview image, the N screened third images are the preview images, among the M preview images, in which the first similarity corresponding to each sub-image region in the second image is greater than the preset similarity threshold. The third images are therefore closer to the second image and represent the shot content more accurately, which provides a guarantee for subsequently determining the target image: the determined target image is closer to the second image, that is, the target image represents the shot content more accurately.
In one embodiment, processing the first shadow region in the first image to obtain the second image comprises: performing region division on the first image to obtain P sub-image regions; and filtering the sub-regions corresponding to the first shadow region in the P sub-image regions to obtain T sub-image regions.
That is, the first image is divided into regions, and the sub-regions corresponding to the first shadow region are filtered out to obtain the T sub-image regions, namely the second image. This supports the subsequent similarity calculation process: performing the similarity calculation region by region can improve the accuracy of the similarity.
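A sketch of the division-and-filtering step; the square tile size and the rule that any overlap with the shadow mask disqualifies a sub-region are illustrative assumptions:

```python
def divide_and_filter(first_image, shadow_mask, tile=64):
    """Divide the first image into P tile-sized sub-image regions and keep
    the T regions that contain no shadow pixels, each with its top-left
    coordinate so that later matching can respect positions."""
    kept = []
    rows, cols = first_image.shape[:2]
    for r in range(0, rows - tile + 1, tile):
        for c in range(0, cols - tile + 1, tile):
            if not shadow_mask[r:r + tile, c:c + tile].any():
                kept.append(((r, c), first_image[r:r + tile, c:c + tile]))
    return kept
```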
In one embodiment, the way of calculating the similarity between the second image and the M preview images respectively includes: and respectively calculating the similarity between a second target sub-image region in the second image and a third target sub-image region of the M preview images to obtain the similarity between the second image and the M preview images, wherein the second target sub-image region is the other region except the first shadow region in the second image, the position coordinate of the second target sub-image region in the second image is the same as the position coordinate of the third target sub-image region in the preview images, and the areas of the second target sub-image region and the third target sub-image region are the same.
That is, the first shadow region neither participates in the similarity calculation nor is divided into sub-regions for similarity calculation; instead, the similarity between the regions of the second image other than the first shadow region and the third target sub-image regions of the preview images is calculated, which reduces the calculation amount and improves the efficiency of determining the target image.
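The same idea in code: a similarity that simply skips the first shadow region, comparing only same-position pixels of the two images (a sketch assuming equal image sizes):

```python
import numpy as np

def masked_similarity(second_image, preview, shadow_mask):
    """Cosine similarity over the second target sub-image region only,
    i.e. all pixels of the second image outside the first shadow region,
    against the pixels at the same positions in the preview."""
    keep = ~shadow_mask
    a = second_image[keep].astype(np.float64).ravel()
    b = preview[keep].astype(np.float64).ravel()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0
```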
In one embodiment, the attribute of the first shadow area in the first image is adjusted to obtain a second image, the first area in the second image is transparent, and the position of the first area in the second image is the same as the position of the first shadow area in the first image.
That is, the region of the second image at the position of the first shadow region can be made transparent, and the transparent region of the second image does not participate in the subsequent similarity calculation, which reduces the calculation amount.
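One way to realize the transparent attribute is an alpha channel, with alpha 0 marking pixels to skip; the RGBA encoding is an assumption, since the embodiment only states that the region becomes transparent:

```python
import numpy as np

def make_shadow_transparent(first_image, shadow_mask):
    """Produce the second image by appending an alpha channel to the
    first image and setting the first shadow region fully transparent."""
    alpha = np.full(first_image.shape[:2], 255, dtype=np.uint8)
    rgba = np.dstack([first_image[..., :3], alpha])
    rgba[shadow_mask, 3] = 0  # transparent pixels are excluded from matching
    return rgba
```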
The following describes the procedure of the image processing method in a specific embodiment.
Take the first object being a camera as an example. The shot object (target object) is laid flat on the desktop, and the camera is positioned above it, for example when shooting a file laid flat on the desktop. At this angle, the camera first takes a first image, which may be referred to as an expected image. Under bright light there is a shadow cast by the camera on the shot object, that is, the shadow of the camera exists in the first image.
The camera 201 can be adjusted through the first rotatable joint 202, the second rotatable joint 203 and the telescopic rod 204 in fig. 2, so as to adjust the position and angle of the camera 201. The shadow portion of the camera 201 in the first image can be roughly confirmed by moving the camera 201 and observing the change of the shadow in the preview image.
Then the expected image can be processed: the first shadow region of the camera is found by means of a shadow identification algorithm, and the first shadow region part is removed to obtain a new expected image, that is, the second image. The second image is used for matching in the subsequent matching process; that is, as long as the parts of the second image can be matched in the preview image and the similarity of those parts meets the requirement, the preview image is considered to meet the requirement.
If the second image includes a plurality of pictures, that is, includes T sub-image regions, each picture may be required to match a region, in the preview image of the camera, whose similarity is greater than the preset similarity threshold. If the second image is a single picture, the first shadow portion is simply excluded from the comparison.
Through the first rotatable joint 202, the second rotatable joint 203 and the telescopic rod 204 in fig. 2, combined with the similarity comparison between images, the angle and position of the camera are automatically adjusted so that the preview image is as close to the second image as possible; finally a suitable position and angle are found such that the shadow area of the preview image is minimal while its similarity with the second image is greater than the preset similarity threshold. This preview image is then determined as the target image, completing the shot. After the target image is obtained, the first image and the second image can be deleted to reduce storage pressure.
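Putting the pieces together, the automatic adjustment can be sketched as a search over candidate camera poses; `camera.capture_preview` is a hypothetical interface standing in for the joint and telescopic-rod control, not an API defined by the patent:

```python
def find_target_image(camera, candidate_params, second_image,
                      sim_threshold, shadow_mask_fn=identify_shadow_region):
    """Sweep the candidate acquisition parameters, keep previews that
    match the second image, and return the one whose shadow area is
    smallest (None if no preview clears the threshold)."""
    best, best_area = None, None
    for params in candidate_params:
        preview = camera.capture_preview(params)  # adjust angle/position, grab preview
        if image_similarity(second_image, preview) <= sim_threshold:
            continue  # not a valid third image
        area = int(shadow_mask_fn(preview).sum())
        if best_area is None or area < best_area:
            best, best_area = preview, area
    return best
```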
For example, when a document is photographed under lighting, the camera casts a shadow on the document, resulting in a large shadow on the document; by moving and rotating the camera, the shadow of the camera on the document can be minimized while the desired content (i.e., content matching the first image) is captured. With the method of this embodiment, the shadow area in the photo can be reduced without affecting the image effect.
As shown in fig. 4, take the first object being the terminal device 400 as an example. The subject 401 (target object) is laid flat on the desktop, and the terminal device 400 is placed above the subject 401, for example to photograph a file laid flat on the desktop. At this angle and position (state 1 in fig. 4), the camera 402 in the terminal device 400 first takes a first image, which may be referred to as an expected image. Under bright light there is a shadow cast by the terminal device 400 on the photographed object, that is, the shadow of the terminal device 400 exists in the first image.
Then the expected image can be processed: the first shadow region of the terminal device 400 is found by means of a shadow identification algorithm, and the first shadow region part is removed to obtain a new expected image, that is, the second image. The second image is used for matching in the subsequent matching process; that is, as long as the parts of the second image can be matched in the preview image and the similarity of those parts meets the requirement, the preview image is considered to meet the requirement.
If the second image includes a plurality of pictures, that is, includes T sub-image regions, each picture may be required to match a region, in the preview image of the camera 402, whose similarity is greater than the preset similarity threshold. If the second image is a single picture, the first shadow portion is simply excluded from the comparison.
A prompt may pop up on the screen to prompt the user to move the terminal device 400 in a first direction (the direction from a first side of the terminal device to a second side, the first side being the side of the terminal device 400 on which the camera 402 is installed and opposite to the second side) until the current preview image of the camera 402 is unshaded. At this time the preview image differs significantly from the second image.
Through the first rotatable joint, the second rotatable joint and the telescopic rod of the terminal device 400, combined with the similarity comparison between images, the angle and position of the camera 402 are automatically adjusted so that the preview image is as close to the second image as possible; finally a suitable position and angle are found such that the shadow area of the preview image is minimal and its similarity with the second image is greater than the preset similarity threshold, as shown in state 2 in fig. 4. This preview image is then determined as the target image, completing the shot. After the target image is obtained, the first image and the second image can be deleted to reduce storage pressure.
For example: when a file is shot under light, a large part of the terminal device 400 itself is projected onto the file, resulting in a large shadow on the file, and if the shadow is adjusted only by moving the terminal device 400, the content of the shot file differs greatly from a straight-down shot. By controlling the extension and rotation of the camera, the shadow on the file can be reduced to at most the size of the camera itself while the expected content is shot, and the shadow area can be further reduced by adjusting the angle of the camera. With the method provided by this embodiment, the shadow area in the image can be reduced without affecting the image effect.
As shown in fig. 5, the present invention further provides a terminal device 500 of an embodiment, including a camera, where the terminal device includes:
a first image obtaining module 501, configured to obtain a first image acquired by a camera according to a first acquisition parameter;
a first shadow determination module 502 for identifying a first shadow region in the first image, the first shadow region corresponding to the first object;
a shadow processing module 503, configured to process a first shadow area in the first image to obtain a second image;
a third image obtaining module 504, configured to obtain a third image that is acquired by the camera with N acquisition parameters and is matched with the second image, where N is a positive integer;
a target image determination module 505 for determining a target image based on the N third images;
and the area of the second shadow area in the target image is smaller than that of the first shadow area, and the second shadow area corresponds to the first object.
In one embodiment, the acquisition parameters include angle and position.
In one embodiment, the target image is the third image with the smallest area of the second shaded region in the N third images.
In one embodiment, the third image acquisition module further comprises:
the similarity calculation module is used for calculating the similarity between the second image and M preview images respectively, wherein M is more than or equal to N, and the M preview images are preview images acquired by the camera according to M acquisition parameters;
and the third image determining module is used for acquiring N third images of which the similarity is greater than a preset similarity threshold in the M preview images.
In one embodiment, the second image comprises T sub-image regions, T being an integer greater than 1;
a similarity calculation module comprising:
the first sub-area determining module is used for determining a first sub-area corresponding to each sub-image area in the I preview image for the I preview image acquired by the camera, wherein the distance between the position coordinate of the sub-image area in the second image and the position coordinate of the corresponding first sub-area in the I preview image is smaller than or equal to a preset distance, I is an integer and is not more than M, and the I preview image corresponds to the I acquisition parameter;
the first similarity calculation module is used for calculating the similarity between each sub-image region and the corresponding first sub-region in the ith preview image, wherein the similarity between the second image and the ith preview image comprises the similarity between each sub-image region in the second image and the corresponding first sub-region in the ith preview image.
In this embodiment, the third image obtaining module is configured to obtain N third images from the M preview images, where a similarity between each sub-image region in the second image and a corresponding first sub-region in the third image is greater than a preset similarity threshold.
In one embodiment, a shadow processing module, comprising:
the area division module is used for carrying out area division on the first image to obtain P sub-image areas;
and the filtering module is used for filtering the sub-regions corresponding to the first shadow region in the P sub-image regions to obtain T sub-image regions.
In one embodiment, the similarity calculation module is configured to calculate similarities between a second target sub-image region in the second image and third target sub-image regions of the M preview images, respectively, to obtain similarities between the second image and the M preview images;
the second target sub-image region is the other region of the second image except the first shadow region, the position coordinate of the second target sub-image region in the second image is the same as the position coordinate of the third target sub-image region in the preview image, and the areas of the second target sub-image region and the third target sub-image region are the same.
The technical features in the terminal device provided in the embodiment of the present invention correspond to the technical features in the image processing method, and each process of the image processing method is implemented by the terminal device, and the same effect can be obtained.
Fig. 6 is a schematic diagram of a hardware structure of a terminal device for implementing various embodiments of the present invention. The terminal device 600 includes, but is not limited to: a radio frequency unit 601, a network module 602, an audio output unit 603, an input unit 604, a sensor 605, a display unit 606, a user input unit 607, an interface unit 608, a memory 609, a processor 610, and a power supply 611. Those skilled in the art will appreciate that the terminal device structure shown in fig. 6 does not constitute a limitation of the terminal device, and the terminal device may include more or fewer components than shown, or combine certain components, or use a different arrangement of components. In the embodiment of the present invention, the terminal device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted mobile terminal, a wearable device, a pedometer, and the like.
The processor 610 is configured to obtain a first image acquired by the camera according to the first acquisition parameter; identifying a first shadow region in the first image, the first shadow region corresponding to the first object;
processing a first shadow area in the first image to obtain a second image; acquiring a third image which is acquired by the camera according to N acquisition parameters and is matched with the second image, wherein N is a positive integer; determining a target image based on the N third images; and the area of the second shadow area in the target image is smaller than that of the first shadow area, and the second shadow area corresponds to the first object.
The target image is matched with the second image, so that the shooting content can be accurately embodied, and the area of the second shadow area in the target image is smaller than that of the first shadow area, so that the shadow area is reduced.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 601 may be used to receive and send signals during a message sending/receiving process or a call process. Specifically, it receives downlink data from a base station and then sends the data to the processor 610 for processing; in addition, it sends uplink data to the base station. In general, the radio frequency unit 601 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. Furthermore, the radio frequency unit 601 may also communicate with networks and other devices through a wireless communication system.
The terminal device provides the user with wireless broadband internet access through the network module 602, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 603 may convert audio data received by the radio frequency unit 601 or the network module 602 or stored in the memory 609 into an audio signal and output as sound. Also, the audio output unit 603 can also provide audio output related to a specific function performed by the terminal apparatus 600 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 603 includes a speaker, a buzzer, a receiver, and the like.
The input unit 604 is used to receive audio or video signals. The input unit 604 may include a graphics processing unit (GPU) 6041 and a microphone 6042; the graphics processor 6041 processes image data of still pictures or video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 606. The image frames processed by the graphics processor 6041 may be stored in the memory 609 (or other storage medium) or transmitted via the radio frequency unit 601 or the network module 602. The microphone 6042 can receive sound and process it into audio data. In the phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station and output via the radio frequency unit 601.
The terminal device 600 also includes at least one sensor 605, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the luminance of the display panel 6061 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 6061 and/or the backlight when the terminal device 600 is moved to the ear. As one kind of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes) and can detect the magnitude and direction of gravity when stationary; it can be used to identify the terminal device posture (such as horizontal/vertical screen switching, related games, magnetometer posture calibration) and for vibration-identification-related functions (such as pedometer, tapping), and the like. The sensor 605 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which are not described in detail herein.
The display unit 606 is used to display information input by the user or information provided to the user. The display unit 606 may include a display panel 6061, and the display panel 6061 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
The user input unit 607 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the terminal device. Specifically, the user input unit 607 includes a touch panel 6071 and other input devices 6072. The touch panel 6071, also referred to as a touch screen, may collect touch operations by the user on or near it (e.g., operations by the user on or near the touch panel 6071 using a finger, a stylus, or any suitable object or accessory). The touch panel 6071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch orientation of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 610, and receives and executes commands sent by the processor 610. In addition, the touch panel 6071 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 6071, the user input unit 607 may also include other input devices 6072. Specifically, the other input devices 6072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described herein again.
Further, the touch panel 6071 can be overlaid on the display panel 6061, and when the touch panel 6071 detects a touch operation on or near the touch panel 6071, the touch operation is transmitted to the processor 610 to determine the type of the touch event, and then the processor 610 provides a corresponding visual output on the display panel 6061 according to the type of the touch event. Although in fig. 6, the touch panel 6071 and the display panel 6061 are two independent components to implement the input and output functions of the terminal device, in some embodiments, the touch panel 6071 and the display panel 6061 may be integrated to implement the input and output functions of the terminal device, and this is not limited here.
The interface unit 608 is an interface for connecting an external device to the terminal device 600. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 608 may be used to receive input (e.g., data information, power) from an external device and transmit the received input to one or more elements within the terminal device 600, or may be used to transmit data between the terminal device 600 and an external device.
The memory 609 may be used to store software programs as well as various data. The memory 609 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 609 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 610 is a control center of the terminal device, connects various parts of the entire terminal device by using various interfaces and lines, and performs various functions of the terminal device and processes data by running or executing software programs and/or modules stored in the memory 609 and calling data stored in the memory 609, thereby performing overall monitoring of the terminal device. Processor 610 may include one or more processing units; preferably, the processor 610 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 610.
The terminal device 600 may further include a power supply 611 (such as a battery) for supplying power to various components, and preferably, the power supply 611 may be logically connected to the processor 610 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the terminal device 600 includes some functional modules that are not shown, and are not described in detail herein.
An embodiment of the present invention further provides a terminal device, which includes a processor 610 and a memory 609, where the memory 609 stores a computer program that can be run on the processor 610, and when the computer program is executed by the processor 610, the computer program implements each process in the foregoing image processing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or terminal apparatus. Without further limitation, an element defined by the phrase "comprising a/an ..." does not exclude the presence of other like elements in a process, method, article, or terminal apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (14)

1. An image processing method applied to a terminal device comprising a camera, characterized in that the method comprises the following steps:
acquiring a first image acquired by the camera according to a first acquisition parameter;
identifying a first shadow region in the first image, the first shadow region corresponding to a first object;
processing the first shadow region in the first image to obtain a second image;
acquiring a third image which is acquired by the camera according to N acquisition parameters and is matched with the second image, wherein N is a positive integer;
determining a target image based on the N third images;
wherein an area of a second shadow region in the target image is smaller than an area of the first shadow region, the second shadow region corresponding to the first object.
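For illustration only (not part of the claims): a minimal Python/NumPy sketch of the claimed flow, assuming shadow detection is a simple luminance threshold, similarity is a normalised correlation over the non-shadow pixels, and the target image is a per-pixel mean of the matching previews. Every function name, parameter, and threshold below is a hypothetical stand-in; the claims do not fix any of these choices.

```python
import numpy as np

def identify_shadow_region(img, dark_thresh=60):
    # Crude stand-in for a shadow detector: mark pixels whose mean
    # luminance falls below a fixed threshold (img: H x W x 3, uint8).
    return img.mean(axis=2) < dark_thresh          # boolean H x W mask

def shadow_reduction(first_image, previews, sim_threshold=0.9):
    # Steps 1-3: identify the first shadow region; the "second image" is
    # modelled here as the first image restricted to its non-shadow pixels.
    mask = identify_shadow_region(first_image)
    valid = ~mask

    # Step 4: keep the previews (captured with different acquisition
    # parameters) whose non-shadow content matches the second image.
    third_images = []
    for preview in previews:
        a = first_image[valid].astype(np.float64).ravel()
        b = preview[valid].astype(np.float64).ravel()
        similarity = np.corrcoef(a, b)[0, 1]       # normalised correlation
        if similarity > sim_threshold:
            third_images.append(preview)

    if not third_images:
        return first_image                         # no match: keep the original

    # Step 5: fuse the N matching previews into the target image, whose
    # shadow region for the first object should be smaller than the first one.
    return np.mean(np.stack(third_images), axis=0).astype(np.uint8)
```

A real implementation would drive the camera to capture the previews itself and use far more robust detection and similarity measures; the sketch only fixes the data flow.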
2. The method of claim 1, wherein the acquiring a third image which is acquired by the camera according to N acquisition parameters and is matched with the second image comprises:
respectively calculating the similarity between the second image and M preview images, wherein M is greater than or equal to N, and the M preview images are preview images acquired by the camera according to M acquisition parameters;
and acquiring, from the M preview images, the N third images whose similarity is greater than a preset similarity threshold.
3. The method of claim 2, wherein the second image comprises T sub-image regions, wherein T is an integer greater than 1;
wherein the respectively calculating the similarity between the second image and the M preview images comprises:
for an I-th preview image acquired by the camera, determining a first sub-region corresponding to each sub-image region in the I-th preview image, wherein a distance between the position coordinate of the sub-image region in the second image and the position coordinate of the corresponding first sub-region in the I-th preview image is less than or equal to a preset distance, I is an integer not greater than M, and the I-th preview image corresponds to an I-th acquisition parameter;
calculating the similarity between each sub-image region and the corresponding first sub-region in the I-th preview image, wherein the similarity between the second image and the I-th preview image comprises the similarity between each sub-image region in the second image and the corresponding first sub-region in the I-th preview image;
and wherein the acquiring, from the M preview images, the N third images whose similarity is greater than a preset similarity threshold comprises:
acquiring the N third images from the M preview images, wherein the similarity between each sub-image region in the second image and the corresponding first sub-region in each third image is greater than the preset similarity threshold.
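For illustration only: a sketch of the per-sub-region matching of claim 3, assuming square sub-image regions addressed by their top-left position coordinate and a mean-absolute-difference similarity. The preset distance is taken as zero here (each corresponding first sub-region sits at the same coordinates in the preview); the claim allows any offset up to a preset distance, and the block size and scoring rule are assumptions.

```python
import numpy as np

def match_subregions(sub_regions, preview, block=64):
    # sub_regions: list of ((y, x), patch) pairs taken from the second image.
    # preview: H x W x 3 uint8 candidate preview (the I-th preview image).
    sims = []
    for (y, x), patch in sub_regions:
        # Corresponding first sub-region: same position coordinate in the
        # preview, i.e. positional distance 0, which is <= any preset distance.
        cand = preview[y:y + block, x:x + block]
        if cand.shape != patch.shape:
            continue                    # region falls off the border; skip it
        diff = np.abs(cand.astype(np.float64) - patch.astype(np.float64))
        sims.append(1.0 - diff.mean() / 255.0)   # 1 = identical, 0 = inverted
    return sims

def is_third_image(sub_regions, preview, sim_threshold=0.9):
    # A preview qualifies as a third image only if every sub-image region
    # clears the preset similarity threshold (claim 3's selection rule).
    sims = match_subregions(sub_regions, preview)
    return bool(sims) and min(sims) > sim_threshold
```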
4. The method of claim 3, wherein processing the first shadow region in the first image to obtain a second image comprises:
performing region division on the first image to obtain P sub-image regions;
and filtering out, from the P sub-image regions, the sub-regions corresponding to the first shadow region, to obtain the T sub-image regions.
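For illustration only: a sketch of claim 4's region division and filtering, assuming a regular grid of P square tiles and the rule that any tile overlapping the shadow mask is filtered out; the T survivors are exactly the input expected by the claim-3 matcher sketched above.

```python
import numpy as np

def divide_and_filter(image, shadow_mask, block=64):
    # image: H x W x 3 uint8 first image; shadow_mask: boolean H x W mask
    # of the first shadow region. Grid layout and overlap rule are assumptions.
    h, w = shadow_mask.shape
    kept = []                                   # the T non-shadow regions
    for y in range(0, h - block + 1, block):    # P tiles on a regular grid
        for x in range(0, w - block + 1, block):
            if shadow_mask[y:y + block, x:x + block].any():
                continue                        # tile touches the shadow: drop it
            kept.append(((y, x), image[y:y + block, x:x + block]))
    return kept
```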
5. The method of claim 2, wherein the respectively calculating the similarity between the second image and the M preview images comprises:
respectively calculating the similarity between a second target sub-image region in the second image and a third target sub-image region of each of the M preview images, to obtain the similarities between the second image and the M preview images;
wherein the second target sub-image region is the region of the second image other than the first shadow region, the position coordinate of the second target sub-image region in the second image is the same as the position coordinate of the third target sub-image region in the preview image, and the second target sub-image region and the third target sub-image region have the same area.
6. The method according to claim 1 or 5, wherein the processing the first shadow region in the first image to obtain a second image comprises:
performing attribute adjustment on the first shadow region in the first image to obtain the second image, wherein a first region in the second image is transparent, and the position of the first region in the second image is the same as the position of the first shadow region in the first image.
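For illustration only: a sketch of the attribute adjustment of claim 6, representing transparency with an added alpha channel. The RGBA representation is an assumption; the claim only requires that the first region of the second image be transparent and sit at the same position as the first shadow region.

```python
import numpy as np

def make_shadow_transparent(first_image, shadow_mask):
    # first_image: H x W x 3 uint8; shadow_mask: boolean H x W mask.
    h, w, _ = first_image.shape
    alpha = np.full((h, w), 255, dtype=np.uint8)   # start fully opaque
    rgba = np.dstack([first_image, alpha])         # H x W x 4 second image
    rgba[shadow_mask, 3] = 0                       # shadow pixels -> transparent
    return rgba
```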
7. A terminal device comprising a camera, characterized in that the terminal device comprises:
the first image acquisition module is used for acquiring a first image acquired by the camera according to first acquisition parameters;
the first shadow determination module is used for identifying a first shadow region in the first image, the first shadow region corresponding to a first object;
the shadow processing module is used for processing the first shadow region in the first image to obtain a second image;
the third image acquisition module is used for acquiring a third image which is acquired by the camera according to N acquisition parameters and is matched with the second image, wherein N is a positive integer;
a target image determination module for determining a target image based on the N third images;
wherein an area of a second shadow region in the target image is smaller than an area of the first shadow region, the second shadow region corresponding to the first object.
8. The terminal device of claim 7, wherein the third image acquisition module comprises:
the similarity calculation module is used for respectively calculating the similarity between the second image and M preview images, wherein M is greater than or equal to N, and the M preview images are preview images acquired by the camera according to M acquisition parameters;
and the third image determining module is used for acquiring, from the M preview images, the N third images whose similarity is greater than a preset similarity threshold.
9. The terminal device of claim 8, wherein the second image comprises T sub-image regions, wherein T is an integer greater than 1;
the similarity calculation module includes:
a first sub-region determining module, configured to determine, for an I-th preview image acquired by the camera, a first sub-region corresponding to each sub-image region in the I-th preview image, wherein a distance between the position coordinate of the sub-image region in the second image and the position coordinate of the corresponding first sub-region in the I-th preview image is less than or equal to a preset distance, I is an integer not greater than M, and the I-th preview image corresponds to an I-th acquisition parameter;
a first similarity calculation module, configured to calculate a similarity between each sub-image region and the corresponding first sub-region in the I-th preview image, where the similarity between the second image and the I-th preview image includes a similarity between each sub-image region in the second image and the corresponding first sub-region in the I-th preview image;
the third image obtaining module is configured to obtain the N third images from the M preview images, where a similarity between each sub-image region in the second image and a corresponding first sub-region in the third image is greater than the preset similarity threshold.
10. The terminal device of claim 9, wherein the shadow processing module comprises:
the area division module is used for carrying out area division on the first image to obtain P sub-image areas;
and the filtering module is used for filtering out, from the P sub-image regions, the sub-regions corresponding to the first shadow region, to obtain the T sub-image regions.
11. The terminal device according to claim 8, wherein the similarity calculation module is configured to respectively calculate the similarity between a second target sub-image region in the second image and a third target sub-image region of each of the M preview images, to obtain the similarities between the second image and the M preview images;
wherein the second target sub-image region is the region of the second image other than the first shadow region, the position coordinate of the second target sub-image region in the second image is the same as the position coordinate of the third target sub-image region in the preview image, and the second target sub-image region and the third target sub-image region have the same area.
12. The terminal device according to claim 7 or 11, wherein the shadow processing module is configured to perform attribute adjustment on the first shadow region in the first image to obtain the second image, wherein a first region in the second image is transparent, and the position of the first region in the second image is the same as the position of the first shadow region in the first image.
13. A terminal device, comprising: a memory storing a computer program, and a processor that implements the steps of the image processing method according to any one of claims 1 to 6 when executing the computer program.
14. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the image processing method according to any one of claims 1 to 6.
CN201910693817.4A 2019-07-30 2019-07-30 Image processing method, terminal equipment and computer readable storage medium Active CN110363729B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910693817.4A CN110363729B (en) 2019-07-30 2019-07-30 Image processing method, terminal equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN110363729A (en) 2019-10-22
CN110363729B (en) 2021-07-20

Family

ID=68222623

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910693817.4A Active CN110363729B (en) 2019-07-30 2019-07-30 Image processing method, terminal equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN110363729B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111353419B (en) * 2020-02-26 2023-08-11 北京百度网讯科技有限公司 Image comparison method, device, electronic equipment and storage medium
CN111833283B (en) * 2020-06-23 2024-02-23 维沃移动通信有限公司 Data processing method and device and electronic equipment
CN112532879B (en) * 2020-11-26 2022-04-12 维沃移动通信有限公司 Image processing method and device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7920179B2 (en) * 2008-08-05 2011-04-05 Sony Ericsson Mobile Communications Ab Shadow and reflection identification in image capturing devices
WO2012113732A1 (en) * 2011-02-25 2012-08-30 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Determining model parameters based on transforming a model of an object
US10142598B2 (en) * 2014-05-30 2018-11-27 Sony Corporation Wearable terminal device, photographing system, and photographing method
US10708526B2 (en) * 2015-04-22 2020-07-07 Motorola Mobility Llc Method and apparatus for determining lens shading correction for a multiple camera device with various fields of view
CN105704374B (en) * 2016-01-29 2019-04-05 努比亚技术有限公司 A kind of image conversion apparatus, method and terminal
CN107222681B (en) * 2017-06-30 2018-11-30 维沃移动通信有限公司 A kind of processing method and mobile terminal of image data

Similar Documents

Publication Publication Date Title
CN108513070B (en) Image processing method, mobile terminal and computer readable storage medium
CN108307109B (en) High dynamic range image preview method and terminal equipment
CN109688322B (en) Method and device for generating high dynamic range image and mobile terminal
CN109461117B (en) Image processing method and mobile terminal
CN108989672B (en) Shooting method and mobile terminal
CN110505400B (en) Preview image display adjustment method and terminal
CN108449541B (en) Panoramic image shooting method and mobile terminal
CN107730460B (en) Image processing method and mobile terminal
CN107248137B (en) Method for realizing image processing and mobile terminal
CN107846583B (en) Image shadow compensation method and mobile terminal
CN108924414B (en) Shooting method and terminal equipment
CN108040209B (en) Shooting method and mobile terminal
CN110363729B (en) Image processing method, terminal equipment and computer readable storage medium
CN110602389B (en) Display method and electronic equipment
CN107749046B (en) Image processing method and mobile terminal
JP7467667B2 (en) Detection result output method, electronic device and medium
CN110213485B (en) Image processing method and terminal
CN111031234B (en) Image processing method and electronic equipment
CN111031253B (en) Shooting method and electronic equipment
CN109819166B (en) Image processing method and electronic equipment
CN108174110B (en) Photographing method and flexible screen terminal
CN108924422B (en) Panoramic photographing method and mobile terminal
CN108881721B (en) Display method and terminal
CN110602390B (en) Image processing method and electronic equipment
CN110769154B (en) Shooting method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant