CN108494996B - Image processing method, device, storage medium and mobile terminal - Google Patents
- Publication number
- CN108494996B (application CN201810455796.8A)
- Authority
- CN
- China
- Prior art keywords
- image
- occlusion
- area
- region
- shielding
- Prior art date
- Legal status
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/81—Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Signal Processing (AREA)
- Multimedia (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
Abstract
The embodiment of the application discloses an image processing method, an image processing device, a storage medium and a mobile terminal. The method comprises the following steps: when an occlusion detection event is triggered, acquiring a captured image of the camera; performing occlusion detection on the captured image and determining a first occlusion region in the captured image; when the feature value of the first occlusion region is smaller than a preset feature threshold, acquiring a target decoration image, wherein the feature value of the first occlusion region comprises at least one of the area of the first occlusion region, the total number of pixels of the first occlusion region and the proportion of the first occlusion region in the captured image; and decorating the first occlusion region based on the target decoration image. With this technical scheme, when the occlusion region is small it can be decorated with the decoration image, so the effect of the occlusion region on the appearance of the captured image is eliminated and the quality of the captured image is effectively improved.
Description
Technical Field
The embodiment of the application relates to the field of image processing, in particular to an image processing method, an image processing device, a storage medium and a mobile terminal.
Background
With the rapid development of electronic technology and the continuous improvement of people's living standards, terminal devices have become an indispensable part of daily life. Most terminals now have photographing and video functions, which are deeply loved by users and ever more widely used. Users record the little moments of life through the camera functions of the terminal and store them on the terminal, making it convenient to recall, appreciate and review them later.
However, in some cases, while a user is taking a photo or video, part of the camera is blocked by an obstruction, so the quality of the captured picture is poor and the appearance of the captured image is affected. It therefore becomes important to improve the quality of the captured image.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device, a storage medium and a mobile terminal, which can effectively improve the quality of a shot image.
In a first aspect, an embodiment of the present invention provides an image processing method, including:
when an occlusion detection event is triggered, acquiring a captured image of the camera;
performing occlusion detection on the captured image, and determining a first occlusion region in the captured image;
when the feature value of the first occlusion region is smaller than a preset feature threshold, acquiring a target decoration image; the feature value of the first occlusion region comprises at least one of the area of the first occlusion region, the total number of pixels of the first occlusion region and the proportion of the first occlusion region in the captured image;
decorating the first occlusion region based on the target decoration image.
In a second aspect, an embodiment of the present invention provides an image processing apparatus, including:
a captured image acquisition module, configured to acquire a captured image of the camera when an occlusion detection event is triggered;
an occlusion region determining module, configured to perform occlusion detection on the captured image, and determine a first occlusion region in the captured image;
a decoration image obtaining module, configured to obtain a target decoration image when the feature value of the first occlusion region is smaller than a preset feature threshold; the feature value of the first occlusion region comprises at least one of the area of the first occlusion region, the total number of pixels of the first occlusion region and the proportion of the first occlusion region in the captured image;
an occlusion region decoration module, configured to decorate the first occlusion region based on the target decoration image.
In a third aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements an image processing method according to the present application.
In a fourth aspect, an embodiment of the present application provides a mobile terminal, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor executes the computer program to implement an image processing method according to an embodiment of the present application.
According to the image processing scheme provided by the embodiment of the application, when an occlusion detection event is triggered, a captured image of the camera is obtained, occlusion detection is performed on the captured image, a first occlusion region in the captured image is determined, and a target decoration image is obtained when the feature value of the first occlusion region is smaller than a preset feature threshold, wherein the feature value of the first occlusion region comprises at least one of the area of the first occlusion region, the total number of pixels of the first occlusion region and the proportion of the first occlusion region in the captured image; the occlusion region is then decorated based on the target decoration image. With this technical scheme, when the occlusion region is small it can be decorated with the decoration image, which eliminates the effect of the occlusion region on the appearance of the captured image and effectively improves the quality of the captured image.
Drawings
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating another image processing method according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating another image processing method according to an embodiment of the present invention;
fig. 4 is a block diagram of an image processing apparatus according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a mobile terminal according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a mobile terminal according to an embodiment of the present invention.
Detailed Description
The technical scheme of the invention is further explained below through specific embodiments in combination with the accompanying drawings. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit it. It should further be noted that, for convenience of description, the drawings show only the structures related to the invention rather than all of the structures.
Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the steps as a sequential process, many of the steps can be performed in parallel, concurrently or simultaneously. In addition, the order of the steps may be rearranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present invention, where the embodiment is applicable to the case of image occlusion detection, and the method may be executed by an image processing apparatus, where the apparatus may be implemented by software and/or hardware, and may be generally integrated in a mobile terminal. As shown in fig. 1, the method includes:
For example, the mobile terminal in the embodiment of the present application may include mobile devices such as a mobile phone and a tablet computer.
Step 101, when the occlusion detection event is triggered, acquiring a captured image of the camera.
For example, in order to perform occlusion detection at an appropriate time, the conditions under which an occlusion detection event is triggered may be set in advance. Optionally, the terminal monitors whether an occlusion detection instruction is received; when the occlusion detection instruction is received, it is determined that an occlusion detection event is triggered, so that the user's real need for occlusion detection can be met more accurately. It can be understood that receiving an occlusion detection instruction input by the user indicates that the current user has actively enabled the occlusion detection permission, and at this time an occlusion detection event is triggered. Optionally, in order to apply occlusion detection to more valuable occasions and thus save the additional power consumption it causes, the application occasions and scenes of occlusion detection may be analyzed or researched, a reasonable preset scene may be set, and an occlusion detection event is triggered when the mobile terminal is detected to be in the preset scene. Illustratively, the exposure of the captured image is acquired; when the exposure is greater than a preset exposure threshold, it is determined that an occlusion detection event is triggered. It can be understood that when the exposure of the captured image is high, the user is very likely to use clothes, a hand or the like to reduce the exposure as much as possible in order to avoid overexposure during capture. Therefore, when the exposure of the captured image is greater than the preset exposure threshold, an occlusion detection event is triggered. For another example, an occlusion detection event is triggered when the ambient light brightness at the position of the mobile terminal is greater than a preset brightness threshold. It can be understood that high ambient light brightness easily causes overexposure of the captured image; to reduce the ambient light and the possibility of overexposure, the user usually uses clothes or a hand to weaken the effect of the excessive ambient light on the captured image, and in this process part of the camera is easily blocked without the user noticing it. It should be noted that the embodiment of the present application does not limit the specific form in which an occlusion detection event is triggered.
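As a purely illustrative sketch of the trigger logic described above (the threshold values and helper parameters are assumptions, not taken from the disclosure), the three trigger conditions can be combined as follows:

```python
# Illustrative sketch of the trigger conditions described above.
# The threshold values are assumed for illustration only.
EXPOSURE_THRESHOLD = 0.8       # assumed normalized exposure threshold
AMBIENT_LUX_THRESHOLD = 10000  # assumed ambient-brightness threshold (lux)

def occlusion_detection_triggered(instruction_received: bool,
                                  exposure: float,
                                  ambient_lux: float) -> bool:
    """Return True when any trigger condition from the text holds."""
    if instruction_received:                 # user explicitly requested detection
        return True
    if exposure > EXPOSURE_THRESHOLD:        # captured image likely over-exposed
        return True
    if ambient_lux > AMBIENT_LUX_THRESHOLD:  # very bright scene at the terminal
        return True
    return False
```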
In the embodiment of the application, when the occlusion detection event is triggered, the captured image of the camera is acquired. It can be understood that when a user needs to take a picture, the shooting function of the terminal is turned on, for example a camera application in the terminal is opened, that is, the camera of the terminal is turned on, and the subject to be photographed is shot through the camera to generate a captured image. The captured image may be at least one frame of a video shot by the camera, or at least one of a series of images shot continuously by the camera, or a single image shot by the camera, which is not limited in the embodiment of the present application. In addition, the camera may be a 2D camera or a 3D camera. A 3D camera may also be called a 3D sensor. The 3D camera differs from an ordinary camera (i.e., a 2D camera) in that it can acquire not only a planar image but also depth information of the photographed object, that is, three-dimensional position and size information. When the camera is a 2D camera, the acquired captured image is a 2D image; when the camera is a 3D camera, the acquired captured image is a 3D image.
Step 102, performing occlusion detection on the captured image, and determining a first occlusion region in the captured image.
In this embodiment of the present application, performing occlusion detection on the captured image and determining a first occlusion region in the captured image may include: analyzing the captured image based on an image recognition technique, and determining the first occlusion region in the captured image according to the analysis result. Illustratively, the blur degree of the captured image is analyzed, and an image region with a higher blur degree in the captured image is determined as the first occlusion region. It can be understood that the blur degree of the captured image reflects its image quality: the higher the blur degree, the worse the corresponding image quality, and the lower the blur degree, the higher the corresponding image quality. When an occlusion region exists in a captured image, the obstruction in that region is usually outside the focal range of the camera, that is, the camera cannot focus on the obstruction when it is shot, so the image region corresponding to the obstruction has a high blur degree and lacks obvious texture features or sharp edge features, which in turn affects the overall clarity of the captured preview image. Therefore, an image region with a higher blur degree, for example an image region whose blur degree is greater than a preset blur threshold, may be determined as the first occlusion region.
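For illustration only, the blur-based variant above can be sketched by scoring image blocks with the variance of the Laplacian (a common sharpness measure); the block size and blur threshold are assumed values, and the disclosure leaves the concrete blur metric open:

```python
import cv2
import numpy as np

def detect_occlusion_mask(image_bgr, block=64, blur_threshold=30.0):
    """Mark blocks with low Laplacian variance (i.e. very blurry) as occluded."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape
    mask = np.zeros((h, w), dtype=np.uint8)
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = gray[y:y + block, x:x + block]
            sharpness = cv2.Laplacian(patch, cv2.CV_64F).var()
            if sharpness < blur_threshold:          # few edges -> likely occluded
                mask[y:y + block, x:x + block] = 255
    return mask  # non-zero pixels approximate the first occlusion region
```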
Optionally, the captured image is input into a pre-trained occlusion region determination model, and a first occlusion region in the captured image is determined according to an output result of the occlusion region determination model. The embodiment of the present application does not limit the manner of determining the first occlusion region in the captured image.
Step 103, acquiring a target decoration image when the feature value of the first occlusion region is smaller than a preset feature threshold.
The feature value of the first occlusion region comprises at least one of the area of the first occlusion region, the total number of pixels of the first occlusion region and the proportion of the first occlusion region in the captured image.
In the embodiment of the present application, the feature value of the first occlusion region reflects the size of the first occlusion region in the captured image: the larger the feature value, the larger the proportion of the captured image that the first occlusion region occupies. For example, when the feature value of the first occlusion region is the area of the first occlusion region, the feature value being smaller than the preset feature threshold can be understood as the area of the first occlusion region being smaller than a preset area threshold, that is, the area of the first occlusion region is sufficiently small. When the feature value is the total number of pixels of the first occlusion region, the feature value being smaller than the preset feature threshold can be understood as the total number of pixels of the first occlusion region being smaller than a preset pixel-count threshold, that is, the image corresponding to the first occlusion region contains sufficiently few pixels. When the feature value is the proportion of the first occlusion region in the captured image, the feature value being smaller than the preset feature threshold can be understood as that proportion being smaller than a preset proportion threshold, that is, the proportion of the first occlusion region in the captured image is sufficiently small.
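For illustration, all three candidate feature values can be derived from a binary occlusion mask; the 5% ratio threshold below is an assumption:

```python
import numpy as np

def occlusion_feature_small(mask: np.ndarray, max_ratio: float = 0.05) -> bool:
    """Check the 'proportion of the captured image' variant of the feature value."""
    occluded_pixels = int(np.count_nonzero(mask))  # also usable as the pixel-count feature
    ratio = occluded_pixels / mask.size            # proportion of the captured image
    return ratio < max_ratio                       # feature value below the preset threshold
```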
When the feature value of the first occlusion region is smaller than the preset feature threshold, a target decoration image is acquired. The target decoration image can be understood as an image used to decorate the first occlusion region, and may include at least one of decoration images such as cartoon characters, stickers, text, emoticon stickers and decorative borders, which is not limited in the embodiment of the present application.
Step 104, decorating the first occlusion region based on the target decoration image.
In the embodiment of the application, the first occlusion region is covered by the target decoration image so as to beautify the captured image. For example, if the target decoration image is an ocean image, the first occlusion region is covered by the ocean image, that is, the ocean image is added at the position of the first occlusion region in the captured image, so that the first occlusion region is no longer visible and the captured image is beautified.
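A minimal sketch of this covering step, assuming the decoration asset is already loaded as an array with the same number of channels as the captured image:

```python
import cv2
import numpy as np

def decorate_occlusion(image, mask, decoration):
    """Resize the decoration to the occlusion bounding box and paste it there."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return image                      # nothing to decorate
    x0, x1, y0, y1 = xs.min(), xs.max() + 1, ys.min(), ys.max() + 1
    patch = cv2.resize(decoration, (x1 - x0, y1 - y0))
    out = image.copy()
    out[y0:y1, x0:x1] = patch             # the decoration hides the occluded pixels
    return out
```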
The image processing method provided in the embodiment of the invention acquires a captured image of the camera when an occlusion detection event is triggered, performs occlusion detection on the captured image, determines a first occlusion region in the captured image, and acquires a target decoration image when the feature value of the first occlusion region is smaller than a preset feature threshold, wherein the feature value of the first occlusion region includes at least one of the area of the first occlusion region, the total number of pixels of the first occlusion region and the proportion of the first occlusion region in the captured image; the occlusion region is then decorated based on the target decoration image. With this technical scheme, when the occlusion region is small it can be decorated with the decoration image, which eliminates the effect of the occlusion region on the appearance of the captured image and effectively improves the quality of the captured image.
In some embodiments, acquiring a target decoration image when the feature value of the first occlusion region is smaller than a preset feature threshold includes: when the feature value of the first occlusion region is smaller than the preset feature threshold, acquiring a pixel jump value of a surrounding region of the first occlusion region; and when the pixel jump value is greater than a preset jump threshold, acquiring the target decoration image. The advantage of this arrangement is that, while keeping the captured image complete, it not only eliminates the effect of the occlusion region on the appearance of the captured image but also further improves the quality of the captured image.
In the embodiment of the present application, when the feature value of the first occlusion region is smaller than the preset feature threshold, the first occlusion region occupies a sufficiently small proportion of the whole captured image, and at this time the pixel jump value of the surrounding region of the first occlusion region is acquired. The surrounding region may be an image region distributed around the first occlusion region with the same shape and area as the first occlusion region, or an image region distributed around the first occlusion region with the same shape and area as a circumscribed regular shape of the first occlusion region. For example, if the first occlusion region is irregular, an image region taken from the periphery of the first occlusion region that is identical in area and shape to the circumscribed rectangle or circumscribed circle of the first occlusion region is used as the surrounding region. Of course, the surrounding region may also be an image region with a larger area that has the same shape as the first occlusion region or its circumscribed regular shape, or an image region of the same shape with a slightly smaller area. The number of surrounding regions may be one or more. For example, when the first occlusion region lies in the upper right corner of the captured image, a corresponding surrounding region may be taken from the periphery on the left of and below the first occlusion region respectively. As another example, several surrounding regions with different areas may be taken from the periphery of the first occlusion region. When there are several surrounding regions, their shapes and areas may be the same or different. The embodiment of the present application does not limit the number, shape or size of the surrounding regions of the first occlusion region.
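One simple way to obtain such a surrounding region, sketched here under the assumption that the bounding rectangle of the occlusion region is used and an equal-sized neighbouring rectangle is taken to its left or right (purely illustrative):

```python
import numpy as np

def surrounding_region(gray, mask):
    """Return an equal-sized rectangle next to the occlusion bounding box."""
    ys, xs = np.nonzero(mask)
    x0, x1, y0, y1 = xs.min(), xs.max() + 1, ys.min(), ys.max() + 1
    w = x1 - x0
    if x0 - w >= 0:                       # enough room on the left
        return gray[y0:y1, x0 - w:x0]
    return gray[y0:y1, x1:x1 + w]         # otherwise take the right side
```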
The pixel jump value reflects the variation of pixel values in the image corresponding to the surrounding region. The pixel jump value may be the maximum difference between adjacent pixel values in the surrounding region, or the mean of the differences between adjacent pixel values in the surrounding region. The larger the pixel jump value, the more obvious the color change in the surrounding region; conversely, the smaller the pixel jump value, the smaller the color change, for example when the surrounding region is a single-color image or close to a single color. When the pixel jump value is greater than the preset jump threshold, the image color (i.e., the pixel values) of the surrounding region around the first occlusion region changes greatly; in this case no clear relationship between the pixel values of the first occlusion region and those of the surrounding region can be established, the approximate range of the pixel values of the first occlusion region cannot be determined, and the first occlusion region cannot be repaired from its surrounding region. At this time, in order to eliminate the effect of the first occlusion region on the captured image, the target decoration image may be acquired and the first occlusion region decorated based on it, thereby beautifying the captured image.
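A sketch of the pixel jump value, following the two definitions named above (maximum or mean of adjacent-pixel differences); the grayscale input is an assumption for brevity:

```python
import numpy as np

def pixel_jump_value(region_gray: np.ndarray, use_max: bool = True) -> float:
    """Max (or mean) absolute difference between horizontally/vertically adjacent pixels."""
    dx = np.abs(np.diff(region_gray.astype(np.int16), axis=1))
    dy = np.abs(np.diff(region_gray.astype(np.int16), axis=0))
    diffs = np.concatenate([dx.ravel(), dy.ravel()])
    return float(diffs.max() if use_max else diffs.mean())
```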
Optionally, the method further includes: when the pixel jump value is smaller than the preset jump threshold, determining a repair block from the surrounding region, and repairing the first occlusion region based on the repair block. The advantage of this arrangement is that, while keeping the captured image complete, the captured image can be made closer to the image that would have been captured had the camera not been blocked, further improving the quality of the captured image.
For example, when the pixel jump value is smaller than the preset jump threshold, it indicates that the image color (i.e., the pixel value) corresponding to the surrounding area around the first occlusion area changes less, or is a single color, and at this time, it indicates that the image color (i.e., the pixel value) corresponding to the first occlusion area is not very different from the color of the surrounding area, and the repair block may be determined from the surrounding area, and the first occlusion area may be repaired based on the repair block.
For example, when the surrounding region is an image region with the same shape and area as the first occlusion region, the surrounding region may be used directly as the repair block, and the first occlusion region is repaired by the repair block, that is, the surrounding region covers the first occlusion region. When the surrounding region is an image region identical to the circumscribed regular shape of the first occlusion region, an image region with exactly the same shape and area as the first occlusion region may be cut from the surrounding region as the repair block, or image blocks of a preset size may be cut randomly from the surrounding region as repair blocks so that the first occlusion region is repaired by several repair blocks. When the area of the surrounding region is smaller than the area of the first occlusion region, the image block with the smallest pixel jump value may be cut from the surrounding region as the repair block, or an image block adjacent to the first occlusion region may be cut from the surrounding region as the repair block, and the first occlusion region is repaired based on the repair block. Repairing the first occlusion region based on the repair block may include: replacing the pixel values of the image corresponding to the first occlusion region with the pixel values of the image corresponding to the repair block.
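A sketch of the pixel-value replacement described at the end of the paragraph, assuming the repair block is resized to the occlusion bounding box and only the occluded pixels are overwritten:

```python
import cv2
import numpy as np

def repair_occlusion(image, mask, repair_block):
    """Replace occluded pixel values with values from the repair block."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return image
    x0, x1, y0, y1 = xs.min(), xs.max() + 1, ys.min(), ys.max() + 1
    patch = cv2.resize(repair_block, (x1 - x0, y1 - y0))
    out = image.copy()
    occluded = mask[y0:y1, x0:x1] > 0
    out[y0:y1, x0:x1][occluded] = patch[occluded]   # touch only the occluded pixels
    return out
```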
In some embodiments, acquiring a target decoration image when the feature value of the first occlusion region is smaller than a preset feature threshold includes: when the feature value of the first occlusion region is smaller than the preset feature threshold, identifying a subject image of the captured image; determining a category of the subject image; and determining a target decoration image matching the subject image according to the category of the subject image. The advantage of this arrangement is that the occlusion region in the captured image can be decorated with a decoration image that better matches the subject image, which not only eliminates the effect of the occlusion region on the appearance of the captured image but also further improves its quality.
In the embodiment of the present application, when the feature value of the first occlusion region is smaller than the preset feature threshold, the proportion of the first occlusion region in the whole captured image is sufficiently small, so decorating it with a decoration image will not affect the visual effect or attractiveness of the whole captured image. A subject image in the captured image is identified, where the subject image is the image, as presented in the captured image, of the main subject being photographed. For example, the subject may be a museum, a child, a puppy, a sea of flowers, a tree and so on, and the subject image is the image corresponding to that subject. The category of the subject image is determined from the identified subject image, and a target decoration image matching the subject image is determined according to that category. For example, when the subject image is a puppy, it is determined that the subject image belongs to the "animal" category, and an animal image that matches the subject image, such as a cheerful cartoon animal image, may be used as the target decoration image. When the subject image is a sea of flowers and it is determined that the subject image belongs to the "landscape" category, a landscape image that matches the subject image, such as an image of roses, may be used as the target decoration image. As another example, when the subject image is a child and it is determined that the subject image belongs to the "person" category, a cartoon or animation character that matches the subject image, such as Ultraman or Winnie the Pooh, may be used as the target decoration image.
Determining the target decoration image matching the subject image according to the category of the subject image may include: according to the determined category of the subject image, looking up, from a preset correspondence list between subject images and decoration images, a decoration image matching the subject image as the target decoration image. Decorating the first occlusion region based on the determined target decoration image may include: covering the first occlusion region with the target decoration image so as to beautify the captured image. Of course, when the first occlusion region is located at the periphery of the captured image and its area is sufficiently small, a photo frame that matches the subject image may be added to the captured image so that the frame covers the first occlusion region as much as possible, which eliminates the effect of the first occlusion region on the captured image and beautifies it.
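The correspondence list lookup can be as simple as a dictionary keyed by category; the table contents below are illustrative assumptions only:

```python
from typing import Optional

# Assumed correspondence list between subject categories and decoration assets.
DECORATION_TABLE = {
    "animal":    "cartoon_animal.png",
    "landscape": "rose_sticker.png",
    "person":    "cartoon_character.png",
}

def pick_target_decoration(subject_category: str) -> Optional[str]:
    """Return the decoration image matching the subject category, if any."""
    return DECORATION_TABLE.get(subject_category)
```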
In some embodiments, acquiring a target decoration image when the feature value of the first occlusion region is smaller than a preset feature threshold includes: when the feature value of the first occlusion region is smaller than the preset feature threshold, determining a subject region in the captured image; judging whether the first occlusion region and the subject region overlap; and acquiring the target decoration image when the first occlusion region does not overlap the subject region. The advantage of this arrangement is that the occlusion region in the captured image is decorated only on the premise that it does not affect the subject image, that is, on the premise that the integrity of the photographed subject image is guaranteed, which can further improve the quality of the captured image.
In the embodiment of the present application, when the feature value of the first occlusion region is smaller than the preset feature threshold, the proportion of the first occlusion region in the whole captured image is sufficiently small. However, if the first occlusion region lies right over the subject image of the captured image, that is, the presence of the first occlusion region makes the subject image incomplete, then decorating the first occlusion region with a decoration image would eliminate its effect on the captured image but could not restore the integrity of the subject image, and the whole captured image would look obtrusive, inharmonious and not attractive enough. Therefore, when the feature value of the first occlusion region is smaller than the preset feature threshold, the subject region in the captured image is determined, and the target decoration image is acquired only when the first occlusion region and the subject region do not overlap. The subject region is the image region corresponding to the subject image in the captured image, that is, the image corresponding to the subject region contains the subject image of the captured image. Illustratively, the subject image in the captured image is recognized based on an image recognition technique, where the subject image is the image, as presented in the captured image, of the main subject being photographed, for example a museum, a child, a puppy, a sea of flowers or a tree. The image region corresponding to the subject image may be used as the subject region, or the circumscribed regular image region corresponding to the subject image may be used as the subject region. When the first occlusion region does not overlap the subject image, the first occlusion region affects the attractiveness of the captured image but not the integrity of the subject image, that is, the first occlusion region does not cover any part of the subject image; in this case the occlusion region can be decorated with the decoration image so as to beautify the captured image.
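The overlap test itself, sketched with two binary masks of the same size (any shared non-zero pixel counts as overlap):

```python
import numpy as np

def regions_overlap(occlusion_mask: np.ndarray, subject_mask: np.ndarray) -> bool:
    """True if the occlusion region and the subject region share any pixel."""
    return bool(np.any((occlusion_mask > 0) & (subject_mask > 0)))
```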
In some embodiments, performing occlusion detection on the captured image and determining a first occlusion region in the captured image includes: inputting the captured image into a pre-trained occlusion region determination model, wherein the occlusion region determination model is generated based on the characteristic rule that occlusion regions present in an image; and determining the first occlusion region in the captured image according to an output result of the occlusion region determination model. The advantage of this approach is that occlusion detection is performed on the captured image through the pre-built occlusion region determination model, so the occlusion region in the captured image can be determined accurately and quickly.
In the embodiment of the application, the occlusion region determining model can be understood as a learning model which can quickly determine the occlusion region in the shot image after the shot image is input, that is, a learning model which can quickly judge the specific distribution region of the occlusion region in the shot image. The occlusion region determination model may include any one of machine learning models such as a neural network model, a decision tree model, and a random forest model. The occlusion region determination model may be generated by training a sample training set, in which the sample training set includes a sample image in a sample library, and the sample image includes an occlusion region. Illustratively, the occlusion region determination model is generated based on a characteristic law of the occlusion region present in the image. It can be understood that the characteristics presented by the occlusion region and the non-occlusion region in one image are different, so that the characteristic rule presented by the occlusion region in the image can be learned to generate the occlusion region determination model. Wherein the feature that the occlusion region presents in the image may include: at least one of a size of the occlusion region in the image, a position of the occlusion region in the image, a shape of the occlusion region in the image, a brightness of the occlusion region, a color of the occlusion region, a degree of blur of the occlusion region, and a texture of the occlusion region. When the shielding detection event is triggered, a shot image of the camera is obtained, the obtained shot image is input into the shielding area determining model, the shielding area determining model can analyze the characteristic information of the shot image, and can determine the shielding area in the shot image according to the analysis result, namely determine which specific partial image area in the shot image is the first shielding area.
For example, after the captured image is input into the occlusion region determination model, if the model determines through analysis that an occlusion region exists in the captured image, it may output the captured image marked with the first occlusion region; that is, the output of the occlusion region determination model is still the captured image, but with the first occlusion region marked in it. If, after the captured image is input into the occlusion region determination model, the model determines through analysis that no occlusion region exists, it may output an image identical to the input captured image, that is, the output captured image contains no mark.
In some embodiments, before inputting the captured image into the pre-trained occlusion region determination model, the method further includes: acquiring a sample image, wherein the sample image includes an image with a second occlusion region; marking the second occlusion region in the sample image, and taking the sample image marked with the second occlusion region as a training sample set; and training a preset machine learning model with the training sample set so that it learns the characteristic rule of the second occlusion region, thereby obtaining the occlusion region determination model. The advantage of this arrangement is that using sample images containing occlusion regions as the sample source of the occlusion region determination model, and marking the occlusion regions in those sample images, can greatly improve the accuracy with which the occlusion region determination model is trained.
In an embodiment of the present application, a sample image is obtained, wherein the sample image includes an image in which a second occlusion region exists. The second occlusion region in the sample image can be determined based on an image processing technique, or according to a selection operation of the user. The second occlusion region is marked in the sample image, that is, the image region corresponding to the second occlusion region is marked in the corresponding sample image. The sample images marked with the second occlusion region are taken as the training sample set, and a preset machine learning model is trained with the training sample set so that it learns the characteristic rule of the second occlusion region, yielding the occlusion region determination model. Illustratively, the preset machine learning model learns information such as the shape, color, brightness, blur degree and texture of the second occlusion region in the training samples and its position in the sample image, and generates the occlusion region determination model according to the characteristic rule of the second occlusion region in the sample images. The preset machine learning model may be any one of a neural network model, a decision tree model, a random forest model and a naive Bayes model; the embodiment of the present application does not limit the preset machine learning model.
The occlusion region determination model is acquired before a captured image is input into the pre-trained occlusion region determination model. It should be noted that the mobile terminal may obtain the sample images, take the sample images labeled with the second occlusion region as the training sample set, train the preset machine learning model with the training sample set, and directly generate the occlusion region determination model. The mobile terminal may also directly call an occlusion region determination model generated by training on another mobile terminal. Of course, a server may also train on the training sample set based on the preset machine learning model to obtain the occlusion region determination model; when the mobile terminal needs to determine the occlusion region in a captured image, it calls the trained occlusion region determination model from the server.
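For illustration, one possible training step under the assumption that the occlusion region determination model is approximated by a per-patch random forest over simple brightness/contrast/sharpness features; the disclosure itself leaves the model family open (neural network, decision tree, random forest, naive Bayes, etc.):

```python
import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def patch_features(gray, block=32):
    """Simple hand-crafted features for each block of a grayscale sample image."""
    feats, coords = [], []
    h, w = gray.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            p = gray[y:y + block, x:x + block].astype(np.float32)
            feats.append([p.mean(),                              # brightness
                          p.std(),                               # contrast
                          cv2.Laplacian(p, cv2.CV_32F).var()])   # sharpness
            coords.append((y, x))
    return np.array(feats), coords

def train_occlusion_model(sample_grays, sample_masks, block=32):
    """Train a per-patch classifier on sample images with marked occlusion regions."""
    X, y = [], []
    for gray, mask in zip(sample_grays, sample_masks):
        feats, coords = patch_features(gray, block)
        labels = [int(mask[r:r + block, c:c + block].mean() > 127)  # 1 = marked occluded
                  for r, c in coords]
        X.append(feats)
        y.extend(labels)
    model = RandomForestClassifier(n_estimators=50)
    model.fit(np.vstack(X), np.array(y))
    return model
```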
Fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present application. As shown in fig. 2, the method includes:
Monitoring whether an occlusion detection instruction is received; when an occlusion detection instruction is received, determining that an occlusion detection event is triggered; or acquiring the exposure of the captured image, and when the exposure is greater than a preset exposure threshold, determining that the occlusion detection event is triggered.
The feature value of the first occlusion region comprises at least one of the area of the first occlusion region, the total number of pixels of the first occlusion region and the proportion of the first occlusion region in the captured image.
Step 204, judging whether the pixel jump value is greater than a preset jump threshold; if so, executing step 205, otherwise executing step 208.
Step 206, determining a target decoration image matching the subject image according to the category of the subject image.
Step 207, decorating the first occlusion region based on the target decoration image.
Step 208, determining a repair block from the surrounding region, and repairing the first occlusion region based on the repair block so as to beautify the captured image.
According to the image processing method provided by the embodiment of the application, when the area of the first occlusion region is smaller than a preset threshold, the pixel jump value of the surrounding region of the first occlusion region is acquired; when the pixel jump value is greater than the preset jump threshold, a target decoration image matching the subject image is acquired and the first occlusion region is decorated based on the target decoration image; when the pixel jump value is smaller than the preset jump threshold, a repair block is determined from the surrounding region and the first occlusion region is repaired based on the repair block. With this technical scheme, the quality of the captured image can be further improved while its integrity is guaranteed.
Fig. 3 is a schematic flowchart of an image processing method according to an embodiment of the present application. As shown in fig. 3, the method includes:
Wherein the sample image comprises an image in which the second occlusion region exists.
Step 302, marking the second occlusion region in the sample image, and taking the sample image marked with the second occlusion region as a training sample set.
Wherein the characteristics of the occlusion region presented in the image include: at least one of a size of the occlusion region in the image, a position of the occlusion region in the image, a shape of the occlusion region in the image, a brightness of the occlusion region, a color of the occlusion region, a degree of blur of the occlusion region, and a texture of the occlusion region.
Step 304, when the occlusion detection event is triggered, acquiring a captured image of the camera.
Monitoring whether an occlusion detection instruction is received; when an occlusion detection instruction is received, determining that an occlusion detection event is triggered; or acquiring the exposure of the captured image, and when the exposure is greater than a preset exposure threshold, determining that the occlusion detection event is triggered.
The occlusion region determination model is generated based on a characteristic rule presented by the occlusion region in the image;
Step 306, determining a first occlusion region in the captured image according to the output result of the occlusion region determination model.
Step 307, when the area of the first occlusion region is smaller than a preset threshold, determining a subject region in the captured image.
Step 310, decorating the first occlusion region based on the target decoration image so as to beautify the captured image.
According to the image processing method provided by the embodiment of the application, the captured image is input into the pre-trained occlusion region determination model, the first occlusion region in the captured image is determined according to the output result of the occlusion region determination model, and when the first occlusion region does not overlap the subject region, the first occlusion region is decorated based on the target decoration image. With this technical scheme, occlusion detection can be performed on the captured image through the pre-built occlusion region determination model, the occlusion region in the captured image can be determined accurately and quickly, and, on the premise of guaranteeing the integrity of the photographed subject image, the occlusion region in the captured image can be decorated with the decoration image, further improving the quality of the captured image.
Fig. 4 is a block diagram of an image processing apparatus, which may be implemented by software and/or hardware, and is generally integrated in a mobile terminal, and may improve the quality of a captured image by performing an image processing method according to an embodiment of the present disclosure. As shown in fig. 4, the apparatus includes:
a captured image acquisition module 401, configured to acquire a captured image of the camera when the occlusion detection event is triggered;
an occlusion region determining module 402, configured to perform occlusion detection on the captured image, and determine a first occlusion region in the captured image;
a decoration image obtaining module 403, configured to obtain a target decoration image when the feature value of the first occlusion region is smaller than a preset feature threshold; the feature value of the first occlusion region comprises at least one of the area of the first occlusion region, the total number of pixels of the first occlusion region and the proportion of the first occlusion region in the captured image;
an occlusion region decoration module 404, configured to decorate the first occlusion region based on the target decoration image.
The image processing device provided by the embodiment of the application acquires a captured image of the camera when an occlusion detection event is triggered, performs occlusion detection on the captured image, determines a first occlusion region in the captured image, and acquires a target decoration image when the feature value of the first occlusion region is smaller than a preset feature threshold, wherein the feature value of the first occlusion region includes at least one of the area of the first occlusion region, the total number of pixels of the first occlusion region and the proportion of the first occlusion region in the captured image; the occlusion region is then decorated based on the target decoration image. With this technical scheme, when the occlusion region is small it can be decorated with the decoration image, which eliminates the effect of the occlusion region on the appearance of the captured image and effectively improves the quality of the captured image.
Optionally, the decoration image obtaining module is configured to:
when the feature value of the first occlusion region is smaller than a preset feature threshold, acquiring a pixel jump value of a surrounding region of the first occlusion region;
and when the pixel jump value is greater than a preset jump threshold, acquiring a target decoration image.
Optionally, the decoration image obtaining module is configured to:
when the feature value of the first occlusion region is smaller than a preset feature threshold, identifying a subject image of the captured image;
determining a category of the subject image;
and determining a target decoration image matching the subject image according to the category of the subject image.
Optionally, the decoration image obtaining module is configured to:
when the feature value of the first occlusion region is smaller than a preset feature threshold, determining a subject region in the captured image;
judging whether the first occlusion region and the subject region overlap;
and when the first occlusion region does not overlap the subject region, acquiring a target decoration image.
Optionally, the occlusion region determining module is configured to:
inputting the captured image into a pre-trained occlusion region determination model; wherein the occlusion region determination model is generated based on the characteristic rule that occlusion regions present in an image;
and determining a first occlusion region in the captured image according to an output result of the occlusion region determination model.
Optionally, the apparatus further comprises:
a sample image obtaining module, configured to obtain a sample image before the captured image is input into a pre-trained occlusion region determination model, where the sample image includes an image in which a second occlusion region exists;
the occlusion region labeling module is used for labeling the second occlusion region in the sample image and taking the sample image labeled with the second occlusion region as a training sample set;
and the occlusion region determination model training module is used for training a preset machine learning model by using the training sample set so as to learn the characteristic rule of the second occlusion region, and thus the occlusion region determination model is obtained.
Optionally, the occlusion detection event is triggered, including:
monitoring whether an occlusion detection instruction is received; determining that an occlusion detection event is triggered when the occlusion detection instruction is received; or
Acquiring the exposure of a shot image; and when the exposure is greater than a preset exposure threshold, determining that a shielding detection event is triggered.
Embodiments of the present application also provide a storage medium containing computer-executable instructions, which when executed by a computer processor, perform a method of image processing, the method comprising:
when an occlusion detection event is triggered, acquiring a captured image of the camera;
performing occlusion detection on the captured image, and determining a first occlusion region in the captured image;
when the feature value of the first occlusion region is smaller than a preset feature threshold, acquiring a target decoration image; the feature value of the first occlusion region comprises at least one of the area of the first occlusion region, the total number of pixels of the first occlusion region and the proportion of the first occlusion region in the captured image;
decorating the first occlusion region based on the target decoration image.
Storage medium - any of various types of memory devices or storage devices. The term "storage medium" is intended to include: installation media such as CD-ROM, floppy disk, or tape devices; computer system memory or random access memory such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, etc.; non-volatile memory such as flash memory, magnetic media (e.g., hard disk) or optical storage; registers or other similar types of memory elements, etc. The storage medium may also include other types of memory or combinations thereof. In addition, the storage medium may be located in a first computer system in which the program is executed, or may be located in a different, second computer system connected to the first computer system through a network (such as the Internet). The second computer system may provide program instructions to the first computer for execution. The term "storage medium" may include two or more storage media that may reside in different locations, such as in different computer systems connected by a network. The storage medium may store program instructions (e.g., embodied as a computer program) that are executable by one or more processors.
Of course, the storage medium provided in the embodiments of the present application contains computer-executable instructions, and the computer-executable instructions are not limited to the image processing operations described above, and may also perform related operations in the image processing method provided in any embodiment of the present application.
The embodiment of the application provides a mobile terminal, and the image processing device provided by the embodiment of the application can be integrated in the mobile terminal. Fig. 5 is a schematic structural diagram of a mobile terminal according to an embodiment of the present application. The mobile terminal 500 may include: the image processing system comprises a memory 501, a processor 502 and a computer program stored on the memory and executable by the processor, wherein the processor 502 implements the image processing method according to the embodiment of the present application when executing the computer program.
The mobile terminal provided by the embodiment of the application can decorate the occlusion region with a decoration image when the occlusion region is small, which not only eliminates the effect of the occlusion region on the attractiveness of the captured image but also effectively improves the quality of the captured image.
Fig. 6 is a schematic structural diagram of another mobile terminal provided in an embodiment of the present application, where the mobile terminal may include: a housing (not shown), a memory 601, a Central Processing Unit (CPU) 602 (also called a processor, hereinafter referred to as CPU), a circuit board (not shown), and a power circuit (not shown). The circuit board is arranged in a space enclosed by the shell; the CPU602 and the memory 601 are disposed on the circuit board; the power supply circuit is used for supplying power to each circuit or device of the mobile terminal; the memory 601 is used for storing executable program codes; the CPU602 executes a computer program corresponding to the executable program code by reading the executable program code stored in the memory 601 to implement the steps of:
when an occlusion detection event is triggered, acquiring a captured image of the camera;
performing occlusion detection on the captured image, and determining a first occlusion region in the captured image;
when the feature value of the first occlusion region is smaller than a preset feature threshold, acquiring a target decoration image; the feature value of the first occlusion region comprises at least one of the area of the first occlusion region, the total number of pixels of the first occlusion region and the proportion of the first occlusion region in the captured image;
decorating the first occlusion region based on the target decoration image.
The mobile terminal further includes: a peripheral interface 603, RF (Radio Frequency) circuitry 605, audio circuitry 606, a speaker 611, a power management chip 608, an input/output (I/O) subsystem 609, other input/control devices 610, a touch screen 612, and an external port 604, which communicate via one or more communication buses or signal lines 607.
It should be understood that the illustrated mobile terminal 600 is merely one example of a mobile terminal, and that the mobile terminal 600 may have more or fewer components than shown, may combine two or more components, or may have a different configuration or arrangement of components. The various components shown in the figure may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application-specific integrated circuits.
The mobile terminal for image processing provided in this embodiment is described in detail below, taking a mobile phone as an example.
The memory 601 can be accessed by the CPU 602, the peripheral interface 603, and the like. The memory 601 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
The peripheral interface 603 can connect input and output peripherals of the device to the CPU 602 and the memory 601.
The I/O subsystem 609 can connect input and output peripherals on the device, such as the touch screen 612 and other input/control devices 610, to the peripheral interface 603. The I/O subsystem 609 may include a display controller 6091 and one or more input controllers 6092 for controlling the other input/control devices 610. The one or more input controllers 6092 receive electrical signals from, or send electrical signals to, the other input/control devices 610, which may include physical buttons (push buttons, rocker buttons, etc.), dials, slide switches, joysticks, and click wheels. It is worth noting that an input controller 6092 may be connected to any of the following: a keyboard, an infrared port, a USB interface, or a pointing device such as a mouse.
The touch screen 612 is an input and output interface between the mobile terminal and the user, and displays visual output to the user; the visual output may include graphics, text, icons, video, and the like.
The display controller 6091 in the I/O subsystem 609 receives electrical signals from, or sends electrical signals to, the touch screen 612. The touch screen 612 detects contact on the touch screen, and the display controller 6091 converts the detected contact into interaction with a user interface object displayed on the touch screen 612, thereby implementing human-computer interaction. The user interface object displayed on the touch screen 612 may be an icon for running a game, an icon for connecting to a corresponding network, or the like. It is worth mentioning that the device may also include a light mouse, which is a touch-sensitive surface that does not display visual output, or an extension of the touch-sensitive surface formed by the touch screen.
The RF circuit 605 is mainly used to establish communication between the mobile phone and the wireless network (i.e., the network side) and to receive and send data between the mobile phone and the wireless network, such as short messages and e-mails. Specifically, the RF circuit 605 receives and sends RF signals, which are also called electromagnetic signals; the RF circuit 605 converts electrical signals into electromagnetic signals or electromagnetic signals into electrical signals, and communicates with mobile communication networks and other devices through the electromagnetic signals. The RF circuitry 605 may include known circuitry for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a codec (CODEC) chipset, a subscriber identity module (SIM), and so forth.
The audio circuit 606 is mainly used to receive audio data from the peripheral interface 603, convert the audio data into an electrical signal, and send the electrical signal to the speaker 611.
The speaker 611 is used to convert the voice signal received by the mobile phone from the wireless network through the RF circuit 605 into sound and play the sound to the user.
The power management chip 608 is used to supply power to and manage the power of the hardware connected to the CPU 602, the I/O subsystem, and the peripheral interface.
The image processing apparatus, storage medium, and mobile terminal provided in the above embodiments can execute the image processing method provided in any embodiment of the present application, and have the corresponding functional modules and beneficial effects for executing the method. For technical details not described in detail above, reference may be made to the image processing method provided in any embodiment of the present application.
It should be noted that the foregoing is only a description of the preferred embodiments of the present application and the technical principles employed. Those skilled in the art will understand that the present application is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements, and substitutions can be made without departing from the scope of protection of the present application. Therefore, although the present application has been described in some detail through the above embodiments, it is not limited to these embodiments and may include other equivalent embodiments without departing from its concept; the scope of the present application is determined by the scope of the appended claims.
Claims (10)
1. An image processing method, comprising:
when an occlusion detection event is triggered, acquiring an image captured by a camera;
performing occlusion detection on the captured image, and determining a first occlusion region in the captured image;
when a feature value of the first occlusion region is smaller than a preset feature threshold, acquiring a target decoration image; wherein the feature value of the first occlusion region comprises at least one of an area of the first occlusion region, a total number of pixels in the first occlusion region, and a proportion of the first occlusion region in the captured image; and the target decoration image comprises at least one of a cartoon character, a sticker, text, an emoticon, and a decoration frame; and
decorating the first occlusion region based on the target decoration image.
2. The method according to claim 1, wherein acquiring the target decoration image when the feature value of the first occlusion region is smaller than the preset feature threshold comprises:
when the feature value of the first occlusion region is smaller than the preset feature threshold, acquiring a pixel jump value of an area surrounding the first occlusion region; and
when the pixel jump value is greater than a preset jump threshold, acquiring the target decoration image.
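As one possible, purely illustrative reading of claim 2, the sketch below computes the "pixel jump value" as the mean absolute intensity gradient in a thin band around the occlusion mask; the band width, the dilation-based definition of the surrounding area, and the function name are assumptions, not part of the claim.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def surrounding_pixel_jump(gray: np.ndarray, occlusion_mask: np.ndarray,
                           band: int = 5) -> float:
    """Mean absolute intensity jump in a ring of `band` pixels around the
    occlusion mask (an assumed form of the claim-2 pixel jump value)."""
    gray = gray.astype(np.float32)
    # Grow the mask and keep only the ring outside the original region.
    ring = binary_dilation(occlusion_mask, iterations=band) & ~occlusion_mask
    if not ring.any():
        return 0.0
    # Horizontal and vertical intensity differences, same shape as the image.
    gx = np.abs(np.diff(gray, axis=1, prepend=gray[:, :1]))
    gy = np.abs(np.diff(gray, axis=0, prepend=gray[:1, :]))
    return float((gx + gy)[ring].mean())
```

The target decoration image would then be fetched only when this value exceeds the preset jump threshold.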
3. The method according to claim 1, wherein acquiring the target decoration image when the feature value of the first occlusion region is smaller than the preset feature threshold comprises:
when the feature value of the first occlusion region is smaller than the preset feature threshold, identifying a subject image in the captured image;
determining a category of the subject image; and
determining, according to the category of the subject image, a target decoration image matching the subject image.
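A trivial sketch of the claim-3 matching step follows; the category names and decoration asset file names are hypothetical placeholders, and the subject classifier itself is out of scope here.

```python
# Hypothetical mapping from a recognized subject category to a decoration asset.
DECORATION_BY_CATEGORY = {
    "person": "sticker_cartoon_face.png",
    "food": "frame_plate.png",
    "pet": "sticker_paw_print.png",
}

def pick_decoration(subject_category: str,
                    default: str = "frame_generic.png") -> str:
    """Return a decoration image matching the subject category (claim 3, sketched)."""
    return DECORATION_BY_CATEGORY.get(subject_category, default)
```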
4. The method according to claim 1, wherein acquiring the target decoration image when the feature value of the first occlusion region is smaller than the preset feature threshold comprises:
when the feature value of the first occlusion region is smaller than the preset feature threshold, determining a subject region in the captured image;
determining whether the first occlusion region overlaps the subject region; and
when the first occlusion region does not overlap the subject region, acquiring the target decoration image.
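A sketch of the claim-4 overlap test, assuming both regions have been reduced to axis-aligned bounding boxes (x1, y1, x2, y2); a pixel-mask intersection would be an equally valid reading.

```python
def regions_overlap(occlusion_box, subject_box) -> bool:
    """True if the two axis-aligned boxes (x1, y1, x2, y2) intersect."""
    ox1, oy1, ox2, oy2 = occlusion_box
    sx1, sy1, sx2, sy2 = subject_box
    # Boxes are disjoint when one lies entirely to the left, right,
    # above, or below the other.
    return not (ox2 <= sx1 or sx2 <= ox1 or oy2 <= sy1 or sy2 <= oy1)
```

The target decoration image is acquired only when this check returns False.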
5. The method according to claim 1, wherein performing occlusion detection on the captured image and determining the first occlusion region in the captured image comprises:
inputting the captured image into a pre-trained occlusion region determination model, wherein the occlusion region determination model is generated based on characteristic rules of occlusion regions presented in images; and
determining the first occlusion region in the captured image according to an output result of the occlusion region determination model.
6. The method according to claim 5, further comprising, before inputting the captured image into the pre-trained occlusion region determination model:
acquiring a sample image, wherein the sample image comprises an image with a second occlusion region;
marking the second occlusion region in the sample image, and using the sample image marked with the second occlusion region as a training sample set; and
training a preset machine learning model with the training sample set to learn the characteristic rule of the second occlusion region, so as to obtain the occlusion region determination model.
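Claims 5 and 6 leave the model family open. The toy sketch below trains a per-pixel classifier on sample images whose second occlusion regions were marked by hand, using scikit-learn purely as an illustrative stand-in for the "preset machine learning model"; the feature choice and classifier are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pixel_features(image: np.ndarray) -> np.ndarray:
    """Per-pixel features: RGB values plus a local brightness value."""
    h, w, _ = image.shape
    brightness = image.mean(axis=2, keepdims=True)
    return np.concatenate([image, brightness], axis=2).reshape(h * w, 4)

def train_occlusion_model(sample_images, occlusion_masks):
    """Fit a classifier on images whose second occlusion regions were marked
    by hand (the training sample set of claim 6)."""
    X = np.vstack([pixel_features(img) for img in sample_images])
    y = np.concatenate([m.reshape(-1) for m in occlusion_masks]).astype(int)
    model = RandomForestClassifier(n_estimators=50)
    model.fit(X, y)
    return model

def predict_occlusion_mask(model, image: np.ndarray) -> np.ndarray:
    """Claim-5 style inference: the model output is reshaped back into a mask."""
    h, w, _ = image.shape
    return model.predict(pixel_features(image)).reshape(h, w).astype(bool)
```

In practice the disclosed model would more likely be a segmentation network, but the train/label/infer structure is the same.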
7. The method according to any one of claims 1 to 6, wherein the occlusion detection event being triggered comprises:
monitoring whether an occlusion detection instruction is received, and determining that the occlusion detection event is triggered when the occlusion detection instruction is received; or
acquiring an exposure of the captured image, and determining that the occlusion detection event is triggered when the exposure is greater than a preset exposure threshold.
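The two trigger branches of claim 7 reduce to a simple boolean check; the sketch below assumes the exposure value is already normalized and uses a placeholder threshold.

```python
def occlusion_detection_triggered(instruction_received: bool,
                                  exposure: float,
                                  exposure_threshold: float = 0.8) -> bool:
    """Claim-7 trigger: an explicit occlusion detection instruction, or the
    captured image's exposure exceeding a preset exposure threshold."""
    return instruction_received or exposure > exposure_threshold
```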
8. An image processing apparatus, comprising:
a captured image acquisition module, configured to acquire an image captured by a camera when an occlusion detection event is triggered;
an occlusion region determination module, configured to perform occlusion detection on the captured image and determine a first occlusion region in the captured image;
a decoration image acquisition module, configured to acquire a target decoration image when a feature value of the first occlusion region is smaller than a preset feature threshold; wherein the feature value of the first occlusion region comprises at least one of an area of the first occlusion region, a total number of pixels in the first occlusion region, and a proportion of the first occlusion region in the captured image; and the target decoration image comprises at least one of a cartoon character, a sticker, text, an emoticon, and a decoration frame; and
an occlusion region decoration module, configured to decorate the first occlusion region based on the target decoration image.
9. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the image processing method according to any one of claims 1 to 7.
10. A mobile terminal comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the image processing method according to any one of claims 1 to 7 when executing the computer program.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810455796.8A CN108494996B (en) | 2018-05-14 | 2018-05-14 | Image processing method, device, storage medium and mobile terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108494996A (en) | 2018-09-04
CN108494996B (en) | 2021-01-15
Family
ID=63353878
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810455796.8A Active CN108494996B (en) | 2018-05-14 | 2018-05-14 | Image processing method, device, storage medium and mobile terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108494996B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11147228B2 (en) * | 2019-02-15 | 2021-10-19 | Syngenta Crop Protection Ag | Soybean cultivar EC1661470 |
CN110598217B (en) * | 2019-09-19 | 2023-10-20 | 广东小天才科技有限公司 | Click-to-read content identification method and device, home teaching machine and storage medium |
CN111145135B (en) * | 2019-12-30 | 2021-08-10 | 腾讯科技(深圳)有限公司 | Image descrambling processing method, device, equipment and storage medium |
CN112256891A (en) * | 2020-10-26 | 2021-01-22 | 北京达佳互联信息技术有限公司 | Multimedia resource recommendation method and device, electronic equipment and storage medium |
CN113688937A (en) * | 2021-09-07 | 2021-11-23 | 北京沃东天骏信息技术有限公司 | Image processing method and device and storage medium |
CN115880168A (en) * | 2022-09-30 | 2023-03-31 | 北京字跳网络技术有限公司 | Image restoration method, device, equipment, computer readable storage medium and product |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2006109398A1 (en) * | 2005-03-15 | 2006-10-19 | Omron Corporation | Image processing device and method, program, and recording medium |
CN101266685A (en) * | 2007-03-14 | 2008-09-17 | 中国科学院自动化研究所 | A method for removing unrelated images based on multiple photos |
CN105763812A (en) * | 2016-03-31 | 2016-07-13 | 北京小米移动软件有限公司 | Intelligent photographing method and device |
CN106454085A (en) * | 2016-09-30 | 2017-02-22 | 维沃移动通信有限公司 | Image processing method and mobile terminal |
CN106454093A (en) * | 2016-10-18 | 2017-02-22 | 北京小米移动软件有限公司 | Image processing method, image processing device and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
CN108494996A (en) | 2018-09-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108566516B (en) | Image processing method, device, storage medium and mobile terminal | |
CN108494996B (en) | Image processing method, device, storage medium and mobile terminal | |
CN110929651B (en) | Image processing method, image processing device, electronic equipment and storage medium | |
CN108551552B (en) | Image processing method, device, storage medium and mobile terminal | |
CN109523485B (en) | Image color correction method, device, storage medium and mobile terminal | |
CN107992794B (en) | A kind of biopsy method, device and storage medium | |
CN110020622B (en) | Fingerprint identification method and related product | |
CN108683845B (en) | Image processing method, device, storage medium and mobile terminal | |
CN108712606B (en) | Reminding method, device, storage medium and mobile terminal | |
CN112262563B (en) | Image processing method and electronic device | |
CN109741281B (en) | Image processing method, image processing device, storage medium and terminal | |
CN107820020A (en) | Method of adjustment, device, storage medium and the mobile terminal of acquisition parameters | |
CN109348135A (en) | Photographic method, device, storage medium and terminal device | |
CN109685746A (en) | Brightness of image method of adjustment, device, storage medium and terminal | |
US11030733B2 (en) | Method, electronic device and storage medium for processing image | |
CN109089043B (en) | Shot image preprocessing method and device, storage medium and mobile terminal | |
CN108848313B (en) | Multi-person photographing method, terminal and storage medium | |
CN108681402A (en) | Identify exchange method, device, storage medium and terminal device | |
CN111480333A (en) | Light supplementing photographing method, mobile terminal and computer readable storage medium | |
CN109218621B (en) | Image processing method, device, storage medium and mobile terminal | |
CN108765380A (en) | Image processing method, device, storage medium and mobile terminal | |
CN107292817B (en) | Image processing method, device, storage medium and terminal | |
CN108491780B (en) | Image beautification processing method and device, storage medium and terminal equipment | |
CN110796673B (en) | Image segmentation method and related product | |
CN110363702B (en) | Image processing method and related product |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||