WO2020038065A1 - Image processing method, terminal, and computer storage medium - Google Patents

Image processing method, terminal, and computer storage medium

Info

Publication number
WO2020038065A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
regions
region
recognition
processed
Prior art date
Application number
PCT/CN2019/090079
Other languages
French (fr)
Chinese (zh)
Inventor
胡允侃
纪德威
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司
Publication of WO2020038065A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/04Context-preserving transformations, e.g. by using an importance map
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • G06T2207/20032Median filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Definitions

  • the present invention relates to, but is not limited to, the field of image processing technologies, and in particular, to an image processing method, a terminal, and a computer storage medium.
  • an object other than the object in the image is subjected to a blurring process so that an object that has not been blurred is highlighted.
  • a situation of excessive blurring may occur, causing the user to fail to recognize the blurred object.
  • embodiments of the present invention aim to provide an image processing method, a terminal, and a computer storage medium, so as to avoid the situation in which, when a specific object in an image is blurred, excessive blurring leaves the user unable to recognize the blurred object.
  • An image processing method includes:
  • a target image is acquired based on the first image and the second image.
  • a terminal includes: a processor, a memory, and a communication bus;
  • the communication bus is used to implement a communication connection between the processor and the memory
  • the processor is configured to execute a program of an image processing method in a memory to implement the following steps:
  • a target image is acquired based on the first image and the second image.
  • a computer storage medium stores one or more programs, and the one or more programs can be executed by one or more processors to implement the steps of the image processing method described above.
  • the image processing method, the terminal, and the computer storage medium provided in the embodiments of the present invention acquire a first image to be processed and recognize the first image to obtain multiple image regions; determine a region to be processed from the multiple image regions and blur the object at the region to be processed to obtain a second image; and acquire a target image based on the first image and the second image. Because the embodiments of the present invention blur only the region to be processed of the first image to obtain the second image and then combine the second image with the first image, they avoid the situation in which excessive blurring of a specific object in the image leaves the user unable to recognize the blurred object.
  • FIG. 1 is a schematic flowchart of an image processing method according to an embodiment of the present invention
  • FIG. 2 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of an image change process according to an embodiment of the present invention.
  • FIG. 4 is a schematic flowchart of another image processing method according to an embodiment of the present invention.
  • FIG. 5 is a schematic flowchart of another image processing method according to an embodiment of the present invention.
  • FIG. 6 is a schematic flowchart of an image processing method according to another embodiment of the present invention.
  • FIG. 7 is a schematic flowchart of another image processing method according to another embodiment of the present invention.
  • FIG. 8 is a schematic flowchart of an implementation manner of an image processing method according to an embodiment of the present invention.
  • FIG. 9 is a schematic structural diagram of another terminal according to an embodiment of the present invention.
  • an embodiment of the present invention or “the foregoing embodiment” mentioned throughout the specification means that a particular feature, structure, or characteristic related to the embodiment is included in at least one embodiment of the present invention. Therefore, "in the embodiments of the present invention” or “in the foregoing embodiments” appearing throughout the specification does not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
  • the size of the sequence numbers of the above processes does not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention.
  • the sequence numbers of the foregoing embodiments of the present invention are only for description, and do not represent the superiority or inferiority of the embodiments.
  • the image processing method of any embodiment of the present invention is applied to a terminal.
  • the terminal may be a mobile phone, a computer, a camera, or a tablet computer.
  • the present invention does not limit the terminal, as long as the terminal can implement the image processing function of any embodiment of the invention.
  • An embodiment of the present invention provides an image processing method, which is applied to a terminal. As shown in FIG. 1, the method includes the following steps:
  • Step 101 The terminal acquires a first image to be processed, and recognizes the first image to obtain multiple image regions.
  • the terminal may include an image processing module, a camera module, and an image display module.
  • the image processing module may further include an image preprocessing module and an image scene enhancement module.
  • the image processing module may be a processor
  • the camera module may be an image collector, such as a camera
  • the image display module may be a display screen.
  • the step of recognizing the first image to obtain multiple image regions, as well as the following steps 102 and 103, is executed by the image scene enhancement module.
  • the image collector collects the first image and sends it to the processor, and the processor obtains the first image sent by the image collector; alternatively, the processor may acquire the first image by reading a first image stored in a memory connected to the processor through the communication bus.
  • the user may choose to enable the preset function to implement the image processing method in this embodiment.
  • the first image may include multiple objects, and each object corresponds to an image area.
  • Objects can be anything in the image, such as portraits, human eyes, big trees, and so on.
  • Recognizing the first image to obtain multiple image regions may include: identifying edges of objects in the first image to obtain multiple image regions.
  • the recognition result may also be output on the display screen of the terminal.
  • the edge of the object may be output on the display screen in a dashed or solid line.
  • the user can clearly know the result of the image recognition, which is convenient for the user's subsequent operations.
  • the display screen may display a plurality of image regions with a preset distance between each two adjacent image regions.
  • the processor may recognize edges of each object in the first image to obtain multiple image regions, and output all recognition results on a display screen. In this way, each object region in the image can be identified to avoid missing objects in the first image and thus missing key information in the image.
  • the processor may recognize the edges of every object in the first image to obtain multiple image regions, obtain the ratio of the area of each image region to the area of the first image, and display on the display screen only the recognition results of those image regions whose area ratio to the first image is greater than a preset value. For example, for an image containing a portrait, only the recognition result of the portrait's head may be shown on the display screen, and not the recognition result of the portrait's eyes.
  • similarly, for an image containing a large tree, only the recognition result of the tree may be displayed, and not the recognition result of each individual leaf. In this way, the processor's recognition result for every object in the image is not displayed on the display screen, which would otherwise clutter the display and make it difficult for the user to select image regions that are too small.
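  • As an illustrative sketch only (not part of the claimed method), the area-ratio filtering described above could look roughly like the following in Python with OpenCV; the contour-based region representation, the Canny thresholds, and the 1% area threshold are assumptions made for this example.

```python
import cv2

def regions_to_display(first_image, preset_ratio=0.01):
    """Return only the recognized regions whose area exceeds a preset fraction
    of the first image's area; `preset_ratio` is a hypothetical threshold."""
    gray = cv2.cvtColor(first_image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)  # example edge map of the first image
    # OpenCV 4.x signature: findContours returns (contours, hierarchy).
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    image_area = first_image.shape[0] * first_image.shape[1]
    # Keep only regions whose area ratio to the first image exceeds the preset value.
    return [c for c in contours if cv2.contourArea(c) / image_area > preset_ratio]
```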
  • recognizing the edges of objects in the first image to obtain multiple image regions can be implemented in multiple ways: for example, the edges of the objects in the first image can be recognized using an edge detection method, a multi-gradient detection method, or a combination of the two to obtain the multiple image regions.
  • the edge detection method may be any one of an edge detection method based on the Roberts operator, the Sobel operator, the Prewitt operator, the Laplace operator, the Laplacian of Gaussian (LOG) operator, or the Canny operator, a wavelet analysis method, a fuzzy algorithm, or an artificial neural network method.
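  • For illustration, several of the listed operators are available directly in OpenCV; a minimal sketch follows, in which the input file name and the thresholds are example values rather than values specified by the patent.

```python
import cv2

img = cv2.imread("first_image.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input file

# Canny operator: the two hysteresis thresholds are example values.
canny_edges = cv2.Canny(img, 50, 150)

# Sobel operator: gradient magnitude thresholded into a binary edge map.
gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
sobel_edges = cv2.magnitude(gx, gy) > 100  # example threshold

# Laplacian of Gaussian (LOG): Gaussian smoothing followed by the Laplace operator.
log_edges = cv2.Laplacian(cv2.GaussianBlur(img, (5, 5), 0), cv2.CV_32F)
```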
  • the specific steps of the multi-gradient detection method are: extract the luminance component Y, the color components Cb (blue chrominance) and Cr (red chrominance), and the depth component D of each pixel in the first image; compute a gradient image for each component; fuse the computed gradient images separately for different directions θ (θ may take 0, π/8, π/4, 3π/8, ..., 7π/8); after fusion, take the maximum gradient value over the directions as the final gradient of the pixel (luminance gradient Gy, color gradients Gcb and Gcr, and depth gradient Gd); fuse these four gradients to obtain a fused gradient; and obtain multiple image regions based on the fused gradient.
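  • A rough sketch of the per-component gradient computation and an example linear-weighted fusion of the four gradients is shown below. It assumes a depth map is available from some external source (e.g., a depth sensor) and uses example weights; the maximization over directions θ is simplified to a Sobel gradient magnitude per component.

```python
import cv2
import numpy as np

def fused_gradient(bgr_image, depth_map, weights=(0.4, 0.2, 0.2, 0.2)):
    """Fuse luminance, Cb, Cr, and depth gradients into one gradient map.

    `depth_map` and `weights` (the alpha_i of a linear weighting) are assumptions
    for this sketch; the maximization over directions theta is approximated by the
    Sobel gradient magnitude of each component.
    """
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    y, cr, cb = cv2.split(ycrcb)  # OpenCV orders the channels Y, Cr, Cb

    def grad_mag(channel):
        c = channel.astype(np.float32)
        gx = cv2.Sobel(c, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(c, cv2.CV_32F, 0, 1, ksize=3)
        return cv2.magnitude(gx, gy)

    g_y, g_cb, g_cr, g_d = grad_mag(y), grad_mag(cb), grad_mag(cr), grad_mag(depth_map)
    a1, a2, a3, a4 = weights
    # Linear weighting: G_mix = sum_i alpha_i * G_i
    return a1 * g_y + a2 * g_cb + a3 * g_cr + a4 * g_d
```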
  • Step 102 The terminal determines a region to be processed from a plurality of image regions, and performs a blurring process on an object at the region to be processed to obtain a second image.
  • the region to be processed is the image region, among the multiple image regions, that needs to be blurred.
  • the region to be processed may be one image region among a plurality of image regions, or may be at least two image regions. For example, when M image regions are identified, the region to be processed may be m image regions, where 1 ⁇ m ⁇ M.
  • the processor may identify an image region having a preset feature from a plurality of image regions, such as a portrait region, an animal region, or a plant region, and use the identified image region having the preset feature as a region to be processed.
  • the processor may obtain an operation instruction, and use an image area corresponding to the operation instruction as the area to be processed, or use an image area other than the image area corresponding to the operation instruction as the area to be processed.
  • blurring the object at the region to be processed to obtain the second image may be implemented by using a morphological filtering method to blur the object at the region to be processed.
  • the morphological filtering method may process the region to be processed using at least one of an erosion operation, a dilation operation, an opening operation, and a closing operation. It should be understood that the size of the second image is the same as the size of the first image.
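  • A minimal sketch of blurring the region to be processed with morphological filtering is given below; the binary mask of the region, the elliptical kernel size, and the choice of an opening followed by a closing are assumptions made for the example.

```python
import cv2
import numpy as np

def blur_region_morphological(first_image, mask, kernel_size=15):
    """Return the second image: the first image with the masked region
    morphologically smoothed. `mask` (255 inside the region to be processed,
    0 elsewhere) and `kernel_size` are assumptions for this sketch."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    # Opening (erosion then dilation) followed by closing (dilation then erosion)
    # removes fine detail inside the region, which produces the blurring effect.
    smoothed = cv2.morphologyEx(first_image, cv2.MORPH_OPEN, kernel)
    smoothed = cv2.morphologyEx(smoothed, cv2.MORPH_CLOSE, kernel)
    # The second image keeps the same size as the first image.
    return np.where(mask[..., None] > 0, smoothed, first_image)
```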
  • Step 103 The terminal obtains a target image based on the first image and the second image.
  • the target image may be acquired by fusing the first image and the second image.
  • acquiring the target image based on the first image and the second image may include: fusing the region corresponding to the region to be processed in the first image with the corresponding region in the second image to obtain the target image.
  • the processor may fuse the blurred image region in the second image with the region to be processed in the first image to obtain a fused image region, and then combine the fused image region with the reference regions of the first image other than the region to be processed (or, equivalently, with the regions of the second image that were not blurred) to obtain the target image.
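  • The composition of the target image could be sketched as follows; the fusion weight and the mask-based region representation are assumptions for this illustration, not values fixed by the patent.

```python
import numpy as np

def compose_target_image(first_image, second_image, mask, alpha=0.7):
    """Compose the target image from the first image and the blurred second image.

    Inside the region to be processed (mask > 0) the second image is fused with
    the first image by weighted averaging; the reference regions outside the mask
    are taken from the first image unchanged. `alpha` is an example fusion weight.
    """
    fused_region = alpha * second_image.astype(np.float32) + \
                   (1.0 - alpha) * first_image.astype(np.float32)
    target = np.where(mask[..., None] > 0, fused_region, first_image.astype(np.float32))
    return target.astype(first_image.dtype)
```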
  • the image fusion method may be an image fusion method based on pixel grayscale, an image fusion method based on principal component analysis (PCA) transformation, or an image fusion method based on hue, intensity, and saturation (HIS).
  • the pixel-grayscale-based image fusion method may select the larger of the two pixel grayscale values, select the smaller of the two pixel grayscale values, or take a weighted average of the pixel grayscale values.
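  • For illustration, the three pixel-grayscale fusion rules mentioned above could be written as follows; which rule to use is a design choice the text leaves open.

```python
import cv2
import numpy as np

def fuse_max(a, b):
    # Pixel-grayscale fusion that selects the larger grayscale value.
    return np.maximum(a, b)

def fuse_min(a, b):
    # Pixel-grayscale fusion that selects the smaller grayscale value.
    return np.minimum(a, b)

def fuse_weighted(a, b, w=0.5):
    # Weighted-average fusion; w = 0.5 is an example weight.
    return cv2.addWeighted(a, w, b, 1.0 - w, 0)
```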
  • after the processor obtains the target image, it can output the target image to the display screen of the terminal.
  • the first image is recognized to obtain four image areas: area A1, area B1, area C1, and area D1, that is, the first image I includes area A1, area B1, area C1, and area D1.
  • the region C1 and the region D1 are reference regions other than the region to be processed in the first image, and the processor continues to blur the objects in the region A1 and the region B1 to be processed to obtain a second image II.
  • the second image includes area A2, area B2, area C1, and area D1.
  • area A2 and area B2 are the areas obtained after blurring area A1 and area B1, respectively; area A1 and area A2, and area B1 and area B2, are then fused to obtain a fused image region, and the fused image region is combined with regions C1 and D1 to obtain a target image III.
  • the embodiment of the present invention blurs the region to be processed of the first image to obtain the second image, and then combines the second image with the first image; this avoids the situation in which, when a specific object in the image is blurred, excessive blurring leaves the user unable to recognize the blurred object.
  • an embodiment of the present invention provides an image processing method, which is applied to a terminal. As shown in FIG. 4, the method includes the following steps:
  • Step 201 The terminal acquires a first image to be processed.
  • the image collector collects the first image and sends it to the processor, and the processor obtains the first image sent by the image collector; alternatively, the processor may acquire the first image by reading a first image stored in a memory connected to the processor through the communication bus.
  • the user may choose to enable the preset function to implement the image processing method in this embodiment.
  • Step 202 The terminal uses a first detection method to recognize the edges of the objects in the first image, and obtains multiple first recognition areas.
  • the first detection method is used to identify edges of objects in the first image.
  • the first detection method may be an edge detection method.
  • the edge detection method may be any one of an edge detection method based on the Roberts operator, the Sobel operator, the Prewitt operator, the Laplace operator, the Laplacian of Gaussian (LOG) operator, or the Canny operator, a wavelet analysis method, a fuzzy algorithm, or an artificial neural network method.
  • Step 203 The terminal uses a second detection method to recognize the edge of the object in the first image, and obtains a plurality of second recognition regions.
  • the second detection method is used to identify edges of objects in the first image.
  • the first detection method and the second detection method are two different detection methods.
  • the second detection method may be a multi-gradient detection method.
  • the second detection method may also be an edge detection method different from the first detection method.
  • the processor may execute step 202 and then step 203; or may execute step 203 and then step 202; or step 202 and step 203 may be performed simultaneously.
  • Step 204 The terminal determines a plurality of image regions based on the plurality of first identification regions and the plurality of second identification regions.
  • obtaining a plurality of image regions based on the first recognition result and the second recognition result includes: fusing the first recognition result and the second recognition result using a Bayesian probability method to obtain the plurality of image regions.
  • fusing the plurality of first recognition regions obtained with the first detection method and the plurality of second recognition regions obtained with the second detection method by means of the Bayesian probability method can improve the accuracy of the multiple image regions obtained by recognition.
  • multiple image regions can be determined by: determining an accuracy rate of each first recognition region based on the display parameters of the pixels in that first recognition region; determining an accuracy rate of each second recognition region based on the display parameters of the pixels in that second recognition region; and determining the multiple image regions based on the plurality of first recognition regions, the plurality of second recognition regions, the accuracy rate of each first recognition region, and the accuracy rate of each second recognition region.
  • the display parameter may be at least one of a brightness parameter, a depth parameter, and a color parameter.
  • the specific method of determining multiple image regions will be described below using the display parameter as an example of the brightness parameter. It should be understood that when the display parameters are other parameters, the method is similar and will not be repeated here.
  • the processor may obtain the brightness value of each pixel to build a statistics table of pixel brightness values over the first image; determine the accuracy rate of each first recognition region based on the display parameters of the pixels in that region and the statistics table; determine the accuracy rate of each second recognition region based on the display parameters of the pixels in that region and the statistics table; and determine the multiple image regions based on the plurality of first recognition regions, the plurality of second recognition regions, the accuracy rate of each first recognition region, and the accuracy rate of each second recognition region.
  • for example, if the accuracy rate of region E among the plurality of first recognition regions is higher than the accuracy rate of the region F that corresponds to region E among the plurality of second recognition regions, region E is used as the recognized image region; otherwise, region F is used as the recognized image region.
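  • The patent does not fix a concrete formula for the accuracy rate, so the following sketch only illustrates the selection step; the contrast-based score used here is an assumption for the example.

```python
import numpy as np

def region_accuracy(first_image_gray, region_mask):
    """Assumed accuracy score: contrast between the region and the rest of the
    first image, derived from the image's pixel-brightness statistics. The patent
    does not specify a formula, so this is only an illustration."""
    inside = first_image_gray[region_mask > 0].astype(np.float32)
    outside = first_image_gray[region_mask == 0].astype(np.float32)
    if inside.size == 0 or outside.size == 0:
        return 0.0
    return float(abs(inside.mean() - outside.mean()))

def pick_region(first_image_gray, mask_e, mask_f):
    """Keep region E (first detection) if its accuracy is higher than that of the
    corresponding region F (second detection); otherwise keep region F."""
    if region_accuracy(first_image_gray, mask_e) >= region_accuracy(first_image_gray, mask_f):
        return mask_e
    return mask_f
```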
  • Step 205 The terminal determines a region to be processed from a plurality of image regions, and performs a blurring process on an object at the region to be processed to obtain a second image.
  • Step 206 The terminal acquires a target image based on the first image and the second image.
  • This embodiment supplements the steps of recognizing the first image to obtain multiple image regions in the first embodiment.
  • because the image processing method of this embodiment determines the multiple image regions based on the plurality of first recognition regions and the plurality of second recognition regions, the recognition accuracy of each first recognition region and of the second recognition region corresponding to it can be taken into account when obtaining the multiple image regions, so that the obtained image regions are more accurate.
  • an embodiment of the present invention provides another image processing method, which is applied to a terminal. As shown in FIG. 5, the method includes the following steps:
  • Step 301 The terminal acquires a first image to be processed, and recognizes the first image to obtain multiple image regions.
  • Step 302 The terminal receives a first operation on the first image.
  • the terminal may be provided with a display screen, and the first operation may be a user performing a click operation or a slide operation on the first image on the display screen.
  • the terminal may be provided with a voice receiving unit.
  • the first operation may be a user's voice input, so that the voice receiving unit sends the voice signal to the processor after receiving the voice signal, and the processor receives the voice signal for the first image.
  • the first image has multiple image regions, and the user may perform the first operation on any one of them. For example, referring to FIG. 2, when the user wants to highlight region B in the first image, the user can click region B of the first image on the display screen; when the user wants to highlight both region A and region B in the first image, the user can click region A and region B of the first image on the display screen.
  • Step 303 The terminal responds to the first operation and determines an image area corresponding to the first operation from a plurality of image areas to obtain a reference area.
  • the terminal responds to the first operation and determines an image area corresponding to the first operation from a plurality of image areas, and obtaining the reference area may include: if the first operation meets a first preset condition, responding to the first operation and An image region corresponding to the first operation is determined from a plurality of image regions to obtain a reference region.
  • the first preset condition may be that an operation is performed an odd number of times on the same image region. For example, referring to FIG. 2, when the user clicks region B in the first image an odd number of times, the terminal receives the user's first operation, responds to the first operation, determines the image region corresponding to the first operation from the multiple image regions, and obtains the reference region.
  • the processor may further control the display screen to highlight the image region corresponding to the first operation; the highlighting may include changing the color or brightness of the image region corresponding to the first operation, or adding a shadow to that image region. For example, referring to FIG. 2, when the user performs an odd number of click operations on region B of the first image, the display screen increases the brightness of the object in region B.
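  • One possible (purely illustrative) way to highlight the selected region by increasing its brightness is sketched below; the brightness gain is an example value.

```python
import numpy as np

def highlight_region(image, region_mask, brightness_gain=40):
    """Increase the brightness of the image area corresponding to the first operation.

    `brightness_gain` is an example value; changing the colour of the region or
    drawing a shadow would be alternative highlighting styles mentioned above.
    """
    highlighted = image.astype(np.int16)
    highlighted[region_mask > 0] += brightness_gain
    return np.clip(highlighted, 0, 255).astype(np.uint8)
```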
  • the display screen may also provide the user with an "OK" or "Cancel All" selection box, so that the user can confirm the selected image regions or cancel all selected image regions by clicking the corresponding box.
  • the image area corresponding to the first operation is a reference area.
  • in other embodiments, the image region corresponding to the first operation may instead be the region to be processed. That is, the processor can determine whether the first operation meets the first preset condition; if it does, the processor responds to the first operation, determines the image region corresponding to the first operation from the multiple image regions, and obtains the region to be processed; in other words, the processor uses the image region corresponding to the first operation as the region to be processed.
  • after the terminal responds to the first operation and determines the image region corresponding to the first operation from the multiple image regions to obtain the reference region, the terminal may also receive a second operation on the first image; if the second operation satisfies a second preset condition, the terminal responds to the second operation, determines the image region corresponding to the second operation from the multiple image regions, and sets the image region corresponding to the second operation as a region to be processed.
  • the second operation may be the same kind of operation as the first operation, such as an odd number of click operations or an odd number of slide operations, and the second preset condition may be that the image region corresponding to the second operation is the same as the image region corresponding to the first operation. For example, referring to FIG. 2, after the user clicks region B in the first image, if the user wants to cancel the selection of region B, the user can click region B again.
  • in other embodiments, when the terminal determines that the second operation satisfies the second preset condition, the terminal may change the image region corresponding to that operation from a region to be processed to a reference region.
  • Step 304 The terminal determines, from a plurality of image regions, a region other than the reference region as a region to be processed.
  • the region to be processed is determined based on the reference region.
  • the reference region may be determined based on the region to be processed, that is, from a plurality of image regions, a region other than the region to be processed is determined as the reference region.
  • Step 305 The terminal performs a blurring process on the object at the area to be processed to obtain a second image.
  • blurring the object at the region to be processed to obtain the second image may be implemented by using a morphological filtering method to blur the object at the region to be processed.
  • the morphological filtering method may process the region to be processed using at least one of an erosion operation, a dilation operation, an opening operation, and a closing operation.
  • Step 306 The terminal obtains a target image based on the first image and the second image.
  • This embodiment supplements the step of determining a region to be processed from a plurality of image regions in the first embodiment.
  • the terminal can receive the first operation for the first image, use the image area corresponding to the first operation as the reference area, and determine that the area other than the reference area is the area to be processed. Therefore, the terminal can determine the area to be processed of the image and blur the area to be processed according to the user's selection, so that the user can determine the area to be processed according to his actual needs.
  • an embodiment of the present invention provides an image processing method, which is applied to a terminal. As shown in FIG. 6, the method includes the following steps:
  • Step 401 The terminal acquires a third image to be processed.
  • the image collector collects the third image and sends it to the processor, and the processor acquires the third image sent by the image collector; alternatively, the processor may acquire the third image by reading a third image stored in a memory connected to the processor through the communication bus.
  • a preset function may be selected to enable the image processing method in this embodiment.
  • Step 402 The terminal filters and denoises the third image to obtain a first image.
  • filtering and denoising the third image may be: denoising the third image by using a median filtering method.
  • the median filtering method is a non-linear smoothing technique. It sets the gray value of each pixel to the median of the gray values of all pixels in a neighborhood window at that point.
  • filtering and denoising the third image may be: using a mean filtering method to denoise the third image.
  • mean filtering is a typical linear filtering algorithm. It assigns a template to the target pixel in the image; the template consists of the neighboring pixels around it (for example, the 8 pixels surrounding the target pixel, forming a filtering template that excludes the target pixel itself), and the original pixel value is then replaced by the average of all pixels in the template.
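  • As a concrete illustration of the two denoising options, OpenCV provides both filters directly; the 5×5 window size is an example choice, and note that OpenCV's box filter averages over a window that includes the center pixel, unlike the template described above that excludes it.

```python
import cv2

third_image = cv2.imread("third_image.jpg")  # hypothetical captured image

# Median filtering: each pixel is replaced by the median of its neighbourhood window.
first_image_median = cv2.medianBlur(third_image, 5)

# Mean filtering: each pixel is replaced by the average over a 5x5 template.
# (OpenCV's box filter includes the centre pixel in the average.)
first_image_mean = cv2.blur(third_image, (5, 5))
```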
  • Step 403 The terminal acquires a first image to be processed, and recognizes the first image to obtain multiple image regions.
  • the first image to be processed is obtained by filtering and denoising the third image.
  • Step 404 The terminal determines a region to be processed from a plurality of image regions, and performs a blurring process on an object at the region to be processed to obtain a second image.
  • Step 405 The terminal acquires a target image based on the first image and the second image.
  • This embodiment further supplements the step of acquiring the first image to be processed in the first embodiment.
  • because the first image in the image processing method of this embodiment is obtained by filtering and denoising the third image, interference from image noise can be avoided when recognizing the first image, making the recognition result more accurate.
  • an embodiment of the present invention provides another image processing method, which is applied to a terminal. As shown in FIG. 7, the method includes the following steps:
  • Step 501 The terminal acquires a third image to be processed.
  • Step 502 The terminal filters and denoises the third image to obtain a first image.
  • Step 503 The terminal acquires a first image to be processed.
  • Step 504 The terminal uses the first detection method to recognize the edges of the objects in the first image, and obtains multiple first recognition areas.
  • Step 505 The terminal uses a second detection method to recognize the edge of the object in the first image, and obtains a plurality of second recognition regions.
  • This embodiment does not limit the sequence of steps 504 and 505.
  • the processor may execute step 504 and then execute 505; or may execute step 505 and then execute step 504; or step 504 and step 505 may be performed simultaneously.
  • Step 506 The terminal determines a plurality of image regions based on the plurality of first identification regions and the plurality of second identification regions.
  • Step 507 The terminal receives a first operation on the first image.
  • Step 508 The terminal responds to the first operation and determines an image area corresponding to the first operation from a plurality of image areas to obtain a reference area.
  • Step 509 The terminal determines, from the plurality of image areas, the area other than the reference area as the area to be processed.
  • Step 510 The terminal performs a blurring process on an object at the region to be processed to obtain a second image.
  • Step 511 The terminal acquires a target image based on the first image and the second image.
  • an image is captured through the image collector of the terminal, the captured image is median-filtered and denoised, and the filtered image is saved; edge detection and multi-gradient detection are then performed on the filtered image, and the two detection results are fused using the Bayesian method to ensure detection accuracy and to accurately distinguish the scene from the background.
  • a preview screen can then appear on the display screen, in which the different scenes in the preview picture are already distinguished from the background by multiple virtual frames containing the scenes; the user can then select touch points as needed, and can select touch points for one or more scenes.
  • the image processing method provided in this embodiment can avoid the situation in which, when a specific object in an image is blurred, excessive blurring leaves the user unable to recognize the blurred object; it can take into account the recognition accuracy of each first recognition region and of the second recognition region corresponding to it to obtain multiple image regions, so that the obtained image regions are more accurate; it can determine the region to be processed according to the user's selection and blur that region, so that the user can determine the region to be processed according to actual needs; and it can avoid interference from image noise, making the recognition result more accurate.
  • an embodiment of the present invention provides a terminal 6.
  • the terminal may be applied to an image processing method provided by the embodiments corresponding to FIGS. 1 and 4 to 7.
  • the terminal may include: a processor 61, a memory 62, and a communication bus 63, where:
  • the communication bus 63 is used to implement a communication connection between the processor 61 and the memory 62.
  • the processor 61 is configured to execute a program of an image processing method stored in the memory 62 to implement the following steps:
  • a target image is acquired based on the first image and the second image.
  • the processor 61 is configured to execute the identification of the first image stored in the memory 62 to obtain multiple image regions, so as to implement the following steps:
  • the processor 61 is configured to perform recognition of an edge of an object in the first image stored in the memory 62 to obtain a plurality of image regions to implement the following steps:
  • a plurality of image areas are determined.
  • the processor 61 is configured to execute a plurality of image regions based on the plurality of first identification regions and the plurality of second identification regions stored in the memory 62 to implement the following steps:
  • a plurality of image regions are determined based on a plurality of first recognition regions, a plurality of second recognition regions, an accuracy rate of each first recognition region, and an accuracy rate of each second recognition region.
  • the processor 61 is configured to execute the determination of a region to be processed from a plurality of image regions stored in the memory 62 to implement the following steps:
  • a region other than the reference region is determined as a region to be processed.
  • the processor 61 is configured to execute a response to the first operation stored in the memory 62 and determine an image region corresponding to the first operation from a plurality of image regions to obtain a reference region to implement the following steps:
  • a reference region is obtained.
  • the processor 61 is configured to execute a response to the first operation stored in the memory 62 and determine an image region corresponding to the first operation from a plurality of image regions; after obtaining the reference region, the following steps are implemented:
  • the processor 61 is configured to execute the acquisition of the target image based on the first image and the second image stored in the memory 62 to implement the following steps:
  • the target image is obtained by fusing the image of the region corresponding to the region to be processed with the second image.
  • the processor 61 is configured to execute acquiring the first image stored in the memory 62 to implement the following steps:
  • an embodiment of the present invention provides a computer-readable storage medium.
  • the computer-readable storage medium stores one or more programs, and the one or more programs can be executed by one or more processors to implement The following steps:
  • a target image is acquired based on the first image and the second image.
  • the one or more programs may be executed by one or more processors to identify the first image to obtain multiple image regions, so as to implement the following steps:
  • the one or more programs may be executed by one or more processors to recognize edges of objects in the first image to obtain a plurality of image regions to implement the following steps:
  • a plurality of image areas are determined.
  • the one or more programs may be executed by one or more processors to determine a plurality of image regions based on a plurality of first recognition regions and a plurality of second recognition regions to implement the following steps:
  • a plurality of image regions are determined based on a plurality of first recognition regions, a plurality of second recognition regions, an accuracy rate of each first recognition region, and an accuracy rate of each second recognition region.
  • the one or more programs may be executed by one or more processors to determine a region to be processed from a plurality of image regions to implement the following steps:
  • a region other than the reference region is determined as a region to be processed.
  • the one or more programs may be executed by one or more processors in response to the first operation and determining an image region corresponding to the first operation from a plurality of image regions to obtain a reference region to Implement the following steps:
  • a reference region is obtained.
  • the one or more programs may be executed by one or more processors in response to the first operation to determine an image region corresponding to the first operation from a plurality of image regions; after obtaining the reference region, the following steps are implemented:
  • the one or more programs may be executed by one or more processors to obtain a target image based on the first image and the second image to implement the following steps:
  • the target image is obtained by fusing the image of the region corresponding to the region to be processed with the second image.
  • the one or more programs may be executed by one or more processors to acquire the first image to implement the following steps:
  • the above processor may be at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a central processing unit (CPU), a controller, a microcontroller, and a microprocessor. Understandably, the electronic device that implements the foregoing processor functions may also be of another type, which is not specifically limited in the embodiments of the present application.
  • the computer storage medium / memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a ferromagnetic random access memory (FRAM), a flash memory, a magnetic surface memory, a compact disc, a compact disc read-only memory (CD-ROM), or another type of memory; it may also be any terminal that includes one or any combination of the above memories, such as a mobile phone, a computer, a tablet device, or a personal digital assistant.
  • the methods in the above embodiments can be implemented by means of software plus a necessary universal hardware platform, and of course, also by hardware, but in many cases the former is better.
  • based on such an understanding, the part of the technical solution of this application that is essential or that contributes to the existing technology can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the embodiments of the present application.
  • these computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • these computer program instructions can also be loaded onto a computer or other programmable data processing device, so that a series of operation steps are performed on the computer or other programmable device to produce a computer-implemented process, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

Disclosed by the embodiments of the present invention is an image processing method, the method comprising: acquiring a first image to be processed, and performing recognition on the first image to obtain a plurality of image regions; determining a region to be processed from among the plurality of image regions, and blurring an object in the region to be processed to obtain a second image; and acquiring a target image on the basis of the first image and the second image. Further simultaneously disclosed by the embodiments of the present invention are a terminal and a computer storage medium.

Description

Image processing method, terminal, and computer storage medium
This application claims priority to Chinese patent application CN201810955721.6, entitled "An Image Processing Method, Terminal, and Computer Storage Medium" and filed on August 21, 2018, the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to, but is not limited to, the field of image processing technologies, and in particular, to an image processing method, a terminal, and a computer storage medium.
Background
With the rapid development of mobile phones, cameras, and other terminals, more and more people use a terminal's image processing functions to process images, and people's demands on the diversity and convenience of these functions are growing, in order to obtain good display effects such as color balance, saturation adjustment, and background blurring.
In the related art, in order to highlight a certain object in an image, the objects other than that object are blurred so that the object that has not been blurred stands out. However, when the related art blurs a specific object in an image, excessive blurring may occur, leaving the user unable to recognize the blurred object.
Summary of the Invention
In view of this, embodiments of the present invention aim to provide an image processing method, a terminal, and a computer storage medium, so as to avoid the situation in which, when a specific object in an image is blurred, excessive blurring leaves the user unable to recognize the blurred object.
To achieve the above object, the technical solution of the present invention is implemented as follows:
An image processing method, the method including:
acquiring a first image to be processed, and recognizing the first image to obtain multiple image regions;
determining a region to be processed from the multiple image regions, and blurring an object at the region to be processed to obtain a second image;
acquiring a target image based on the first image and the second image.
A terminal, the terminal including: a processor, a memory, and a communication bus;
the communication bus being used to implement a communication connection between the processor and the memory;
the processor being configured to execute a program of an image processing method stored in the memory to implement the following steps:
acquiring a first image to be processed, and recognizing the first image to obtain multiple image regions;
determining a region to be processed from the multiple image regions, and blurring an object at the region to be processed to obtain a second image;
acquiring a target image based on the first image and the second image.
A computer storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the steps of the image processing method described above.
The image processing method, the terminal, and the computer storage medium provided in the embodiments of the present invention acquire a first image to be processed and recognize the first image to obtain multiple image regions; determine a region to be processed from the multiple image regions and blur the object at the region to be processed to obtain a second image; and acquire a target image based on the first image and the second image. Because the embodiments of the present invention blur only the region to be processed of the first image to obtain the second image and then combine the second image with the first image, they avoid the situation in which excessive blurring of a specific object in the image leaves the user unable to recognize the blurred object.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of an image processing method according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a terminal according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an image change process according to an embodiment of the present invention;
FIG. 4 is a schematic flowchart of another image processing method according to an embodiment of the present invention;
FIG. 5 is a schematic flowchart of yet another image processing method according to an embodiment of the present invention;
FIG. 6 is a schematic flowchart of an image processing method according to another embodiment of the present invention;
FIG. 7 is a schematic flowchart of another image processing method according to another embodiment of the present invention;
FIG. 8 is a schematic flowchart of an implementation of the image processing method according to an embodiment of the present invention;
FIG. 9 is a schematic structural diagram of another terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention.
It should be understood that "an embodiment of the present invention" or "the foregoing embodiment" mentioned throughout the specification means that a particular feature, structure, or characteristic related to the embodiment is included in at least one embodiment of the present invention. Therefore, "in the embodiments of the present invention" or "in the foregoing embodiments" appearing throughout the specification does not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the various embodiments of the present invention, the size of the sequence numbers of the above processes does not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention. The sequence numbers of the foregoing embodiments of the present invention are only for description and do not represent the superiority or inferiority of the embodiments.
It should be noted that the image processing method of any embodiment of the present invention is applied to a terminal, where the terminal may be a mobile phone, a computer, a camera, a tablet computer, or the like. The present invention does not limit the terminal, as long as the terminal can implement the image processing function of any embodiment of the invention.
In image processing, in order to give an image more depth, it is often necessary to blur the background of the image. For example, when the image contains a portrait, the background other than the portrait is blurred, so that the image highlights the portrait.
An embodiment of the present invention provides an image processing method, which is applied to a terminal. As shown in FIG. 1, the method includes the following steps:
Step 101: The terminal acquires a first image to be processed, and recognizes the first image to obtain multiple image regions.
As shown in FIG. 2, the terminal may include an image processing module, a camera module, and an image display module, and the image processing module may further include an image preprocessing module and an image scene enhancement module. In this embodiment, the image processing module may be a processor, the camera module may be an image collector such as a camera, and the image display module may be a display screen. The step of recognizing the first image to obtain multiple image regions and the following steps 102 and 103 are executed by the image scene enhancement module.
There are multiple ways to obtain the first image. For example, the image collector collects the first image and sends it to the processor, and the processor obtains the first image sent by the image collector; alternatively, the processor may acquire the first image by reading a first image stored in a memory connected to the processor through the communication bus. Optionally, before using the camera to collect the first image, the user may choose to enable a preset function to implement the image processing method in this embodiment.
The first image may include multiple objects, and each object corresponds to one image region. An object may be anything in the image, for example a portrait, a person's eyes, or a large tree.
Recognizing the first image to obtain multiple image regions may include: recognizing the edges of the objects in the first image to obtain the multiple image regions.
After the edges of the objects in the first image are recognized, the recognition result may also be output on the display screen of the terminal; for example, the edges of the objects may be drawn as dashed or solid lines. Outputting the recognition result on the display screen lets the user clearly see the result of the image recognition, which facilitates the user's subsequent operations. In another embodiment, after the edges of the objects in the first image are recognized, the display screen still shows the same image as the first image, but the processor has already obtained the multiple image regions. In yet another embodiment, the display screen may display the multiple image regions with a preset distance between every two adjacent image regions.
The processor may recognize the edge of every object in the first image to obtain the multiple image regions and output all recognition results on the display screen. In this way, every object region in the image is identified, so that no object in the first image, and hence no key information in the image, is missed. In another embodiment, the processor may recognize the edge of every object in the first image to obtain the multiple image regions, compute the ratio of the area of each image region to the area of the first image, and display on the screen only the recognition results of the image regions whose area ratio exceeds a preset value. For example, for an image containing a portrait, only the recognition result of the portrait's head may be displayed, not the recognition result of the portrait's eyes; likewise, for an image containing a large tree, only the recognition result of the tree may be displayed, not the recognition result of every single leaf. This prevents the recognition result of every object from being shown on the display screen, which would both clutter the display and make it hard for the user to select image regions that are too small.
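By way of non-limiting illustration, the area-ratio screening described above might be sketched in Python as follows; the mask representation, the function name filter_regions_by_area, and the threshold value are assumptions made for the example rather than part of the original disclosure:

import numpy as np

def filter_regions_by_area(region_masks, image_shape, ratio_threshold=0.01):
    # region_masks: list of boolean arrays (one per recognized region);
    # image_shape: (height, width) of the first image.
    image_area = image_shape[0] * image_shape[1]
    kept = []
    for mask in region_masks:
        region_area = int(np.count_nonzero(mask))
        if region_area / image_area > ratio_threshold:
            kept.append(mask)  # large enough to display and to select by touch
    return kept

Only the regions returned by such a filter would then have their recognition results drawn on the display screen.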
Recognizing the edges of the objects in the first image to obtain multiple image regions can be implemented in several ways, for example by an edge detection method, by a multi-gradient detection method, or by a combination of edge detection and multi-gradient detection.
The edge detection method may be any one of: an edge detection method based on the Roberts operator, the Sobel operator, the Prewitt operator, the Laplace operator, the Laplacian of Gaussian (LOG) operator, or the Canny operator; a wavelet analysis method; a fuzzy algorithm; or an artificial neural network method.
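By way of non-limiting illustration, the Canny-operator variant listed above might be sketched with OpenCV as follows; the threshold values and the use of cv2.findContours to turn the edge map into candidate regions are assumptions made for the example only:

import cv2

def detect_edge_regions(first_image_bgr):
    gray = cv2.cvtColor(first_image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)                      # binary edge map
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x signature
    return contours                                        # one contour per candidate region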
The specific steps of the multi-gradient detection method are as follows: extract the luminance component Y, the blue chrominance component Cb, the red chrominance component Cr, and the depth component D of each pixel in the first image; compute the gradient image under each component; fuse the computed gradient images for the different directions θ (θ may take the values 0, π/8, π/4, 3π/8, ..., 7π/8); after fusion, take the maximum gradient value over the directions as the final gradient of the pixel (luminance gradient G_y, color gradients G_cb and G_cr, depth gradient G_d); fuse these four gradients to obtain a fused gradient; and obtain the multiple image regions based on the fused gradient. The four gradients may be fused by linear weighting, with the formula (reproduced in the application as Figure PCTCN2019090079-appb-000001):
G_mix(x, y, θ) = Σ_{i=1}^{4} α_i · G_i(x, y, θ), i = 1, 2, 3, 4,
where G_mix(x, y, θ) is the fused gradient of pixel (x, y) in direction θ, α_i is the weight assigned to the i-th gradient, and G_i(x, y, θ) is the gradient of pixel (x, y) in direction θ before linear fusion. In other embodiments, the multiple image regions may be obtained based on any two of the luminance component Y, the color components Cb and Cr, the depth component D, and other components; the specific steps are similar to the above and are not repeated here.
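By way of non-limiting illustration, the linear-weighted fusion above might be sketched as follows; approximating the directional gradients with rotated Sobel responses and using example weights are assumptions made for the sketch:

import numpy as np
import cv2

def directional_gradient(channel, theta):
    gx = cv2.Sobel(channel, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(channel, cv2.CV_32F, 0, 1, ksize=3)
    return np.abs(gx * np.cos(theta) + gy * np.sin(theta))

def fused_gradient(components, alphas, thetas=np.arange(8) * np.pi / 8):
    # components: the Y, Cb, Cr and D planes as float32 arrays; alphas: 4 weights.
    fused_per_theta = []
    for theta in thetas:
        g_mix = sum(a * directional_gradient(c, theta)
                    for a, c in zip(alphas, components))   # G_mix(x, y, theta)
        fused_per_theta.append(g_mix)
    return np.max(np.stack(fused_per_theta), axis=0)       # maximum over directions

The fused gradient map could then be thresholded or segmented to obtain the image regions, a step this sketch leaves out.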
Step 102: The terminal determines a region to be processed from the multiple image regions, and blurs the objects in the region to be processed to obtain a second image.
The region to be processed is the portion of the multiple image regions that needs to be blurred. The region to be processed may be a single image region or at least two image regions. For example, when M image regions are recognized, the region to be processed may consist of m image regions, where 1 ≤ m ≤ M.
Determining the region to be processed from the multiple image regions can be implemented in several ways. For example, the processor may identify, among the multiple image regions, image regions having preset features, such as a portrait region, an animal region, or a plant region, and use the identified image regions having the preset features as the region to be processed. As another example, the processor may obtain an operation instruction and use the image region corresponding to the operation instruction as the region to be processed, or use the image regions other than the image region corresponding to the operation instruction as the region to be processed.
Blurring the objects in the region to be processed to obtain the second image may be implemented by using a morphological filtering method to blur the objects in the region to be processed. The morphological filtering method may process the region to be processed using at least one of erosion, dilation, opening, and closing operations. It should be understood that the second image has the same size as the first image.
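By way of non-limiting illustration, blurring only the region to be processed with morphological operations might look as follows; the kernel size and the particular combination of opening and closing are assumptions made for the example:

import cv2

def blur_region_morphologically(first_image, region_mask, ksize=9):
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (ksize, ksize))
    smoothed = cv2.morphologyEx(first_image, cv2.MORPH_OPEN, kernel)
    smoothed = cv2.morphologyEx(smoothed, cv2.MORPH_CLOSE, kernel)
    second_image = first_image.copy()             # same size as the first image
    second_image[region_mask] = smoothed[region_mask]
    return second_image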
Step 103: The terminal obtains a target image based on the first image and the second image.
The target image may be obtained by fusing the first image and the second image.
In this embodiment, obtaining the target image based on the first image and the second image may include: fusing the image of the region in the first image that corresponds to the region to be processed with the second image to obtain the target image. Specifically, the processor may fuse the blurred image region of the second image with the region to be processed of the first image to obtain a fused image region, and combine the fused image region with the reference region of the first image other than the region to be processed, or with the image region of the second image that was not blurred, to obtain the target image.
The image fusion may use one of, or a combination of at least two of: an image fusion method based on pixel gray levels, an image fusion method based on principal component analysis (PCA) transformation, a fusion method based on hue, intensity, and saturation (HIS) transformation, and an image fusion method based on multi-resolution analysis. The image fusion method based on pixel gray levels may be a maximum-gray-value fusion method, a minimum-gray-value fusion method, or a weighted-average fusion method.
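By way of non-limiting illustration, the weighted-average variant of pixel-level fusion might be sketched as follows; the weight value and the restriction of the blend to the region to be processed are assumptions made for the example:

import cv2

def fuse_images(first_image, second_image, region_mask, w=0.3):
    blended = cv2.addWeighted(first_image, w, second_image, 1.0 - w, 0)
    target = first_image.copy()                   # reference regions stay unchanged
    target[region_mask] = blended[region_mask]    # softened, but still recognizable
    return target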
After obtaining the target image, the processor may output the target image to the display screen of the terminal.
To explain more clearly how the embodiment of the present invention processes the first image to obtain the target image, refer to FIG. 3. The first image is recognized to obtain four image regions: region A1, region B1, region C1, and region D1; that is, the first image I includes regions A1, B1, C1, and D1. After regions A1 and B1 are determined to be the region to be processed, regions C1 and D1 are the reference regions of the first image other than the region to be processed. The processor then blurs the objects in regions A1 and B1 to obtain a second image II, which includes regions A2, B2, C1, and D1, where regions A2 and B2 are the results of blurring regions A1 and B1 respectively. Regions A1 and B1 are then fused with regions A2 and B2 to obtain a fused image region, and the fused image region is combined with regions C1 and D1 to obtain the target image III.
Because the embodiment of the present invention blurs the region to be processed of the first image to obtain the second image and then combines the second image, in which the region to be processed has been blurred, with the first image, it avoids the situation in which a specific object in the image is blurred so excessively that the user can no longer recognize the blurred object.
Based on the foregoing embodiment, an embodiment of the present invention provides an image processing method applied to a terminal. As shown in FIG. 4, the method includes the following steps:
Step 201: The terminal acquires a first image to be processed.
There are multiple ways to obtain the first image. For example, the image collector collects the first image and sends it to the processor, and the processor receives the first image from the image collector; alternatively, the processor may obtain the first image by reading it from a memory connected to the processor via a communication bus. Optionally, before using the camera to collect the first image, the user may choose to enable a preset function so that the image processing method of this embodiment is applied.
Step 202: The terminal uses a first detection method to recognize the edges of the objects in the first image and obtains multiple first recognition regions.
The first detection method is used to recognize the edges of the objects in the first image.
In this embodiment, the first detection method may be an edge detection method.
The edge detection method may be any one of: an edge detection method based on the Roberts operator, the Sobel operator, the Prewitt operator, the Laplace operator, the Laplacian of Gaussian (LOG) operator, or the Canny operator; a wavelet analysis method; a fuzzy algorithm; or an artificial neural network method.
Step 203: The terminal uses a second detection method to recognize the edges of the objects in the first image and obtains multiple second recognition regions.
The second detection method is used to recognize the edges of the objects in the first image. The first detection method and the second detection method are two different detection methods.
In this embodiment, the second detection method may be a multi-gradient detection method; for its specific steps, refer to the related description in the first embodiment, which is not repeated here. In other embodiments, the second detection method may also be an edge detection method different from the first detection method.
This embodiment does not limit the order of steps 202 and 203. The processor may perform step 202 first and then step 203, perform step 203 first and then step 202, or perform steps 202 and 203 simultaneously.
Step 204: The terminal determines multiple image regions based on the multiple first recognition regions and the multiple second recognition regions.
Obtaining the multiple image regions based on the first recognition result and the second recognition result includes: fusing the first recognition result and the second recognition result using a Bayesian probability method to obtain the multiple image regions. Fusing the multiple first recognition regions obtained with the first detection method and the multiple second recognition regions obtained with the second detection method in a Bayesian manner improves the accuracy of the recognized image regions.
In this embodiment, the multiple image regions may be determined as follows: determine the accuracy of each first recognition region based on the display parameters of the pixels in that first recognition region; determine the accuracy of each second recognition region based on the display parameters of the pixels in that second recognition region; and determine the multiple image regions based on the multiple first recognition regions, the multiple second recognition regions, the accuracy of each first recognition region, and the accuracy of each second recognition region.
The display parameters may include at least one of a luminance parameter, a depth parameter, and a color parameter. The specific procedure is described below with the luminance parameter as the example display parameter; it should be understood that the procedure is similar when the display parameters are other parameters, and the details are not repeated here. The processor may obtain the luminance value of each pixel and build a statistics table of the pixel luminance values of the first image; determine the accuracy of each first recognition region based on the display parameters of the pixels in that first recognition region and the luminance statistics table; determine the accuracy of each second recognition region based on the display parameters of the pixels in that second recognition region and the luminance statistics table; and determine the multiple image regions based on the multiple first recognition regions, the multiple second recognition regions, the accuracy of each first recognition region, and the accuracy of each second recognition region. For example, when the accuracy of region E among the multiple first recognition regions is higher than the accuracy of region F, the region corresponding to region E among the multiple second recognition regions, region E is taken as the recognized image region; otherwise, region F is taken as the recognized image region.
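By way of non-limiting illustration, choosing between corresponding regions E and F by comparing their accuracies might be sketched as follows; the accuracy score used here, based on how well a region's luminance values match the dominant bins of the whole-image luminance statistics table, is only an assumed placeholder and not the measure prescribed by the original disclosure:

import numpy as np

def region_accuracy(luma, mask, image_histogram):
    dominant_bins = np.argsort(image_histogram)[-32:]      # assumed proxy measure
    return float(np.isin(luma[mask], dominant_bins).mean())

def choose_regions(luma, first_regions, second_regions):
    hist, _ = np.histogram(luma, bins=256, range=(0, 256))
    chosen = []
    for mask_e, mask_f in zip(first_regions, second_regions):
        if region_accuracy(luma, mask_e, hist) >= region_accuracy(luma, mask_f, hist):
            chosen.append(mask_e)                          # keep region E
        else:
            chosen.append(mask_f)                          # otherwise keep region F
    return chosen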
Step 205: The terminal determines a region to be processed from the multiple image regions, and blurs the objects in the region to be processed to obtain a second image.
Step 206: The terminal obtains a target image based on the first image and the second image.
This embodiment elaborates on the step, in the first embodiment, of recognizing the first image to obtain multiple image regions. For the steps and content of this embodiment that are the same as in the foregoing embodiment, refer to the description in the foregoing embodiment; they are not repeated here.
Because the image processing method of this embodiment determines the recognized image regions based on the multiple first recognition regions and the multiple second recognition regions, it can combine the recognition accuracy of each first recognition region and of the corresponding second recognition region, so that the multiple image regions obtained are more accurate.
Based on the foregoing embodiments, an embodiment of the present invention provides yet another image processing method applied to a terminal. As shown in FIG. 5, the method includes the following steps:
Step 301: The terminal acquires a first image to be processed, and recognizes the first image to obtain multiple image regions.
Step 302: The terminal receives a first operation on the first image.
Specifically, the terminal may be provided with a display screen, and the first operation may be a tap or swipe performed by the user on the first image shown on the display screen. Alternatively, the terminal may be provided with a voice receiving unit, and the first operation may be a voice input from the user; after receiving the voice signal, the voice receiving unit sends it to the processor, and the processor receives the voice signal directed at the first image.
The first image has multiple image regions, and the user may perform the first operation on any one of them. For example, referring to FIG. 2, when the user wants to highlight region B in the first image, the user may tap region B of the first image on the display screen; and when the user wants to highlight both regions A and B in the first image, the user may tap regions A and B of the first image on the display screen.
Step 303: The terminal responds to the first operation and determines, from the multiple image regions, the image region corresponding to the first operation to obtain a reference region.
In this embodiment, responding to the first operation and determining, from the multiple image regions, the image region corresponding to the first operation to obtain the reference region may include: if the first operation satisfies a first preset condition, responding to the first operation and determining, from the multiple image regions, the image region corresponding to the first operation to obtain the reference region.
The first preset condition may be that the same image region is operated on an odd number of times. For example, referring to FIG. 2, when the user taps region B in the first image an odd number of times, the terminal receives the user's odd number of first operations on region B, responds to the first operation, and determines, from the multiple image regions, the image region corresponding to the first operation to obtain the reference region.
Optionally, after responding to the first operation, the processor may further control the display screen to highlight the image region corresponding to the first operation. Highlighting may include changing the color or brightness of that image region, or applying shading to it. For example, referring to FIG. 2, when the user taps region B of the first image an odd number of times, the display screen increases the brightness of the objects in region B.
Optionally, the display screen may also provide the user with a "Confirm" or "Cancel all" selection box, so that by tapping the selection box the user can confirm the selected image regions or cancel all of them.
In this embodiment, the image region corresponding to the first operation is the reference region. In other embodiments, the image region corresponding to the first operation may instead be the region to be processed. That is, the processor may determine whether the first operation satisfies the first preset condition; if it does, the processor responds to the first operation and determines, from the multiple image regions, the image region corresponding to the first operation to obtain the region to be processed. In other words, the processor uses the image region corresponding to the first operation as the region to be processed.
In this embodiment, after the terminal responds to the first operation and determines, from the multiple image regions, the image region corresponding to the first operation to obtain the reference region, the terminal may further receive a second operation on the first image; if the second operation satisfies a second preset condition, the terminal responds to the second operation, determines, from the multiple image regions, the image region corresponding to the second operation, and sets that image region as the region to be processed.
The second operation may be the same kind of operation as the first operation, for example an odd number of taps or an odd number of swipes, and the second preset condition is that the image region corresponding to the second operation is the same as the image region corresponding to the first operation. For example, referring to FIG. 2, after tapping region B in the first image, if the user wants to cancel the selection of region B, the user simply taps region B again.
Similarly, in other embodiments, when the terminal determines that the second operation satisfies the second preset condition, it may change the image region corresponding to the operation from the region to be processed to the reference region.
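By way of non-limiting illustration, the odd/even tap bookkeeping described above might be sketched as follows; representing regions by identifiers in a set is an assumption made for the example:

def toggle_selection(selected_regions, tapped_region_id):
    if tapped_region_id in selected_regions:      # even-numbered tap: cancel the selection
        selected_regions.remove(tapped_region_id)
    else:                                         # odd-numbered tap: select the region
        selected_regions.add(tapped_region_id)
    return selected_regions

def region_to_process(all_region_ids, selected_regions):
    # In this embodiment, everything the user did not keep selected is blurred.
    return set(all_region_ids) - selected_regions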
Step 304: The terminal determines, from the multiple image regions, the regions other than the reference region as the region to be processed.
In this embodiment, the region to be processed is determined based on the reference region. In other embodiments, the reference region may be determined based on the region to be processed; that is, from the multiple image regions, the regions other than the region to be processed are determined as the reference region.
Step 305: The terminal blurs the objects in the region to be processed to obtain a second image.
Blurring the objects in the region to be processed to obtain the second image may be implemented by using a morphological filtering method to blur the objects in the region to be processed. The morphological filtering method may process the region to be processed using at least one of erosion, dilation, opening, and closing operations.
Step 306: The terminal obtains a target image based on the first image and the second image.
This embodiment elaborates on the step, in the first embodiment, of determining the region to be processed from the multiple image regions. For the steps and content of this embodiment that are the same as in the foregoing embodiments, refer to the description in the foregoing embodiments; they are not repeated here.
With the image processing method of this embodiment, the terminal can receive the first operation on the first image, use the image region corresponding to the first operation as the reference region, and determine the regions other than the reference region as the region to be processed. The terminal therefore determines the region of the image to be processed according to the user's selection and blurs that region, so that the user can choose the region to be processed according to his or her actual needs.
Based on the foregoing embodiments, an embodiment of the present invention provides an image processing method applied to a terminal. As shown in FIG. 6, the method includes the following steps:
Step 401: The terminal acquires a third image to be processed.
There are multiple ways to obtain the third image. For example, the image collector collects the third image and sends it to the processor, and the processor receives the third image from the image collector; alternatively, the processor may obtain the third image by reading it from a memory connected to the processor via a communication bus. Optionally, before using the camera to collect the third image, the user may choose to enable a preset function so that the image processing method of this embodiment is applied.
Step 402: The terminal filters and denoises the third image to obtain the first image.
This step is performed by the image preprocessing module. There are multiple ways to filter and denoise the third image. In this embodiment, the third image may be denoised by median filtering. Median filtering is a non-linear smoothing technique that sets the gray value of each pixel to the median of the gray values of all pixels in a neighborhood window around that pixel. In other embodiments, the third image may be denoised by mean filtering. Mean filtering is a typical linear filtering algorithm: a template is placed on the target pixel of the image, the template consisting of the neighboring pixels around it (the eight pixels surrounding the target pixel form the filtering template, i.e., the target pixel itself is excluded), and the original pixel value is then replaced by the average of all pixels in the template.
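By way of non-limiting illustration, the median-filtering variant of this preprocessing step might be sketched as follows; the 5x5 aperture is an example value, and cv2.blur is shown only as the mean-filtering alternative:

import cv2

def preprocess(third_image):
    first_image = cv2.medianBlur(third_image, 5)   # each pixel -> neighborhood median
    # Mean-filtering alternative: first_image = cv2.blur(third_image, (3, 3))
    return first_image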
Step 403: The terminal acquires the first image to be processed, and recognizes the first image to obtain multiple image regions.
The first image to be processed here is the result of filtering and denoising the third image.
Step 404: The terminal determines a region to be processed from the multiple image regions, and blurs the objects in the region to be processed to obtain a second image.
Step 405: The terminal obtains a target image based on the first image and the second image.
This embodiment further elaborates on the step, in the first embodiment, of acquiring the first image to be processed. For the steps and content of this embodiment that are the same as in the foregoing embodiments, refer to the description in the foregoing embodiments; they are not repeated here.
Because the first image in the image processing method of this embodiment is obtained from the third image after filtering and denoising, interference from image noise is avoided when the first image is recognized, making the recognition result more accurate.
Based on the foregoing embodiments, an embodiment of the present invention provides another image processing method applied to a terminal. As shown in FIG. 7, the method includes the following steps:
Step 501: The terminal acquires a third image to be processed.
Step 502: The terminal filters and denoises the third image to obtain a first image.
Step 503: The terminal acquires the first image to be processed.
Step 504: The terminal uses a first detection method to recognize the edges of the objects in the first image and obtains multiple first recognition regions.
Step 505: The terminal uses a second detection method to recognize the edges of the objects in the first image and obtains multiple second recognition regions.
This embodiment does not limit the order of steps 504 and 505. The processor may perform step 504 first and then step 505, perform step 505 first and then step 504, or perform steps 504 and 505 simultaneously.
Step 506: The terminal determines multiple image regions based on the multiple first recognition regions and the multiple second recognition regions.
Step 507: The terminal receives a first operation on the first image.
Step 508: The terminal responds to the first operation and determines, from the multiple image regions, the image region corresponding to the first operation to obtain a reference region.
Step 509: The terminal determines, from the multiple image regions, the regions other than the reference region as the region to be processed.
Step 510: The terminal blurs the objects in the region to be processed to obtain a second image.
Step 511: The terminal obtains a target image based on the first image and the second image.
For the steps and content of this embodiment that are the same as in the foregoing embodiments, refer to the description in the foregoing embodiments; they are not repeated here.
The image processing method of this embodiment is described below with reference to a specific implementation. Referring to FIG. 8, in this embodiment an image is first captured by the image collector of the terminal, the captured image is denoised by median filtering, and the filtered image is saved. Edge detection and multi-gradient detection are then performed on the filtered image, and the two detection results are fused using a Bayesian method, which ensures the accuracy of the detection and accurately separates the scene objects from the background. At this point a preview can appear on the display screen, in which the different scene objects have already been distinguished from the background and are enclosed by multiple dashed boxes. The user can then make touch selections as needed, on one or more scene objects: tapping the dashed box of a scene object an odd number of times means that object is to be enhanced, and tapping it an even number of times cancels the enhancement. Based on the user's taps, the terminal segments the background from the target scene objects in the image and passes the obtained background to the next operation. The morphological filtering method is then used to smooth the obtained image background, and the median-filtered, denoised image is fused with the image background obtained by morphological filtering, which weakens the background and enhances and highlights the scene objects or portrait. Finally, the enhanced image is output and displayed on the display screen.
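By way of non-limiting illustration, the flow of FIG. 8 might be sketched end to end as follows; the detection and segmentation steps are collapsed into placeholders, and the masks, kernel size, and blend weight are assumptions carried over from the earlier sketches rather than parameters of the original disclosure:

import cv2
import numpy as np

def enhance_selected_scene(third_image, user_selected_masks):
    first_image = cv2.medianBlur(third_image, 5)            # denoise and save
    # Edge detection, multi-gradient detection and Bayesian fusion would produce
    # the candidate region masks shown to the user as dashed boxes (omitted here).
    background_mask = np.ones(first_image.shape[:2], dtype=bool)
    for mask in user_selected_masks:                        # regions tapped an odd number of times
        background_mask &= ~mask                            # keep the selected scene objects sharp
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
    smoothed = cv2.morphologyEx(first_image, cv2.MORPH_OPEN, kernel)
    blended = cv2.addWeighted(first_image, 0.3, smoothed, 0.7, 0)
    target = first_image.copy()
    target[background_mask] = blended[background_mask]      # weakened background
    return target                                           # enhanced image for display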
The image processing method provided in this embodiment avoids the situation in which a specific object in the image is blurred so excessively that the user cannot recognize the blurred object; it combines the recognition accuracy of each first recognition region and of the corresponding second recognition region to obtain the multiple image regions, making the recognized image regions more accurate; it determines the region of the image to be processed according to the user's selection and blurs that region, so that the user can choose the region to be processed according to his or her actual needs; and it avoids interference from image noise, making the recognition result more accurate.
Based on the foregoing embodiments, an embodiment of the present invention provides a terminal 6, which can be applied to the image processing methods provided by the embodiments corresponding to FIG. 1 and FIGS. 4 to 7. As shown in FIG. 9, the terminal may include a processor 61, a memory 62, and a communication bus 63, where:
the communication bus 63 is configured to implement a communication connection between the processor 61 and the memory 62; and
the processor 61 is configured to execute the image processing program stored in the memory 62 to implement the following steps:
acquiring a first image to be processed, and recognizing the first image to obtain multiple image regions;
determining a region to be processed from the multiple image regions, and blurring the objects in the region to be processed to obtain a second image;
obtaining a target image based on the first image and the second image.
In other embodiments of the present invention, the processor 61 is configured to execute the step, stored in the memory 62, of recognizing the first image to obtain multiple image regions, so as to implement the following step:
recognizing the edges of the objects in the first image to obtain the multiple image regions.
In other embodiments of the present invention, the processor 61 is configured to execute the step, stored in the memory 62, of recognizing the edges of the objects in the first image to obtain the multiple image regions, so as to implement the following steps:
using a first detection method to recognize the edges of the objects in the first image to obtain multiple first recognition regions;
using a second detection method to recognize the edges of the objects in the first image to obtain multiple second recognition regions, where both the first detection method and the second detection method are used to recognize the edges of the objects in the first image;
determining the multiple image regions based on the multiple first recognition regions and the multiple second recognition regions.
In other embodiments of the present invention, the processor 61 is configured to execute the step, stored in the memory 62, of determining the multiple image regions based on the multiple first recognition regions and the multiple second recognition regions, so as to implement the following steps:
determining the accuracy of each first recognition region based on the display parameters of the pixels in that first recognition region;
determining the accuracy of each second recognition region based on the display parameters of the pixels in that second recognition region;
determining the multiple image regions based on the multiple first recognition regions, the multiple second recognition regions, the accuracy of each first recognition region, and the accuracy of each second recognition region.
In other embodiments of the present invention, the processor 61 is configured to execute the step, stored in the memory 62, of determining the region to be processed from the multiple image regions, so as to implement the following steps:
receiving a first operation on the first image;
responding to the first operation and determining, from the multiple image regions, the image region corresponding to the first operation to obtain a reference region;
determining, from the multiple image regions, the regions other than the reference region as the region to be processed.
In other embodiments of the present invention, the processor 61 is configured to execute the step, stored in the memory 62, of responding to the first operation and determining, from the multiple image regions, the image region corresponding to the first operation to obtain the reference region, so as to implement the following step:
if the first operation satisfies a first preset condition, responding to the first operation and determining, from the multiple image regions, the image region corresponding to the first operation to obtain the reference region.
In other embodiments of the present invention, the processor 61 is configured, after executing the step, stored in the memory 62, of responding to the first operation and determining, from the multiple image regions, the image region corresponding to the first operation to obtain the reference region, to implement the following steps:
receiving a second operation on the first image;
if the second operation satisfies a second preset condition, responding to the second operation, determining, from the multiple image regions, the image region corresponding to the second operation, and setting the image region corresponding to the second operation as the region to be processed.
In other embodiments of the present invention, the processor 61 is configured to execute the step, stored in the memory 62, of obtaining the target image based on the first image and the second image, so as to implement the following step:
fusing the image of the region in the first image that corresponds to the region to be processed with the second image to obtain the target image.
In other embodiments of the present invention, the processor 61 is configured to execute the step, stored in the memory 62, of acquiring the first image, so as to implement the following steps:
acquiring a third image to be processed;
filtering and denoising the third image to obtain the first image.
It should be noted that, for the specific implementation of the steps performed by the processor in this embodiment, reference may be made to the implementation in the image processing methods provided by the embodiments corresponding to FIG. 1 and FIGS. 4 to 7, which is not repeated here.
Based on the foregoing embodiments, an embodiment of the present invention provides a computer-readable storage medium storing one or more programs, where the one or more programs can be executed by one or more processors to implement the following steps:
acquiring a first image to be processed, and recognizing the first image to obtain multiple image regions;
determining a region to be processed from the multiple image regions, and blurring the objects in the region to be processed to obtain a second image;
obtaining a target image based on the first image and the second image.
In other embodiments of the present invention, the one or more programs can be executed by the one or more processors to recognize the first image to obtain multiple image regions, so as to implement the following step:
recognizing the edges of the objects in the first image to obtain the multiple image regions.
In other embodiments of the present invention, the one or more programs can be executed by the one or more processors to recognize the edges of the objects in the first image to obtain the multiple image regions, so as to implement the following steps:
using a first detection method to recognize the edges of the objects in the first image to obtain multiple first recognition regions;
using a second detection method to recognize the edges of the objects in the first image to obtain multiple second recognition regions, where both the first detection method and the second detection method are used to recognize the edges of the objects in the first image;
determining the multiple image regions based on the multiple first recognition regions and the multiple second recognition regions.
In other embodiments of the present invention, the one or more programs can be executed by the one or more processors to determine the multiple image regions based on the multiple first recognition regions and the multiple second recognition regions, so as to implement the following steps:
determining the accuracy of each first recognition region based on the display parameters of the pixels in that first recognition region;
determining the accuracy of each second recognition region based on the display parameters of the pixels in that second recognition region;
determining the multiple image regions based on the multiple first recognition regions, the multiple second recognition regions, the accuracy of each first recognition region, and the accuracy of each second recognition region.
In other embodiments of the present invention, the one or more programs can be executed by the one or more processors to determine the region to be processed from the multiple image regions, so as to implement the following steps:
receiving a first operation on the first image;
responding to the first operation and determining, from the multiple image regions, the image region corresponding to the first operation to obtain a reference region;
determining, from the multiple image regions, the regions other than the reference region as the region to be processed.
In other embodiments of the present invention, the one or more programs can be executed by the one or more processors to respond to the first operation and determine, from the multiple image regions, the image region corresponding to the first operation to obtain the reference region, so as to implement the following step:
if the first operation satisfies a first preset condition, responding to the first operation and determining, from the multiple image regions, the image region corresponding to the first operation to obtain the reference region.
In other embodiments of the present invention, the one or more programs can be executed by the one or more processors, after responding to the first operation and determining, from the multiple image regions, the image region corresponding to the first operation to obtain the reference region, to implement the following steps:
receiving a second operation on the first image;
if the second operation satisfies a second preset condition, responding to the second operation, determining, from the multiple image regions, the image region corresponding to the second operation, and setting the image region corresponding to the second operation as the region to be processed.
In other embodiments of the present invention, the one or more programs can be executed by the one or more processors to obtain the target image based on the first image and the second image, so as to implement the following step:
fusing the image of the region in the first image that corresponds to the region to be processed with the second image to obtain the target image.
In other embodiments of the present invention, the one or more programs can be executed by the one or more processors to acquire the first image, so as to implement the following steps:
acquiring a third image to be processed;
filtering and denoising the third image to obtain the first image.
It should be noted that, for the specific implementation of the steps performed by the processor in this embodiment, reference may be made to the implementation in the image processing methods provided by the embodiments corresponding to FIG. 1 and FIGS. 4 to 7, which is not repeated here.
It should be noted that the above processor may be at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a central processing unit (CPU), a controller, a microcontroller, and a microprocessor. It can be understood that other electronic devices may also be used to implement the above processor function, which is not specifically limited in the embodiments of the present application.
It should be noted that the above computer storage medium/memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a ferromagnetic random access memory (FRAM), a flash memory, a magnetic surface memory, an optical disc, or a compact disc read-only memory (CD-ROM); it may also be any of various terminals including one of the above memories or any combination thereof, such as a mobile phone, a computer, a tablet device, or a personal digital assistant.
It should be noted that, in this document, the terms "include", "comprise", and any variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or device that includes that element.
The serial numbers of the above embodiments of the present application are for description only and do not represent the relative merits of the embodiments.
From the description of the above implementations, a person skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus the necessary general-purpose hardware platform, and of course also by hardware, although in many cases the former is the better implementation. Based on such an understanding, the part of the technical solution of the present application that in essence contributes over the prior art can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present application.
The present application is described with reference to flowcharts and/or block diagrams of the methods, devices (systems), and computer program products according to the embodiments of the present application. It should be understood that each process and/or block in the flowcharts and/or block diagrams, and combinations of processes and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or another programmable data processing device to operate in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operation steps are performed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device thus provide steps for implementing the functions specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.
The above are only preferred embodiments of the present application and do not thereby limit the patent scope of the present application. Any equivalent structural or process change made using the contents of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, falls likewise within the scope of patent protection of the present application.

Claims (13)

  1. An image processing method, wherein the method comprises:
    acquiring a first image to be processed, and recognizing the first image to obtain a plurality of image regions;
    determining a region to be processed from the plurality of image regions, and performing blurring processing on an object in the region to be processed to obtain a second image; and
    acquiring a target image based on the first image and the second image.
  2. The method according to claim 1, wherein recognizing the first image to obtain the plurality of image regions comprises:
    recognizing edges of objects in the first image to obtain the plurality of image regions.
  3. The method according to claim 2, wherein recognizing the edges of the objects in the first image to obtain the plurality of image regions comprises:
    recognizing the edges of the objects in the first image by using a first detection method to obtain a plurality of first recognition regions;
    recognizing the edges of the objects in the first image by using a second detection method to obtain a plurality of second recognition regions, wherein both the first detection method and the second detection method are used to recognize the edges of the objects in the first image; and
    determining the plurality of image regions based on the plurality of first recognition regions and the plurality of second recognition regions.
  4. The method according to claim 3, wherein determining the plurality of image regions based on the plurality of first recognition regions and the plurality of second recognition regions comprises:
    determining an accuracy rate of each first recognition region based on display parameters of pixels in that first recognition region;
    determining an accuracy rate of each second recognition region based on display parameters of pixels in that second recognition region; and
    determining the plurality of image regions based on the plurality of first recognition regions, the plurality of second recognition regions, the accuracy rate of each first recognition region, and the accuracy rate of each second recognition region.
  5. The method according to claim 1, wherein determining the region to be processed from the plurality of image regions comprises:
    receiving a first operation on the first image;
    responding to the first operation and determining, from the plurality of image regions, an image region corresponding to the first operation to obtain a reference region; and
    determining, from the plurality of image regions, a region other than the reference region as the region to be processed.
  6. The method according to claim 5, wherein responding to the first operation and determining, from the plurality of image regions, the image region corresponding to the first operation to obtain the reference region comprises:
    in a case where the first operation satisfies a first preset condition, responding to the first operation and determining, from the plurality of image regions, the image region corresponding to the first operation to obtain the reference region.
  7. The method according to claim 6, wherein, after responding to the first operation and determining, from the plurality of image regions, the image region corresponding to the first operation to obtain the reference region, the method further comprises:
    receiving a second operation on the first image; and
    in a case where the second operation satisfies a second preset condition, responding to the second operation and determining, from the plurality of image regions, an image region corresponding to the second operation as the region to be processed.
  8. The method according to claim 1, wherein acquiring the target image based on the first image and the second image comprises:
    fusing an image of a region in the first image that corresponds to the region to be processed with the second image to obtain the target image.
  9. The method according to claim 1, wherein acquiring the first image comprises:
    acquiring a third image to be processed; and
    performing filtering and denoising on the third image to obtain the first image.
  10. A terminal, wherein the terminal comprises a processor, a memory, and a communication bus, wherein:
    the communication bus is configured to implement a communication connection between the processor and the memory; and
    the processor is configured to execute a program of an image processing method stored in the memory to implement the following steps:
    acquiring a first image to be processed, and recognizing the first image to obtain a plurality of image regions;
    determining a region to be processed from the plurality of image regions, and performing blurring processing on an object in the region to be processed to obtain a second image; and
    acquiring a target image based on the first image and the second image.
  11. The terminal according to claim 10, wherein, when executing the step of performing blurring processing on the object in the region to be processed to obtain the second image, the processor is further configured to implement the following steps:
    recognizing edges of objects in the first image by using a first detection method to obtain a plurality of first recognition regions;
    recognizing the edges of the objects in the first image by using a second detection method to obtain a plurality of second recognition regions, wherein both the first detection method and the second detection method are used to recognize the edges of the objects in the first image; and
    determining the plurality of image regions based on the plurality of first recognition regions and the plurality of second recognition regions.
  12. The terminal according to claim 10, wherein, when executing the step of determining the region to be processed from the plurality of image regions, the processor is further configured to implement the following steps:
    receiving a first operation on the first image;
    responding to the first operation and determining, from the plurality of image regions, an image region corresponding to the first operation to obtain a reference region; and
    determining, from the plurality of image regions, a region other than the reference region as the region to be processed.
  13. A computer storage medium, wherein the computer storage medium stores one or more programs, and the one or more programs are executable by one or more processors to implement the steps of the image processing method according to any one of claims 1 to 9.
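The sketches below illustrate one possible reading of the claimed method in Python with OpenCV; they are not part of the original publication's disclosure. The claims name no concrete operators, so the median filter used for the denoising of claim 9, the Gaussian blur used for the blurring step of claim 1, and every helper and parameter name are assumptions made only for illustration.

    import cv2
    import numpy as np

    def acquire_first_image(third_image):
        # Claim 9: filter and denoise the captured (third) image to obtain the
        # first image. A 5x5 median filter is assumed; the claim only requires
        # some form of filtering and denoising.
        return cv2.medianBlur(third_image, 5)

    def blur_to_second_image(first_image, mask_to_process):
        # Claim 1: blur the objects in the regions to be processed to obtain the
        # second image. mask_to_process is a uint8 mask, 255 where blurring applies.
        blurred = cv2.GaussianBlur(first_image, (31, 31), 0)   # assumed blur operator
        second_image = first_image.copy()
        second_image[mask_to_process > 0] = blurred[mask_to_process > 0]
        return second_image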
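Claims 3 and 11 leave the two edge-detection methods open. The following sketch assumes Canny as the first detection method, a thresholded Sobel gradient as the second, and connected components as the rule that groups edge responses into recognition regions; none of these choices is prescribed by the claims.

    import cv2
    import numpy as np

    def regions_from_edges(edges):
        # Group edge responses into recognition regions via morphological closing
        # and connected components, returning one uint8 mask (0/255) per region.
        closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
        n, labels = cv2.connectedComponents(closed)
        return [(labels == i).astype(np.uint8) * 255 for i in range(1, n)]

    def first_detection(gray):
        # Assumed first detection method: Canny edge detection.
        return regions_from_edges(cv2.Canny(gray, 50, 150))

    def second_detection(gray):
        # Assumed second detection method: thresholded Sobel gradient magnitude.
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
        mag = cv2.magnitude(gx, gy)
        edges = (mag > 0.5 * mag.max()).astype(np.uint8) * 255
        return regions_from_edges(edges)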
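Claim 4 scores each recognition region from the display parameters of its pixels and then determines the final image regions from both candidate sets and their scores. The claims define neither the score nor the reconciliation rule, so the sketch below assumes boundary contrast as the accuracy measure and, wherever the two detectors produce strongly overlapping regions, keeps the higher-scoring candidate.

    import cv2
    import numpy as np

    def region_accuracy(gray, region_mask):
        # Assumed accuracy measure for claim 4: mean gradient magnitude of the
        # pixels on the region boundary, i.e. how sharply the pixels' display
        # values delimit the recognized region.
        boundary = cv2.morphologyEx(region_mask, cv2.MORPH_GRADIENT, np.ones((3, 3), np.uint8))
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
        mag = cv2.magnitude(gx, gy)
        return float(mag[boundary > 0].mean()) if (boundary > 0).any() else 0.0

    def merge_regions(gray, first_regions, second_regions, iou_thresh=0.5):
        # Reconcile the two candidate sets: where a first and a second recognition
        # region overlap strongly, keep the one with the higher accuracy; otherwise
        # keep both. The IoU pairing rule and the 0.5 threshold are assumptions.
        def iou(a, b):
            inter = np.logical_and(a > 0, b > 0).sum()
            union = np.logical_or(a > 0, b > 0).sum()
            return inter / union if union else 0.0

        kept = list(first_regions)
        for s in second_regions:
            rivals = [f for f in kept if iou(f, s) > iou_thresh]
            if not rivals:
                kept.append(s)
            elif all(region_accuracy(gray, s) > region_accuracy(gray, f) for f in rivals):
                kept = [f for f in kept if all(f is not r for r in rivals)] + [s]
        return kept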
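Claims 5 to 7 derive the regions to be blurred from a user operation on the first image. A minimal sketch follows, assuming the first operation is a touch whose hold time must exceed a threshold (standing in for the "first preset condition") and that the region containing the touch point becomes the reference region; the Touch structure and the threshold are illustrative assumptions.

    from dataclasses import dataclass
    from typing import List, Optional
    import numpy as np

    @dataclass
    class Touch:
        # Hypothetical representation of a user operation on the displayed first image.
        x: int
        y: int
        duration_s: float   # how long the touch was held

    def select_reference_region(regions: List[np.ndarray], op: Touch,
                                min_hold_s: float = 0.5) -> Optional[np.ndarray]:
        # Claims 5 and 6: if the first operation satisfies the first preset condition
        # (assumed here to be a minimum hold time), the region containing the touch
        # point becomes the reference region.
        if op.duration_s < min_hold_s:        # first preset condition not met
            return None
        for region in regions:
            if region[op.y, op.x] > 0:        # touch point falls inside this region
                return region
        return None

    def regions_to_process(regions: List[np.ndarray], reference: np.ndarray) -> List[np.ndarray]:
        # Claim 5: every image region other than the reference region is a region
        # to be processed, i.e. a region that will be blurred.
        return [r for r in regions if r is not reference]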
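The final step of claims 1 and 8 is the fusion of the two images into the target image. The sketch below assumes a per-pixel alpha blend over the region concerned, since the claims do not fix the fusion rule, and then strings together the helpers from the preceding sketches; the file name, touch coordinates, and blend weight are placeholders.

    import cv2
    import numpy as np

    def fuse_to_target(first_image, second_image, mask_to_process, alpha=0.3):
        # Claim 8: fuse the image of the region in the first image that corresponds
        # to the region to be processed with the second image to obtain the target
        # image. The alpha blend and the weight 0.3 are illustrative assumptions.
        target = second_image.copy()
        m = mask_to_process > 0
        target[m] = (alpha * first_image[m].astype(np.float32)
                     + (1 - alpha) * second_image[m].astype(np.float32)).astype(np.uint8)
        return target

    # End-to-end flow over the helpers sketched above (claims 1 to 9).
    third = cv2.imread("photo.jpg")                       # third image to be processed
    first = acquire_first_image(third)                     # claim 9
    gray = cv2.cvtColor(first, cv2.COLOR_BGR2GRAY)
    regions = merge_regions(gray, first_detection(gray), second_detection(gray))   # claims 2 to 4
    touch = Touch(x=120, y=200, duration_s=0.8)            # hypothetical first operation
    reference = select_reference_region(regions, touch)    # claims 5 and 6
    to_process = regions_to_process(regions, reference)    # claim 5
    mask = np.clip(sum(r.astype(np.uint16) for r in to_process), 0, 255).astype(np.uint8)
    second = blur_to_second_image(first, mask)             # claim 1
    target = fuse_to_target(first, second, mask)           # claims 1 and 8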

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810955721.6 2018-08-21
CN201810955721.6A CN110855876B (en) 2018-08-21 2018-08-21 Image processing method, terminal and computer storage medium

Publications (1)

Publication Number Publication Date
WO2020038065A1 (en)

Family

ID=69592348

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/090079 WO2020038065A1 (en) 2018-08-21 2019-06-05 Image processing method, terminal, and computer storage medium

Country Status (2)

Country Link
CN (1) CN110855876B (en)
WO (1) WO2020038065A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117974493A (en) * 2024-03-28 2024-05-03 荣耀终端有限公司 Image processing method and related device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8379120B2 (en) * 2009-11-04 2013-02-19 Eastman Kodak Company Image deblurring using a combined differential image
CN102737370B (en) * 2011-04-02 2015-07-01 株式会社理光 Method and device for detecting image foreground
US8873852B2 (en) * 2011-09-29 2014-10-28 Mediatek Singapore Pte. Ltd Method and apparatus for foreground object detection
CN103235692A (en) * 2013-03-28 2013-08-07 中兴通讯股份有限公司 Touch-screen device and method for touch-screen device to select target objects
CN104794696B (en) * 2015-05-04 2018-05-11 长沙市英迈瑞孚智能技术有限公司 A kind of image goes motion blur method and device
CN107369134A (en) * 2017-06-12 2017-11-21 上海斐讯数据通信技术有限公司 A kind of image recovery method of blurred picture
CN107730460B (en) * 2017-09-26 2020-02-14 维沃移动通信有限公司 Image processing method and mobile terminal

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101587586A (en) * 2008-05-20 2009-11-25 株式会社理光 Device and method for processing images
US8306283B2 (en) * 2009-04-21 2012-11-06 Arcsoft (Hangzhou) Multimedia Technology Co., Ltd. Focus enhancing method for portrait in digital image
CN103366352A (en) * 2012-03-30 2013-10-23 北京三星通信技术研究有限公司 Device and method for producing image with background being blurred
US20140184853A1 (en) * 2012-12-27 2014-07-03 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and image processing program
CN105049695A (en) * 2015-07-07 2015-11-11 广东欧珀移动通信有限公司 Video recording method and device
CN105578070A (en) * 2015-12-21 2016-05-11 深圳市金立通信设备有限公司 Image processing method and terminal
CN105611154A (en) * 2015-12-21 2016-05-25 深圳市金立通信设备有限公司 Image processing method and terminal

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112184610A (en) * 2020-10-13 2021-01-05 深圳市锐尔觅移动通信有限公司 Image processing method and device, storage medium and electronic equipment
CN112184610B (en) * 2020-10-13 2023-11-28 深圳市锐尔觅移动通信有限公司 Image processing method and device, storage medium and electronic equipment
WO2024022149A1 (en) * 2022-07-29 2024-02-01 马上消费金融股份有限公司 Data enhancement method and apparatus, and electronic device
CN116399401A (en) * 2023-04-14 2023-07-07 浙江年年发农业开发有限公司 Agricultural planting system and method based on artificial intelligence
CN116399401B (en) * 2023-04-14 2024-02-09 浙江年年发农业开发有限公司 Agricultural planting system and method based on artificial intelligence

Also Published As

Publication number Publication date
CN110855876A (en) 2020-02-28
CN110855876B (en) 2022-04-05

Similar Documents

Publication Publication Date Title
WO2020038065A1 (en) Image processing method, terminal, and computer storage medium
US11882357B2 (en) Image display method and device
KR101662846B1 (en) Apparatus and method for generating bokeh in out-of-focus shooting
JP6961797B2 (en) Methods and devices for blurring preview photos and storage media
CN105426861B (en) Lane line determines method and device
WO2021022983A1 (en) Image processing method and apparatus, electronic device and computer-readable storage medium
CN110493527B (en) Body focusing method and device, electronic equipment and storage medium
WO2017054314A1 (en) Building height calculation method and apparatus, and storage medium
WO2015184408A1 (en) Scene motion correction in fused image systems
CN105303514A (en) Image processing method and apparatus
EP3798975B1 (en) Method and apparatus for detecting subject, electronic device, and computer readable storage medium
CN110276831B (en) Method and device for constructing three-dimensional model, equipment and computer-readable storage medium
CN110503704B (en) Method and device for constructing three-dimensional graph and electronic equipment
WO2017173578A1 (en) Image enhancement method and device
Asha et al. Auto removal of bright spot from images captured against flashing light source
CN112036209A (en) Portrait photo processing method and terminal
CN108234826B (en) Image processing method and device
Choi et al. A method for fast multi-exposure image fusion
CN113658197B (en) Image processing method, device, electronic equipment and computer readable storage medium
WO2020098325A1 (en) Image synthesis method, electronic device and storage medium
CN114372931A (en) Target object blurring method and device, storage medium and electronic equipment
CN108399617B (en) Method and device for detecting animal health condition
Chen et al. Hybrid saliency detection for images
CN112839167A (en) Image processing method, image processing device, electronic equipment and computer readable medium
JP3860540B2 (en) Entropy filter and region extraction method using the filter

Legal Events

Code  Event
121   EP: the EPO has been informed by WIPO that EP was designated in this application
      Ref document number: 19852986; Country of ref document: EP; Kind code of ref document: A1
NENP  Non-entry into the national phase
      Ref country code: DE
32PN  EP: public notification in the EP bulletin as the address of the addressee cannot be established
      Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25.06.2021)
122   EP: PCT application non-entry in the European phase
      Ref document number: 19852986; Country of ref document: EP; Kind code of ref document: A1
Kind code of ref document: A1