CN110874814A - Image processing method, image processing device and terminal equipment - Google Patents

Image processing method, image processing device and terminal equipment

Info

Publication number
CN110874814A
CN110874814A (application CN201811013836.XA)
Authority
CN
China
Prior art keywords
image; foreground; position information; processed; detecting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811013836.XA
Other languages
Chinese (zh)
Other versions
CN110874814B (en)
Inventor
张弓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201811013836.XA priority Critical patent/CN110874814B/en
Publication of CN110874814A publication Critical patent/CN110874814A/en
Application granted granted Critical
Publication of CN110874814B publication Critical patent/CN110874814B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/04 Context-preserving transformations, e.g. by using an importance map
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/20 Image enhancement or restoration using local operators
    • G06T 5/30 Erosion or dilatation, e.g. thinning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Facsimile Image Signal Circuits (AREA)

Abstract

The application provides an image processing method, an image processing device and a terminal device, wherein the method comprises the following steps: acquiring an image to be processed; detecting a foreground in the image to be processed to obtain position information of the foreground; performing an expansion operation on the image area indicated by the position information in the image to be processed to obtain corrected position information; and blurring the area of the image to be processed other than the image area indicated by the corrected position information to obtain a processed image. The blurred image obtained in this way appears more natural.

Description

Image processing method, image processing device and terminal equipment
Technical Field
The present application belongs to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, a terminal device, and a computer-readable storage medium.
Background
Most existing terminal devices (such as mobile phones and tablet computers) can capture images with a background blurring effect. A conventional background blurring method detects the foreground region and the background region in an image to be processed and blurs the background region, so as to obtain a blurred image with a clear foreground region and a blurred background region.
However, in a blurred image obtained by the conventional background blurring method, the boundary between the foreground region and the background region is abrupt, which makes the blurred image look unnatural. For example, when the foreground is a portrait, the image area on one side of the portrait's edge contour is relatively clear while the image area on the other side is relatively blurred, so the edge area of the portrait in the blurred image looks unnatural.
Disclosure of Invention
In view of the above, the present application provides an image processing method, an image processing apparatus, a terminal device and a computer-readable storage medium, which can solve the technical problem that a blurred image obtained by the conventional background blurring method is unnatural.
A first aspect of the present application provides an image processing method, including:
acquiring an image to be processed;
detecting the foreground in the image to be processed to obtain the position information of the foreground;
performing expansion operation on the image area indicated by the position information in the image to be processed to obtain corrected position information;
blurring the area of the image to be processed except the image area indicated by the corrected position information to obtain a processed image.
A second aspect of the present application provides an image processing apparatus comprising:
the image acquisition module is used for acquiring an image to be processed;
the foreground detection module is used for detecting the foreground in the image to be processed to obtain the position information of the foreground;
the expansion operation module is used for performing expansion operation on the image area indicated by the position information in the image to be processed to obtain corrected position information;
and the blurring processing module is used for blurring the area except the image area indicated by the corrected position information in the image to be processed to obtain a processed image.
A third aspect of the present application provides a terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method according to the first aspect when executing the computer program.
A fourth aspect of the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of the first aspect as described above.
A fifth aspect of the present application provides a computer program product comprising a computer program which, when executed by one or more processors, performs the steps of the method of the first aspect as described above.
In view of the above, the present application provides an image processing method. First, an image to be processed is acquired, for example an image shot by a user with a mobile phone camera. Second, the foreground in the image to be processed is detected to obtain position information of the foreground, and the image area where the foreground is located is determined from this position information. Then, an expansion operation is performed on the image area indicated by the position information, that is, on the image area where the foreground is located, to obtain corrected position information; the image area indicated by the corrected position information covers the image area where the foreground is located. Finally, the area of the image to be processed other than the image area indicated by the corrected position information is blurred to obtain the processed image. Because blurring is restricted to the area outside the image area indicated by the corrected position information, and that area is obtained by expanding the image area where the foreground is located, the phenomenon that one side of the foreground contour line is clear while the other side is blurred does not occur in the processed image, so the boundary between the foreground area and the background area is no longer abrupt.
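To make the above flow concrete, the following is a minimal end-to-end sketch of the pipeline in Python using OpenCV and NumPy. It only illustrates the idea summarised above and is not the claimed implementation: the function name, the fixed 15 x 15 structuring element and the single Gaussian blur kernel are assumed choices.

```python
import cv2
import numpy as np

def blur_background(image, foreground_mask, dilate_size=15, blur_size=21):
    """Blur everything outside the (dilated) foreground mask.

    image           : H x W x 3 uint8 image to be processed
    foreground_mask : H x W uint8 mask, 255 where the foreground was detected
    dilate_size, blur_size : assumed example values
    """
    # Expansion operation: grow the detected foreground area so that the
    # corrected area fully covers the foreground contour.
    kernel = np.ones((dilate_size, dilate_size), np.uint8)
    corrected_mask = cv2.dilate(foreground_mask, kernel)

    # Blur only the region outside the area indicated by the corrected mask.
    blurred = cv2.GaussianBlur(image, (blur_size, blur_size), 0)
    result = blurred.copy()
    keep = corrected_mask > 0
    result[keep] = image[keep]
    return result
```

Because the mask is expanded before it is used to protect pixels from blurring, the pixels just outside the detected foreground contour also stay sharp, which is why the boundary no longer looks abrupt.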
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained by those skilled in the art based on these drawings without inventive effort.
Fig. 1 is a schematic flow chart illustrating an implementation of an image processing method according to an embodiment of the present application;
fig. 2 is a schematic diagram of detecting foreground position information according to depth information according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a process for implementing a training process of a full convolution neural network model according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a training process of a full convolution neural network model according to an embodiment of the present application;
fig. 5 is a schematic flow chart of an implementation of step S103 according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a dilation operation according to an embodiment of the present application;
fig. 7 is a schematic flow chart illustrating an implementation of another image processing method according to the second embodiment of the present application;
fig. 8 is a schematic diagram of a modified location information determining method according to a second embodiment of the present application;
fig. 9 is a schematic structural diagram of an image processing apparatus according to a third embodiment of the present application;
fig. 10 is a schematic structural diagram of a terminal device according to a fourth embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
The image processing method provided by the embodiment of the application can be applied to terminal devices, and the terminal devices include, but are not limited to: smart phones, tablet computers, learning machines, intelligent wearable devices, and the like.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
In addition, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not intended to indicate or imply relative importance.
In order to explain the technical solution of the present application, the following description will be given by way of specific examples.
Example one
Referring to fig. 1, an image processing method according to a first embodiment of the present application is described below, where the image processing method includes:
in step S101, an image to be processed is acquired;
In the embodiment of the present application, an image to be processed is first acquired. The image to be processed may be an image shot by the user with the terminal device, for example an image captured after the user starts the camera application on a mobile phone and taps the shooting button. It may also be a preview frame collected by the camera after the user starts the camera or video camera of the terminal device, for example a preview frame collected after the user starts the camera application on a mobile phone. It may also be an image received through another application, for example an image sent by another WeChat contact and received by the user in WeChat; an image downloaded from the Internet, for example downloaded in a browser over a carrier network; or a frame of a video, for example one frame of a television programme the user is watching. The source of the image to be processed is not limited here.
In step S102, detecting a foreground in the image to be processed to obtain position information of the foreground;
In the embodiment of the present application, after the image to be processed is acquired, blurring of the image to be processed is implemented through the subsequent steps S102 to S104. Therefore, before step S102, it may first be detected whether the user has input an instruction instructing blurring of the image to be processed; step S102 and the subsequent steps are executed only after such an instruction is detected.
In addition, in this embodiment of the application, if the to-be-processed image acquired in step S101 is an image acquired by a camera in the terminal device, step S102 may be:
First, depth information of the image to be processed is obtained. The depth information may be obtained based on structured light, a TOF (Time of Flight) camera, a binocular camera or similar methods, and indicates the distance of each pixel point in the image to be processed from the terminal device. Second, each pixel point whose distance from the terminal device is smaller than a preset distance is determined. Then, the position information of these pixel points is determined as the position information of the foreground in the image to be processed. As shown in fig. 2, assume that the image 201 is an image to be processed captured by the terminal device 200. In step S102, the depth information of the image to be processed 201 is first obtained; for example, it indicates that the distance between pixel point A and the terminal device 200 is 2 meters, and the distance between pixel point B and the terminal device 200 is 1.8 meters. Second, according to the depth information, all pixel points whose distance from the terminal device 200 is smaller than the preset distance are determined; the image area formed by these pixel points is the foreground area of the image to be processed 201. Assuming the preset distance is 2.2 meters, both pixel point A and pixel point B are pixel points forming the foreground of the image 201. Then, the position information of each such pixel point is determined as the position information of the foreground in the image to be processed 201. Suppose pixel point A and pixel point B are the only pixel points whose distance from the terminal device 200 is smaller than the preset distance. The position information of pixel point A is (1000, 1000), that is, pixel point A is located 1000 pixel points from the first pixel point O at the upper left corner in the X direction and 1000 pixel points from point O in the Y direction; the position information of pixel point B is (1800, 2300), that is, pixel point B is located 1800 pixel points from point O in the X direction and 2300 pixel points from point O in the Y direction. The position information of the foreground in the image to be processed 201 is therefore (1000, 1000) and (1800, 2300). In addition, in the embodiment of the present application, the position information of a pixel point may be represented by the number of pixel points separating it from a reference point, or by the distance from the reference point; for example, the position information of pixel point A in fig. 2 may also be represented as (3cm, 3cm), that is, pixel point A is located 3cm from point O in the X direction and 3cm from point O in the Y direction. The method for representing the position information of a pixel point is not limited in the present application.
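As an illustration of this depth-based detection, the sketch below thresholds a depth map to collect the positions of all pixel points closer than the preset distance. The 2.2 m value and the (x, y) pixel-offset convention follow the example of pixel points A and B above; the function name is only an assumption.

```python
import numpy as np

def detect_foreground_by_depth(depth_map, preset_distance_m=2.2):
    """Return (x, y) positions of pixel points closer to the device than preset_distance_m.

    depth_map : H x W float array; each entry is the distance (in metres) of that
                pixel point from the terminal device.
    """
    ys, xs = np.nonzero(depth_map < preset_distance_m)
    # Positions are expressed as pixel offsets from the top-left point O, the same
    # convention used for pixel point A (1000, 1000) and pixel point B (1800, 2300).
    return list(zip(xs.tolist(), ys.tolist()))
```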
In addition, the method for detecting the foreground in the image to be processed in step S102 to obtain the position information of the foreground may further include:
The foreground in the image to be processed may be detected using a trained Fully Convolutional Network (FCN) model, thereby obtaining the position information of the foreground in the image to be processed. Compared with a CNN (Convolutional Neural Network) model, an FCN model has no fully connected layer, so it can output class information for every pixel point of the input image, and it is therefore widely used in the field of image segmentation.
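To illustrate what "no fully connected layer" means in practice, here is a toy fully convolutional model in Python (PyTorch). The layer sizes, the sigmoid output and the bilinear upsampling are assumptions made purely for this sketch; they are not the structure of the model actually trained in this application.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFCN(nn.Module):
    """Toy fully convolutional network: every layer is convolutional, so the output
    keeps a spatial layout and yields a foreground score for each pixel point."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # A 1x1 convolution replaces the fully connected classifier of a CNN,
        # classifying each spatial location instead of the whole image.
        self.classifier = nn.Conv2d(32, 1, 1)

    def forward(self, x):
        score = self.classifier(self.features(x))
        # Upsample back to the input resolution so every input pixel gets a label.
        score = F.interpolate(score, size=x.shape[2:], mode="bilinear", align_corners=False)
        return torch.sigmoid(score)
```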
For example, the training process of the above-mentioned trained FCN model may be as shown in fig. 3, and includes steps S301 to S304:
in step S301, obtaining each sample image and foreground position information corresponding to each sample image in advance;
In this step, the sample images may be selected from a sample database, and the foreground position information corresponding to each sample image is acquired. For example, as shown in fig. 4, an initial FCN model 401 is trained with sample image 1, sample image 2 and sample image 3 together with foreground position information A corresponding to sample image 1, foreground position information B corresponding to sample image 2, and foreground position information C corresponding to sample image 3. Each piece of foreground position information consists of the position information of every pixel point in the foreground region of the corresponding sample image; as shown in fig. 4, the foreground position information A consists of the position information of each pixel point in the foreground region X (the shaded portion) of sample image 1. In addition, to enable the trained FCN model to identify the foreground position in an input image more accurately, as many sample images as possible may be acquired in step S301.
In step S302, each sample image is input into an initial FCN model, so that the initial FCN model detects position information of the foreground in each sample image;
in this embodiment of the present application, an initial FCN model for detecting a foreground position in an input image is first established, and then each sample image obtained in step S301 is respectively input into the initial FCN model, so that the initial FCN model outputs position information of a foreground in each sample image. As shown in fig. 4, the initial FCN model 401 outputs position information a1 of the foreground in the sample image 1, position information B1 of the foreground in the sample image 2, and position information C1 of the foreground in the sample image 3.
In step S303, determining the detection accuracy of the initial FCN model according to the foreground position information corresponding to each pre-acquired sample image and the position information of the foreground in each sample image detected by the initial FCN model;
In this embodiment of the present application, whether the initial FCN model has accurately detected the foreground position of a sample image is judged from the position information of the foreground output by the initial FCN model for that sample image and the foreground position information of the sample image acquired in advance in step S301. All sample images are traversed, the proportion of sample images whose foreground position is detected accurately is calculated, and this proportion is taken as the detection accuracy of the initial FCN model.
As shown in fig. 4, whether the initial FCN model 401 has accurately detected the foreground position in sample image 1 may be judged from the position information A1 of the foreground output by the initial FCN model 401 for sample image 1 and the foreground position information A acquired in step S301. One specific method is: determine the number of pixel points in the intersection of the foreground position information A1 and the foreground position information A, and the number of pixel points in their union; if the ratio of the number of pixel points in the intersection of A1 and A to the number of pixel points in the union of A1 and A is greater than a preset ratio, the FCN model 401 is considered to have accurately detected the foreground position in sample image 1. Alternatively, according to the foreground position information A and the position information A1, the area intersection ratio of the image region determined by A in sample image 1 and the image region determined by A1 in sample image 1 may be calculated; if this area intersection ratio is greater than a preset area intersection ratio, the FCN model 401 is considered to have accurately detected the foreground position in sample image 1. The method for judging whether the FCN model accurately detects the foreground position in a sample image is not limited in this application. Finally, the remaining sample images, namely sample image 2 and sample image 3, are traversed, and the proportion of sample images whose foreground position the initial FCN model 401 detects accurately is calculated, thereby obtaining the detection accuracy of the initial FCN model 401.
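The pixel-counting check described above amounts to an intersection-over-union test between the detected mask and the ground-truth mask. A minimal sketch follows; the 0.5 preset ratio is an assumed value, and boolean NumPy masks stand in for the position-information sets A1 and A.

```python
import numpy as np

def foreground_detected_accurately(pred_mask, gt_mask, preset_ratio=0.5):
    """Step S303 check for one sample image: intersection pixels / union pixels > preset ratio.

    pred_mask, gt_mask : boolean H x W arrays (detected and ground-truth foreground).
    """
    intersection = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return union > 0 and intersection / union > preset_ratio

def detection_accuracy(pred_masks, gt_masks):
    """Proportion of sample images whose foreground position was detected accurately."""
    hits = sum(foreground_detected_accurately(p, g) for p, g in zip(pred_masks, gt_masks))
    return hits / len(gt_masks)
```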
In step S304, continuously adjusting each parameter of the current FCN model, and continuously detecting the foreground position in each sample image through the FCN model after parameter adjustment until the detection accuracy of the FCN model after parameter adjustment is greater than a preset accuracy, taking the current FCN model as the FCN model after training;
In general, the initial FCN model cannot detect the foreground position in the sample images very accurately. Therefore, the parameters of the initial FCN model need to be adjusted, and the sample images obtained in step S301 are input into the parameter-adjusted FCN model again so that it continues to detect the foreground position in each sample image; the detection accuracy of the parameter-adjusted FCN model is then obtained again. The parameters of the current FCN model are adjusted repeatedly until its detection accuracy is greater than the preset accuracy, and the current FCN model is then taken as the trained FCN model. Common parameter-adjustment methods include the Stochastic Gradient Descent (SGD) algorithm, the momentum update algorithm, and the like; the method used for adjusting the parameters is not limited here.
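The adjust-and-re-check loop of step S304 can be pictured with the short training sketch below, written in Python with PyTorch. It assumes a model such as the TinyFCN sketch above that outputs per-pixel foreground probabilities; the SGD learning rate, momentum, target accuracy, IoU threshold and round limit are all illustrative values, not parameters disclosed by this application.

```python
import torch

def iou(pred, gt):
    """Intersection-over-union of two boolean mask tensors."""
    union = (pred | gt).sum().item()
    return (pred & gt).sum().item() / union if union else 0.0

def train_until_accurate(model, images, gt_masks, preset_accuracy=0.95,
                         iou_threshold=0.5, lr=0.01, max_rounds=100):
    """Keep adjusting the model parameters (step S304) until the proportion of sample
    images whose foreground is detected with IoU above iou_threshold exceeds preset_accuracy."""
    optimiser = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = torch.nn.BCELoss()
    for _ in range(max_rounds):
        for img, gt in zip(images, gt_masks):        # img: 3 x H x W, gt: 1 x H x W in {0, 1}
            optimiser.zero_grad()
            loss = loss_fn(model(img.unsqueeze(0)), gt.unsqueeze(0))
            loss.backward()
            optimiser.step()
        with torch.no_grad():
            hits = sum(iou(model(i.unsqueeze(0)) > 0.5, g.unsqueeze(0) > 0.5) > iou_threshold
                       for i, g in zip(images, gt_masks))
        if hits / len(images) > preset_accuracy:     # detection accuracy above the preset accuracy
            break
    return model
```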
In step S103, performing dilation operation on an image area indicated by the position information of the foreground in the image to be processed to obtain corrected position information;
in the embodiment of the present application, a specific execution process of step S103 may be as shown in fig. 5, and includes steps S501 to S503:
in step S501, setting a pixel value of each pixel point in an image region indicated by the position information of the foreground in the image to be processed as a first pixel value, and setting other pixel values in the image to be processed as second pixel values, to obtain a binarized image;
As shown in fig. 6, assume that the image 601 is the image to be processed acquired in step S101, and the shaded area in the image 601 is the image area indicated by the position information of the foreground detected in step S102, that is, the foreground area of the image 601. In step S501, the pixel value of each pixel point in the foreground area of the image 601 may be set to a uniform first pixel value, for example 255, and the pixel values of the remaining pixel points of the image 601 may be set to a uniform second pixel value, for example 0, so as to obtain the binarized image 602.
In step S502, performing an expansion operation on an image region having a pixel value equal to the first pixel value in the binarized image to obtain a corrected binarized image;
Specifically, step S502 may be executed as follows. First, a first traversal frame is defined; it contains one or more reference points, may be a rectangular frame, a circular frame or the like, and is smaller than the binarized image. Second, all pixel points of the binarized image are traversed with the first traversal frame. If the image area covered by the first traversal frame contains a pixel point with the first pixel value, the pixel values of the reference points in the first traversal frame are modified to the first pixel value; if the pixel values of all pixel points in the covered area are the second pixel value, the pixel values of the pixel points in the first traversal frame are not modified.
To describe this step more clearly, refer again to fig. 6. A 3 × 3 rectangular frame 603 (i.e., the first traversal frame) is defined, in which the pixel at the second row and second column is the reference point. First, the rectangular frame 603 is placed over the upper left corner region of the binarized image 602; at this moment, the pixel values of the pixels covered by the rectangular frame 603 are all the second pixel value, so no pixel value is modified. Second, the rectangular frame is moved one pixel point to the right, and it is judged again whether the covered image area contains a pixel point with the first pixel value. If yes, the pixel value of the reference point in the rectangular frame 603 is modified to the first pixel value; if not, no pixel value is modified and the frame continues to move one pixel point to the right (or downward), until the whole binarized image 602 has been traversed, thereby obtaining the corrected binarized image. Those skilled in the art will readily appreciate that the image area with the first pixel value in the corrected binarized image is larger than the image area with the first pixel value in the original binarized image.
In step S503, the position information of each pixel point of the corrected binary image whose pixel value is the first pixel value is determined as the corrected position information.
After the corrected binarized image is obtained, the corrected position information can be determined. Those skilled in the art will readily understand that, in the image to be processed, the image area indicated by the corrected position information covers the foreground area determined in step S102.
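Steps S501 to S503 together describe a standard morphological dilation of the binarized foreground. A possible sketch using OpenCV is given below; the 3 x 3 structuring element mirrors the traversal frame 603 with its reference point at the centre, and the 255/0 pixel values follow the example above. The function name is an assumption for the sketch.

```python
import cv2
import numpy as np

def corrected_position_information(image_shape, foreground_positions):
    """Steps S501-S503: binarise, dilate, then read back the corrected positions.

    image_shape          : (H, W) of the image to be processed
    foreground_positions : iterable of (x, y) coordinates of the detected foreground
    """
    # S501: first pixel value (255) inside the foreground area, second value (0) elsewhere.
    binarised = np.zeros(image_shape, np.uint8)
    for x, y in foreground_positions:
        binarised[y, x] = 255

    # S502: dilate with a 3 x 3 traversal frame; wherever the frame covers a pixel
    # holding the first pixel value, its centre reference point is set to that value.
    corrected = cv2.dilate(binarised, np.ones((3, 3), np.uint8))

    # S503: the corrected position information is every pixel that now holds 255.
    ys, xs = np.nonzero(corrected == 255)
    return list(zip(xs.tolist(), ys.tolist()))
```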
In step S104, blurring a region of the to-be-processed image excluding the image region indicated by the corrected position information to obtain a processed image;
In the embodiment of the present application, after the corrected position information is obtained, blurring processing is performed on the area of the image to be processed acquired in step S101 other than the image area indicated by the corrected position information. In step S104, different degrees of blurring may be applied according to the distance between different image areas and the foreground area in the image to be processed (this distance is not the actual distance from the foreground in the real scene, but the straight-line length between an image area and the foreground area within the image to be processed). For example, an area whose straight-line distance from the foreground area is short may be blurred to a smaller degree, and an area whose straight-line distance is long may be blurred to a larger degree. As shown in fig. 6, an arbitrary point L is selected in the foreground region, an arbitrary point M in image region U, and an arbitrary point N in image region V. The straight-line distance a between point M and point L is taken as the distance between image region U and the foreground region, and the straight-line distance b between point N and point L is taken as the distance between image region V and the foreground region. If a is 5cm and b is 3cm, image region U may be blurred to a higher degree and image region V to a lower degree, so that the background closer to the foreground region remains clearer and the blurred image looks more natural.
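One way to realise this distance-dependent blurring is to compute, for every background pixel, its straight-line distance from the corrected foreground area and pick a blur strength from that distance. The sketch below uses a distance transform and two Gaussian kernels; the pixel threshold and both kernel sizes are assumed example values, and a real implementation might use more than two levels.

```python
import cv2
import numpy as np

def graded_blur(image, corrected_mask, near_px=50, light_size=11, heavy_size=31):
    """Blur the background more heavily the farther it lies from the foreground area.

    corrected_mask : H x W uint8 mask, 255 inside the image area indicated by the
                     corrected position information (this area is kept sharp).
    """
    # Straight-line distance of every pixel from the corrected foreground area.
    background = (corrected_mask == 0).astype(np.uint8)
    dist = cv2.distanceTransform(background, cv2.DIST_L2, 3)

    lightly_blurred = cv2.GaussianBlur(image, (light_size, light_size), 0)
    heavily_blurred = cv2.GaussianBlur(image, (heavy_size, heavy_size), 0)

    result = image.copy()
    near = (dist > 0) & (dist <= near_px)   # background close to the foreground area
    far = dist > near_px                    # background far from the foreground area
    result[near] = lightly_blurred[near]
    result[far] = heavily_blurred[far]
    return result
```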
The embodiment of the application specifies that, when blurring the image to be processed, only the area outside the image area indicated by the corrected position information is blurred. Because the corrected position information is obtained by performing an expansion operation on the image area where the foreground is located, the image area indicated by the corrected position information covers the image area where the foreground is located. Therefore, at the foreground contour line of the blurred image, the phenomenon that one side is clear while the other side is blurred does not occur, the boundary between the foreground area and the background area in the processed image is no longer abrupt, and the technical problem that a blurred image obtained by the conventional background blurring method is unnatural is solved to a certain extent.
Example two
Another image processing method is provided in the second embodiment of the present application, referring to fig. 7, including:
in step S701, an image to be processed is acquired;
in step S702, detecting a foreground in the image to be processed to obtain position information of the foreground;
in the second embodiment of the present application, the steps S701 to S702 are the same as the steps S101 to S102 in the first embodiment, and specific reference may be made to the description of the first embodiment, which is not repeated herein.
In step S703, traversing all pixel points of the image to be processed by using a predefined second traversal frame, and determining whether an image area covered by the second traversal frame includes both a foreground area and a background area according to position information of a foreground in the image to be processed, where a size of the second traversal frame is smaller than an image size of the image to be processed;
in step S704, if yes, all the image areas framed in the second traversal frame are determined as correction areas;
in step S705, a union of the position information of the pixel points in each correction area and the position information of the foreground determined in step S702 is taken, and all the position information in the union is determined as correction position information;
in the first embodiment of the present application, a specific method for determining corrected location information is provided (see fig. 5), and the above steps S703-S705 are another method for determining corrected location information provided in the second embodiment of the present application.
In order to describe this determination method of the corrected position information more clearly, fig. 8 is used below. As shown in fig. 8, a 3 × 3 rectangular frame 802 is defined; the rectangular frame 802 is the second traversal frame described above (the second traversal frame may also be a circular frame or another shape). First, the rectangular frame 802 is placed over the upper left corner area of the image 801 to be processed; at this moment the image area covered by the frame is entirely background, so the rectangular frame 802 is simply moved one pixel point to the right. It is then judged again whether the image area covered by the rectangular frame 802 contains both a foreground area and a background area: if yes, the image area currently covered by the rectangular frame 802 is determined as a correction area; if not, the rectangular frame 802 continues to move one pixel point to the right (or downward). This continues until the whole image 801 to be processed has been traversed, giving each correction area. Each correction area is then merged with the foreground area determined in step S702, and the position information of all pixel points in the union is determined as the corrected position information.
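A direct, deliberately unoptimised sketch of steps S703 to S705 follows: a frame is slid over the image, every frame position that covers both foreground and background pixels is added to the correction area, and the result is united with the original foreground positions. The 3 x 3 frame size matches the example of rectangular frame 802; the function name and the use of Python sets are assumptions for the sketch only.

```python
def corrected_positions_without_binarisation(image_shape, foreground_positions, frame=3):
    """Steps S703-S705: build the corrected position information without a binarised image.

    image_shape          : (H, W) of the image to be processed
    foreground_positions : iterable of (x, y) foreground coordinates from step S702
    """
    h, w = image_shape
    foreground = set(foreground_positions)
    corrected = set(foreground)                     # start from the union with the foreground
    half = frame // 2
    for cy in range(half, h - half):
        for cx in range(half, w - half):
            covered = [(x, y) for y in range(cy - half, cy + half + 1)
                               for x in range(cx - half, cx + half + 1)]
            has_fg = any(p in foreground for p in covered)
            has_bg = any(p not in foreground for p in covered)
            if has_fg and has_bg:                   # the frame straddles the foreground contour
                corrected.update(covered)           # all framed pixels become a correction area
    return corrected
```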
In step S706, blurring an area of the to-be-processed image excluding the image area indicated by the corrected position information to obtain a processed image;
in the second embodiment of the present application, the step S706 is the same as the step S104 in the first embodiment, and specific reference may be made to the description of the first embodiment, which is not repeated herein.
Compared with the first embodiment, the second embodiment of the application defines another method for determining the corrected position information. Generally, before an expansion operation is performed on a region of an image, the image is first binarized and the binarized image is temporarily stored so that subsequent operations can act on it. The expansion method provided in the second embodiment avoids binarizing the image, so no binarized image needs to be temporarily stored, which saves storage space to a certain extent. In addition, as in the first embodiment, the boundary between the foreground region and the background region in the blurred image is no longer abrupt, so the technical problem that a blurred image obtained by the conventional background blurring method is unnatural is solved to a certain extent.
It should be understood that the sequence numbers of the steps in the first method embodiment and the second method embodiment do not mean the execution sequence, and the execution sequence of each process should be determined by the function and the inherent logic of the process, and should not limit the implementation process of the embodiments of the present application.
EXAMPLE III
A third embodiment of the present application provides an image processing apparatus, in which only a part related to the present application is shown for convenience of description, and as shown in fig. 9, an image processing apparatus 900 includes:
an image obtaining module 901, configured to obtain an image to be processed;
a foreground detection module 902, configured to detect a foreground in the image to be processed, to obtain location information of the foreground;
an expansion operation module 903, configured to perform expansion operation on an image area indicated by the position information in the image to be processed to obtain corrected position information;
a blurring processing module 904, configured to perform blurring processing on an area of the to-be-processed image, except for the image area indicated by the corrected position information, to obtain a processed image.
Optionally, the foreground detection module 902 is specifically configured to:
and when detecting that a user inputs an instruction for indicating blurring processing on the image to be processed, detecting a foreground in the image to be processed to obtain position information of the foreground.
Optionally, the expansion operation module 903 includes:
a binarization unit, configured to set a pixel value of each pixel point in an image region indicated by the position information in the image to be processed to a first pixel value, and set a pixel value of each pixel point in the image to be processed, except for the image region indicated by the position information, to a second pixel value, so as to obtain a binarized image;
an expansion unit, configured to perform expansion operation on an image region having a pixel value equal to the first pixel value in the binarized image to obtain a corrected binarized image;
and a corrected position determining unit configured to determine, as corrected position information, position information of each pixel point in the corrected binarized image, the pixel value of which is the first pixel value.
Optionally, the image to be processed is an image acquired by the terminal device;
accordingly, the foreground detecting module 902 includes:
a depth obtaining unit, configured to obtain depth information of the image to be processed;
a foreground determining unit, configured to determine, according to the depth information, each pixel point whose distance from the terminal device is smaller than a preset distance;
and a foreground position determining unit, configured to determine, as the position information of the foreground, position information of each pixel point whose distance from the terminal device is smaller than the preset distance.
Optionally, the foreground detection module 902 is specifically configured to:
and detecting the foreground in the image to be processed by using the trained full convolution neural network model to obtain the position information of the foreground.
Optionally, the fully convolutional neural network model is obtained by training a training module, where the training module includes:
the system comprises a sample acquisition unit, a foreground analysis unit and a foreground analysis unit, wherein the sample acquisition unit is used for acquiring each sample image and foreground position information corresponding to each sample image in advance;
a position detection unit, configured to input each sample image into an initial full-convolution neural network model, so that the initial full-convolution neural network model detects position information of a foreground in each sample image;
an accuracy determining unit, configured to determine a detection accuracy of the initial full convolution neural network model according to foreground position information corresponding to each pre-acquired sample image and position information of a foreground in each sample image detected by the initial full convolution neural network model;
and the parameter adjusting unit is used for continuously adjusting each parameter of the current full convolution neural network model until the detection accuracy of the full convolution neural network model after parameter adjustment is greater than the preset accuracy.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
Example four
Fig. 10 is a schematic diagram of a terminal device provided in the fourth embodiment of the present application. As shown in fig. 10, the terminal device 10 of this embodiment includes: a processor 100, a memory 101 and a computer program 102 stored in the memory 101 and executable on the processor 100. The processor 100 executes the computer program 102 to implement the steps in the various method embodiments, such as the steps S101 to S104 shown in fig. 1. Alternatively, the processor 100 implements the functions of the modules/units in the device embodiments, such as the functions of the modules 901 to 904 shown in fig. 9, when executing the computer program 102.
Illustratively, the computer program 102 may be divided into one or more modules/units, and the one or more modules/units are stored in the memory 101 and executed by the processor 100 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program 102 in the terminal device 10. For example, the computer program 102 may be divided into an image acquisition module, a foreground detection module, an expansion operation module and a blurring processing module, and the specific functions of each module are as follows:
acquiring an image to be processed;
detecting the foreground in the image to be processed to obtain the position information of the foreground;
performing expansion operation on the image area indicated by the position information in the image to be processed to obtain corrected position information;
blurring the area of the image to be processed except the image area indicated by the corrected position information to obtain a processed image.
The terminal device may include, but is not limited to, a processor 100 and a memory 101. Those skilled in the art will appreciate that fig. 10 is merely an example of a terminal device 10 and does not constitute a limitation of terminal device 10 and may include more or fewer components than shown, or some components may be combined, or different components, for example, the terminal device may also include input-output devices, network access devices, buses, etc.
The Processor 100 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 101 may be an internal storage unit of the terminal device 10, such as a hard disk or an internal memory of the terminal device 10. The memory 101 may also be an external storage device of the terminal device 10, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, or a Flash Card provided on the terminal device 10. Further, the memory 101 may include both an internal storage unit and an external storage device of the terminal device 10. The memory 101 is used to store the computer program and other programs and data required by the terminal device, and may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned functions may be distributed as different functional units and modules according to needs, that is, the internal structure of the apparatus may be divided into different functional units or modules to implement all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the above modules or units is only one logical function division, and there may be other division manners in actual implementation, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units described above, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow in the method of the embodiments described above may be implemented by a computer program, which may be stored in a computer readable storage medium and used by a processor to implement the steps of the embodiments of the methods described above. The computer program includes computer program code, and the computer program code may be in a source code form, an object code form, an executable file or some intermediate form. The computer readable medium may include: any entity or device capable of carrying the above-mentioned computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signal, telecommunication signal, software distribution medium, etc. It should be noted that the computer readable medium described above may include content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media that does not include electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. An image processing method, comprising:
acquiring an image to be processed;
detecting a foreground in the image to be processed to obtain position information of the foreground;
performing expansion operation on an image area indicated by the position information in the image to be processed to obtain corrected position information;
blurring the area except the image area indicated by the corrected position information in the image to be processed to obtain a processed image.
2. The image processing method of claim 1, wherein the detecting a foreground in the image to be processed to obtain location information of the foreground comprises:
and when detecting that a user inputs an instruction for indicating blurring processing of the image to be processed, detecting a foreground in the image to be processed to obtain position information of the foreground.
3. The image processing method according to claim 1, wherein performing a dilation operation on an image area indicated by the position information in the image to be processed to obtain corrected position information comprises:
setting the pixel value of each pixel point in the image area indicated by the position information in the image to be processed as a first pixel value, and setting the pixel value of each pixel point except the image area indicated by the position information in the image to be processed as a second pixel value to obtain a binary image;
performing expansion operation on an image area with the pixel value being the first pixel value in the binary image to obtain a corrected binary image;
and determining the position information of each pixel point with the pixel value as the first pixel value in the corrected binary image as the corrected position information.
4. The image processing method according to any one of claims 1 to 3, wherein the image to be processed is an image acquired by a terminal device;
correspondingly, the detecting the foreground in the image to be processed to obtain the position information of the foreground includes:
acquiring depth information of the image to be processed;
determining each pixel point with the distance to the terminal equipment smaller than a preset distance according to the depth information;
and determining the position information of each pixel point with the distance from the terminal equipment smaller than the preset distance as the position information of the foreground.
5. The image processing method according to any one of claims 1 to 3, wherein the detecting a foreground in the image to be processed to obtain position information of the foreground includes:
and detecting the foreground in the image to be processed by using the trained full convolution neural network model to obtain the position information of the foreground.
6. The image processing method of claim 5, wherein the training process of the full convolutional neural network model comprises:
obtaining each sample image and foreground position information corresponding to each sample image in advance;
inputting each sample image into an initial full convolution neural network model so that the initial full convolution neural network model detects position information of a foreground in each sample image;
determining the detection accuracy of the initial full convolution neural network model according to the foreground position information corresponding to each pre-acquired sample image and the position information of the foreground in each sample image detected by the initial full convolution neural network model;
continuously adjusting each parameter of the current full convolution neural network model, continuously detecting the foreground position in each sample image through the full convolution neural network model after parameter adjustment until the detection accuracy of the full convolution neural network model after parameter adjustment is greater than the preset accuracy, and determining the current full convolution neural network model as the trained full convolution neural network model.
7. An image processing apparatus characterized by comprising:
the image acquisition module is used for acquiring an image to be processed;
the foreground detection module is used for detecting a foreground in the image to be processed to obtain position information of the foreground;
the expansion operation module is used for performing expansion operation on the image area indicated by the position information in the image to be processed to obtain corrected position information;
and the blurring processing module is used for blurring the area except the image area indicated by the corrected position information in the image to be processed to obtain a processed image.
8. The image processing apparatus of claim 7, wherein the foreground detection module is specifically configured to:
and when detecting that a user inputs an instruction for indicating blurring processing of the image to be processed, detecting a foreground in the image to be processed to obtain position information of the foreground.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 6 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.
CN201811013836.XA 2018-08-31 2018-08-31 Image processing method, image processing device and terminal equipment Active CN110874814B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811013836.XA CN110874814B (en) 2018-08-31 2018-08-31 Image processing method, image processing device and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811013836.XA CN110874814B (en) 2018-08-31 2018-08-31 Image processing method, image processing device and terminal equipment

Publications (2)

Publication Number Publication Date
CN110874814A true CN110874814A (en) 2020-03-10
CN110874814B CN110874814B (en) 2023-07-28

Family

ID=69715274

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811013836.XA Active CN110874814B (en) 2018-08-31 2018-08-31 Image processing method, image processing device and terminal equipment

Country Status (1)

Country Link
CN (1) CN110874814B (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102087707A (en) * 2009-12-03 2011-06-08 索尼株式会社 Image processing equipment and image processing method
CN107038681A (en) * 2017-05-31 2017-08-11 广东欧珀移动通信有限公司 Image weakening method, device, computer-readable recording medium and computer equipment

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112862852A (en) * 2021-02-24 2021-05-28 深圳市慧鲤科技有限公司 Image processing method and device, electronic equipment and computer readable storage medium
WO2022179045A1 (en) * 2021-02-24 2022-09-01 深圳市慧鲤科技有限公司 Image processing method and apparatus, and storage medium, program and program product
CN113808154A (en) * 2021-08-02 2021-12-17 惠州Tcl移动通信有限公司 Video image processing method and device, terminal equipment and storage medium

Also Published As

Publication number Publication date
CN110874814B (en) 2023-07-28

Similar Documents

Publication Publication Date Title
CN110660066B (en) Training method of network, image processing method, network, terminal equipment and medium
CN110310229B (en) Image processing method, image processing apparatus, terminal device, and readable storage medium
US11151723B2 (en) Image segmentation method, apparatus, and fully convolutional network system
CN108921806B (en) Image processing method, image processing device and terminal equipment
CN108833784B (en) Self-adaptive composition method, mobile terminal and computer readable storage medium
CN110335216B (en) Image processing method, image processing apparatus, terminal device, and readable storage medium
CN109116129B (en) Terminal detection method, detection device, system and storage medium
CN111402170A (en) Image enhancement method, device, terminal and computer readable storage medium
CN105005972A (en) Shooting distance based distortion correction method and mobile terminal
CN109214996B (en) Image processing method and device
CN109005367B (en) High dynamic range image generation method, mobile terminal and storage medium
CN108764139B (en) Face detection method, mobile terminal and computer readable storage medium
CN113301320B (en) Image information processing method and device and electronic equipment
CN110677585A (en) Target detection frame output method and device, terminal and storage medium
CN111080665B (en) Image frame recognition method, device, equipment and computer storage medium
CN113744256A (en) Depth map hole filling method and device, server and readable storage medium
CN105049706A (en) Image processing method and terminal
CN114298902A (en) Image alignment method and device, electronic equipment and storage medium
CN110874814B (en) Image processing method, image processing device and terminal equipment
CN111563517A (en) Image processing method, image processing device, electronic equipment and storage medium
CN108805838B (en) Image processing method, mobile terminal and computer readable storage medium
CN111428740A (en) Detection method and device for network-shot photo, computer equipment and storage medium
CN113112511B (en) Method and device for correcting test paper, storage medium and electronic equipment
CN110047126B (en) Method, apparatus, electronic device, and computer-readable storage medium for rendering image
CN108810407B (en) Image processing method, mobile terminal and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant