CN105608699B - Image processing method and electronic device - Google Patents

Image processing method and electronic device

Info

Publication number
CN105608699B
CN105608699B (application CN201510997367.XA; publication CN105608699A)
Authority
CN
China
Prior art keywords
image
sub-image
area
pixel
person
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510997367.XA
Other languages
Chinese (zh)
Other versions
CN105608699A (en)
Inventor
辛晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201510997367.XA priority Critical patent/CN105608699B/en
Publication of CN105608699A publication Critical patent/CN105608699A/en
Application granted granted Critical
Publication of CN105608699B publication Critical patent/CN105608699B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses an image processing method intended to provide a better way of cutting a person out of an image (image matting). The method includes: obtaining a face image of a first person in a first image; determining a first region in the first image according to the face image, the first region indicating the contour of the sub-image of the first person in the first image; determining, according to the contour of the sub-image of the first person in the first image, whether each pixel included in a second region of the first image is a foreground pixel or a background pixel; and obtaining the sub-image corresponding to the first person from the first image according to the determined foreground pixels, the determined background pixels and depth information of the first image. The invention also discloses a corresponding electronic device.

Description

Image processing method and electronic device
Technical field
The present invention relates to the field of image processing, and in particular to an image processing method and an electronic device.
Background technique
With the continuous development of science and technology, electronic technology has developed rapidly, the variety of electronic products keeps growing, and people enjoy all kinds of conveniences brought by technological progress. For example, electronic devices such as PCs (personal computers), PADs (tablet computers) and mobile phones have become an indispensable part of people's lives, and the image processing techniques applied on these devices have also developed rapidly.
In the prior art, if a user wants to cut a person out of an image, the user may need to interact with the electronic device to select the target person in the image, for example by tracing the contour of the target person, after which the selected part of the image is cut out to obtain the person image. Such an approach depends heavily on user operations: the steps the user must perform are relatively cumbersome, the electronic device has to respond to the user repeatedly, which increases its burden, and because the user's operations may be inaccurate, the resulting cut-out may be unsatisfactory.
It can be seen that the prior art provides no satisfactory method for cutting a person out of an image.
Summary of the invention
The present application provides an image processing method and an electronic device, so as to provide a better method for cutting a person out of an image.
In a first aspect, an image processing method is provided, comprising:
obtaining a face image of a first person in a first image;
determining a first region in the first image according to the face image, the first region indicating the contour of the sub-image of the first person in the first image;
determining, according to the contour of the sub-image of the first person in the first image, whether each pixel included in a second region of the first image is a foreground pixel or a background pixel, wherein the second region includes part of the pixels in the first region and part of the pixels in the remaining area of the first image outside the first region;
obtaining the sub-image corresponding to the first person from the first image according to the determined foreground pixels, the determined background pixels and depth information of the first image.
Optionally, determining the first region in the first image according to the face image comprises:
determining a third region of the first image that contains the face image, and determining the circumscribed circle of the third region;
drawing two parallel lines in the first image to obtain two intersection points formed by the two parallel lines and the circumscribed circle, wherein the spacing between the two parallel lines equals the distance between the two eye images contained in the face image, each of the two parallel lines is perpendicular to the line connecting the two eye images, and the two intersection points lie in a first direction pointed to by the two eye images, the first direction being the direction from the face image towards the body image in the first image;
drawing a circle whose radius equals the diameter of the circumscribed circle, to obtain a first circle, wherein the two intersection points lie on the circumference of the first circle and the centre of the first circle lies in the first direction;
determining the whole region covered by the circumscribed circle and the first circle as the first region.
Optionally, determining, according to the contour of the sub-image of the first person in the first image, whether each pixel included in the second region of the first image is a foreground pixel or a background pixel comprises:
determining the second region in the first image according to the contour;
performing Gaussian mixture model (GMM) processing on the pixels included in the second region according to known foreground pixels and known background pixels, so as to determine whether each pixel in the second region is a foreground pixel or a background pixel, wherein pixels that are inside the first region and do not belong to the second region are taken as the known foreground pixels, and pixels that are outside the first region and do not belong to the second region are taken as the known background pixels.
Optionally, the method further comprises:
performing over-segmentation on the depth image corresponding to the first image to determine at least two sub-images contained in the depth image;
dividing the at least two sub-images into first-class sub-images and second-class sub-images, wherein the average depth value of all pixels in a first-class sub-image is greater than the average depth value of all pixels in a second-class sub-image;
wherein obtaining the sub-image corresponding to the first person from the first image according to the determination result and the depth information of the first image comprises:
determining a foreground sub-image and a background sub-image in the first image according to the determination result, the first-class sub-images and the second-class sub-images;
determining the foreground sub-image as the sub-image of the first person.
Optionally, before the foreground sub-image is determined as the sub-image corresponding to the first person, the method further comprises:
determining the transparency of the pixels at the edge of the foreground sub-image;
smoothing the pixels at the edge according to the transparency.
In a second aspect, an electronic device is provided, comprising:
a memory for storing instructions; and
a processor for executing the instructions to:
obtain a face image of a first person in a first image;
determine a first region in the first image according to the face image, the first region indicating the contour of the sub-image of the first person in the first image;
determine, according to the contour of the sub-image of the first person in the first image, whether each pixel included in a second region of the first image is a foreground pixel or a background pixel, wherein the second region includes part of the pixels in the first region and part of the pixels in the remaining area of the first image outside the first region; and
obtain the sub-image corresponding to the first person from the first image according to the determined foreground pixels, the determined background pixels and depth information of the first image.
Optionally, the processor is configured to:
determine a third region of the first image that contains the face image, and determine the circumscribed circle of the third region;
draw two parallel lines in the first image to obtain two intersection points formed by the two parallel lines and the boundary of the third region, wherein the spacing between the two parallel lines equals the distance between the two eye images contained in the face image, each of the two parallel lines is perpendicular to the line connecting the two eye images, and the two intersection points lie in a first direction pointed to by the two eye images, the first direction being the direction from the face image towards the body image in the first image;
draw a circle whose radius equals the diameter of the circumscribed circle, to obtain a first circle, wherein the two intersection points lie on the circumference of the first circle and the centre of the first circle lies in the first direction; and
determine the whole region covered by the circumscribed circle and the first circle as the first region.
Optionally, the processor is configured to:
determine the second region in the first image according to the contour; and
perform Gaussian mixture model (GMM) processing on the pixels included in the second region according to known foreground pixels and known background pixels, so as to determine whether each pixel in the second region is a foreground pixel or a background pixel, wherein pixels that are inside the first region and do not belong to the second region are taken as the known foreground pixels, and pixels that are outside the first region and do not belong to the second region are taken as the known background pixels.
Optionally, the processor is further configured to:
perform over-segmentation on the depth image corresponding to the first image to determine at least two sub-images contained in the depth image; and
divide the at least two sub-images into first-class sub-images and second-class sub-images, wherein the average depth value of all pixels in a first-class sub-image is greater than the average depth value of all pixels in a second-class sub-image;
wherein obtaining the sub-image corresponding to the first person from the first image according to the determined foreground pixels, the determined background pixels and the depth information of the first image comprises:
determining a foreground sub-image and a background sub-image in the first image according to the foreground pixels, the background pixels, the first-class sub-images and the second-class sub-images; and
determining the foreground sub-image as the sub-image of the first person.
Optionally, the processor is further configured to:
determine the transparency of the pixels at the edge of the foreground sub-image; and
smooth the pixels at the edge according to the transparency.
In a third aspect, an electronic device is provided, comprising:
a first obtaining module, configured to obtain a face image of a first person in a first image;
a first determining module, configured to determine a first region in the first image according to the face image, the first region indicating the contour of the sub-image of the first person in the first image;
a second determining module, configured to determine, according to the contour of the sub-image of the first person in the first image, whether each pixel included in a second region of the first image is a foreground pixel or a background pixel, wherein the second region includes part of the pixels in the first region and part of the pixels in the remaining area of the first image outside the first region; and
a second obtaining module, configured to obtain the sub-image corresponding to the first person from the first image according to the determined foreground pixels, the determined background pixels and depth information of the first image.
In the present application, the contour of the sub-image of the first person can be determined according to the face image of the first person, and the sub-image of the first person is then obtained by combining the pixels in the second region with the depth information of the first image. In this way the sub-image corresponding to the first person can be obtained from the first image more accurately and conveniently, the whole process can be completed automatically by the electronic device, and the steps the user needs to perform are few. The electronic device therefore does not need to respond to repeated user operations, which reduces its burden and also improves its image processing capability.
Detailed description of the invention
Fig. 1 is a flowchart of an image processing method in an embodiment of the present invention;
Fig. 2 is a schematic diagram of the first region in an embodiment of the present invention;
Fig. 3 is a schematic diagram of the second region in an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of an electronic device in an embodiment of the present invention;
Fig. 5 is a structural block diagram of an electronic device in an embodiment of the present invention.
Specific embodiment
To make the objectives, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Apparently, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The electronic device in the embodiments of the present invention may be any of various electronic devices such as a PC, a PAD or a mobile phone, which is not limited in the embodiments of the present invention.
Referring to Fig. 1, an embodiment of the present invention provides an image processing method that can be applied to an electronic device. The flow of the method is described as follows.
Step 101: obtain a face image of a first person in a first image;
Step 102: determine a first region in the first image according to the face image, the first region indicating the contour of the sub-image of the first person in the first image;
Step 103: determine, according to the contour of the sub-image of the first person in the first image, whether each pixel included in a second region of the first image is a foreground pixel or a background pixel, wherein the second region includes part of the pixels in the first region and part of the pixels in the remaining area of the first image outside the first region;
Step 104: obtain the sub-image corresponding to the first person from the first image according to the determined foreground pixels, the determined background pixels and the depth information of the first image.
In this embodiment of the present invention, the first image may be in any format, for example JPEG (Joint Photographic Experts Group) format or BMP (Bitmap) format.
In this embodiment of the present invention, the first image may include the image of a person and, besides the person, may also include other content such as scenery or animals.
If the first image contains only one person, that person can be the first person in this embodiment of the present invention. If the first image contains multiple persons, the first person can be any one of them: which person it is may be selected by the user, or may be determined by the electronic device, for example by taking the person occupying the largest area as the first person, which is not limited in this embodiment of the present invention.
Optionally, the first region may be determined in the first image according to the face image of the first person as follows: first determine a third region of the first image that contains the face image, and determine the circumscribed circle of the third region; then draw two parallel lines in the first image to obtain two intersection points formed by the two parallel lines and the boundary of the third region, where the spacing between the two parallel lines may be the distance between the two eye images contained in the face image, each of the two parallel lines may be perpendicular to the line connecting the two eye images, the two intersection points may lie in a first direction pointed to by the two eye images, and the first direction is the direction from the face image towards the body image in the first image; next draw a circle whose radius equals the diameter of the circumscribed circle to obtain a first circle, with the two intersection points lying on the circumference of the first circle and the centre of the first circle lying in the first direction; finally determine the whole region covered by the circumscribed circle and the first circle as the first region.
For example, as shown in Fig. 2, the electronic device can first identify, through face recognition, the third region containing the face image in the first image — in Fig. 2 the third region is the rectangular area containing the face image of the first person — and determine the circumscribed circle of that third region. Two parallel lines can then be drawn from the two eyes of the first person; these lines form two intersection points with the boundary of the third region (the boundary of the rectangular area in Fig. 2), namely points A and B. A circle (the first circle) is then drawn whose radius equals the diameter of the circumscribed circle of the third region, with points A and B lying on its circumference, and the whole region covered by the circumscribed circle and the first circle can be determined as the first region. In this way, by recognising the face, the first region characterising the contour of the person can be determined automatically, so that the sub-image corresponding to the first person can be further processed. The whole cut-out can be completed automatically without manual operation by the user; the processing is simple and fast, and the image processing capability of the electronic device is stronger.
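As an illustrative, non-binding sketch of this construction, the following Python/OpenCV snippet builds a binary mask of the first region from a detected face rectangle. The face rectangle is assumed to come from any face detector, and the assumption that the eye spacing is roughly 0.4 times the face width is only a placeholder for the measured eye distance; neither choice is prescribed by the embodiment.

import cv2
import numpy as np

def first_region_mask(image_shape, face_rect, eye_dist=None):
    """Binary mask of the first region: the union of the circumscribed circle of the
    face rectangle (third region) and the 'first circle' drawn towards the body."""
    h, w = image_shape[:2]
    x, y, fw, fh = face_rect                       # third region: rectangle containing the face image
    cx, cy = x + fw / 2.0, y + fh / 2.0            # centre of the rectangle and of its circumscribed circle
    r_circ = 0.5 * float(np.hypot(fw, fh))         # radius of the circumscribed circle
    if eye_dist is None:
        eye_dist = 0.4 * fw                        # assumption: eye spacing ~ 0.4 * face width
    dx = eye_dist / 2.0                            # the two parallel lines are eye_dist apart, centred on the face
    dy = np.sqrt(max(r_circ ** 2 - dx ** 2, 0.0))
    pA_y = cy + dy                                 # points A and B lie on the circumscribed circle, below the face
                                                   # (first direction: from the face towards the body)
    r1 = 2.0 * r_circ                              # first circle: radius = diameter of the circumscribed circle
    c1_y = pA_y + np.sqrt(max(r1 ** 2 - dx ** 2, 0.0))  # centre on the first direction so A and B lie on the circle
    mask = np.zeros((h, w), np.uint8)
    cv2.circle(mask, (int(round(cx)), int(round(cy))), int(round(r_circ)), 255, -1)
    cv2.circle(mask, (int(round(cx)), int(round(c1_y))), int(round(r1)), 255, -1)
    return mask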
Optionally, after the first region has been determined, the second region can be determined in the first image according to the contour. Then, according to known foreground pixels and known background pixels, GMM (Gaussian Mixture Model) processing is performed on the pixels included in the second region to determine whether each pixel in the second region is a foreground pixel or a background pixel, where pixels that are inside the first region and do not belong to the second region are taken as the known foreground pixels, and pixels that are outside the first region and do not belong to the second region are taken as the known background pixels.
Optionally, the second region may include part of the pixels in the first region and part of the pixels in the remaining area of the first image outside the first region. Exactly which pixels of the first region and which pixels of the remaining area the second region includes is not limited in this embodiment of the present invention. For example, as shown in Fig. 3, if the area enclosed by the solid line is the first region, then taking the boundary of the first region as a reference, one part of the second region can extend inwards into the first region and another part can extend outwards from the first region, forming for example the shaded area in Fig. 3; the second region can then be the shaded area.
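For illustration only, the shaded band of Fig. 3 can be approximated by morphological dilation and erosion of the first-region mask, as in the sketch below; the band width of 15 pixels is an arbitrary assumption, not a value taken from the embodiment.

import cv2

def second_region_mask(first_region, band_px=15):
    """Band straddling the boundary of the first region (the shaded area in Fig. 3)."""
    k = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2 * band_px + 1, 2 * band_px + 1))
    dilated = cv2.dilate(first_region, k)   # expands a little outside the first region
    eroded = cv2.erode(first_region, k)     # shrinks a little inside the first region
    band = cv2.subtract(dilated, eroded)    # second region: inner part plus outer part of the band
    # Known foreground: inside the first region and not in the band  -> 'eroded'
    # Known background: outside the first region and not in the band -> complement of 'dilated'
    return band, eroded, cv2.bitwise_not(dilated)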
The first region may characterise the contour of the first person only roughly, and in particular its boundary may not be very accurate. Therefore, after the second region has been marked off, it can be determined whether the pixels in the second region belong to the first person or to the background, that is, whether each pixel in the second region is a foreground pixel or a background pixel (in this embodiment of the present invention, a pixel belonging to the first person is considered a foreground pixel and a pixel not belonging to the first person a background pixel). How each pixel of the second region is determined to be a foreground pixel or a background pixel is not limited in this embodiment; for example, GMM processing can be performed on the pixels included in the second region, where GMM processing is a method that estimates, from pixels already determined to be foreground or background, the probability that each pixel in the unknown region is a foreground pixel or a background pixel.
For example, continuing with Fig. 3, the second region can be taken as the unknown region for GMM processing; pixels that are inside the first region and do not belong to the second region, such as the pixels in regions B and C in Fig. 3, are taken as known foreground pixels, and pixels that are outside the first region and do not belong to the second region, such as the pixels in region A in Fig. 3, are taken as known background pixels.
Through GMM processing, each pixel included in the second region corresponds to a probability value, and this probability value can be used to further confirm whether the corresponding pixel is a foreground pixel or a background pixel; that is, according to the magnitude of the probability value it is determined whether a pixel is more likely to belong to the foreground or to the background.
For example, the probability value of a foreground pixel can be preset to 1 and the probability value of a background pixel to 0. Suppose GMM processing yields probability values of 0.1, 0.3, 0.7, 0.9 and 0.5 for five pixels included in the second region. Because 0.1 and 0.3 are both closer to 0 (the difference between 0.1 and 0 is smaller than that between 0.1 and 1, and the difference between 0.3 and 0 is smaller than that between 0.3 and 1), the two pixels with probability values 0.1 and 0.3 can be determined to be background pixels; because 0.7 and 0.9 are closer to 1, the two pixels with probability values 0.7 and 0.9 can be determined to be foreground pixels; and because 0.5 lies exactly halfway between 0 and 1, the pixel with probability value 0.5 can be determined to be either a foreground pixel or a background pixel. Of course, this is only one example of a determination method; in practice other methods can also be used to decide whether the pixels in the unknown region are foreground or background pixels. For example, after the probability value of each pixel in the unknown region has been obtained, the user could also choose whether a given pixel is taken as a foreground pixel or a background pixel.
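A minimal, non-iterative sketch of this GMM step is shown below using scikit-learn: two colour GMMs are fitted on the known foreground and background pixels, a foreground probability is computed for each unknown pixel, and the pixel is assigned the nearer of the preset values 1 and 0, as in the example above. The component count of 5 is an assumption; a full GrabCut-style implementation would additionally alternate this estimation with a graph-cut step.

import numpy as np
from sklearn.mixture import GaussianMixture

def classify_unknown(fg_colors, bg_colors, unknown_colors, n_components=5):
    """fg_colors / bg_colors / unknown_colors: arrays of shape (N, 3) with pixel colours.
    Returns 1 (foreground) or 0 (background) for each unknown pixel."""
    gmm_fg = GaussianMixture(n_components=n_components).fit(fg_colors)   # model of known foreground colours
    gmm_bg = GaussianMixture(n_components=n_components).fit(bg_colors)   # model of known background colours
    log_fg = gmm_fg.score_samples(unknown_colors)    # log-likelihood under the foreground model
    log_bg = gmm_bg.score_samples(unknown_colors)    # log-likelihood under the background model
    prob_fg = 1.0 / (1.0 + np.exp(log_bg - log_fg))  # probability value between 0 and 1
    return (prob_fg >= 0.5).astype(np.uint8)         # closer to 1 -> foreground, closer to 0 -> background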
In the above manner, the pixels in the second region that belong to the sub-image of the first person can be further determined, so that a more accurate contour of the first person is obtained and the image processing capability of the electronic device is stronger.
Optionally, over-segmentation can also be performed on the depth image corresponding to the first image to determine at least two sub-images contained in the depth image; the at least two sub-images are divided into first-class sub-images and second-class sub-images, where the average depth value of all pixels in a first-class sub-image is greater than the average depth value of all pixels in a second-class sub-image; then, according to the determination result, the first-class sub-images and the second-class sub-images, a foreground sub-image and a background sub-image are determined in the first image, and the foreground sub-image can be determined as the sub-image of the first person.
The over-segmentation of the depth image corresponding to the first image can be performed according to the depth value of each pixel. For example, the first image can be segmented into at least two sub-images, and after over-segmentation the pixels in the same sub-image have depth values within the same range.
After the at least two sub-images have been determined, they can be divided into two classes with different average depth values. The division method is not limited in this embodiment of the present invention; for example, a graph cut algorithm (GraphCut) can be used to divide the at least two sub-images into first-class sub-images with larger average depth values and second-class sub-images with smaller average depth values.
The result of the aforementioned GMM processing can then be combined with the first-class and second-class sub-images obtained by classification to determine the foreground sub-image and the background sub-image in the first image, and the foreground sub-image is taken as the sub-image of the first person.
For example, the value of a foreground pixel can be preset to 1 and the value of a background pixel to 0. After the aforementioned GMM processing, each pixel included in the first image has been determined to be a foreground pixel or a background pixel, i.e. the value of each pixel is 1 or 0. Since the average depth value of all pixels in a first-class sub-image obtained by classifying the depth image is greater than that of all pixels in a second-class sub-image, all pixels in the first-class sub-images can be regarded as background pixels with value 0, and all pixels in the second-class sub-images as foreground pixels with value 1. Thus one binary image is obtained from the GMM processing and another binary image from the processing of the depth image; a logical AND can then be applied to the values of the pixels at the same positions in the two binary images, for example 1 AND 1 gives 1, 0 AND 1 gives 0, and 0 AND 0 gives 0. The final binary image of the first image is thereby obtained: the sub-image formed by all pixels with value 1 can be taken as the foreground sub-image, the sub-image formed by all pixels with value 0 as the background sub-image, and the foreground sub-image is determined as the sub-image of the first person.
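The sketch below illustrates one possible way to obtain the depth-based binary image and combine it with the GMM binary image by a logical AND. The SLIC superpixel routine from scikit-image (version 0.19 or later for the channel_axis argument) stands in for the unspecified over-segmentation, and splitting the segments at the global mean depth is likewise an illustrative assumption.

import numpy as np
from skimage.segmentation import slic

def depth_binary_mask(depth, n_segments=200):
    """Over-segment the depth map and mark near segments (smaller mean depth) as
    foreground (1) and far segments (larger mean depth) as background (0)."""
    labels = slic(depth.astype(np.float64), n_segments=n_segments,
                  compactness=0.1, channel_axis=None)
    split = depth.mean()                      # assumption: split the two classes at the global mean depth
    mask = np.zeros(depth.shape, np.uint8)
    for lab in np.unique(labels):
        seg = labels == lab
        if depth[seg].mean() < split:         # second-class sub-image -> foreground
            mask[seg] = 1
    return mask

# Final binary image of the first image: logical AND of the two binary images.
# person_mask = np.logical_and(gmm_mask, depth_mask).astype(np.uint8)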
In the above manner, the GMM processing and the processing of the depth image are combined to determine the sub-image of the first person, so that an accurate sub-image of the first person can be obtained and the cut-out effect is good.
Optionally, before the foreground sub-image is determined as the sub-image corresponding to the first person, the transparency of the pixels at the edge of the foreground sub-image can be determined, and the pixels at the edge can be smoothed according to the transparency.
The edge of the image can be processed by an image smoothing method, so that the transition of transparency between neighbouring edge pixels becomes more natural.
In this way, the edge of the sub-image corresponding to the first person finally obtained by the user is neater, the transition between pixels is smoother, and the image looks better.
Meanwhile after the transparency of the pixel at edge of prospect subgraph has been determined, transparency can also be recorded Into image information, therefore, what user obtained is include transparence information the corresponding subgraph of the first personage, make in user When being synthesized with the corresponding subgraph of the first personage and other images, can according to known transparency come to other images into The matched processing of row, can make the image of synthesis more natural, true to nature.
Referring to Fig. 4, based on the same inventive concept, an embodiment of the present invention provides an electronic device, which may include:
a memory 401 for storing instructions; and
a processor 402 for executing the instructions to:
obtain a face image of a first person in a first image;
determine a first region in the first image according to the face image, the first region indicating the contour of the sub-image of the first person in the first image;
determine, according to the contour of the sub-image of the first person in the first image, whether each pixel included in a second region of the first image is a foreground pixel or a background pixel, wherein the second region includes part of the pixels in the first region and part of the pixels in the remaining area of the first image outside the first region; and
obtain the sub-image corresponding to the first person from the first image according to the determined foreground pixels, the determined background pixels and the depth information of the first image.
Optionally, the processor 402 is configured to:
determine a third region of the first image that contains the face image, and determine the circumscribed circle of the third region;
draw two parallel lines in the first image to obtain two intersection points formed by the two parallel lines and the boundary of the third region, wherein the spacing between the two parallel lines equals the distance between the two eye images contained in the face image, each of the two parallel lines is perpendicular to the line connecting the two eye images, and the two intersection points lie in a first direction pointed to by the two eye images, the first direction being the direction from the face image towards the body image in the first image;
draw a circle whose radius equals the diameter of the circumscribed circle, to obtain a first circle, wherein the two intersection points lie on the circumference of the first circle and the centre of the first circle lies in the first direction; and
determine the whole region covered by the circumscribed circle and the first circle as the first region.
Optionally, the processor 402 is configured to:
determine the second region in the first image according to the contour; and
perform Gaussian mixture model (GMM) processing on the pixels included in the second region according to known foreground pixels and known background pixels, so as to determine whether each pixel in the second region is a foreground pixel or a background pixel, wherein pixels that are inside the first region and do not belong to the second region are taken as the known foreground pixels, and pixels that are outside the first region and do not belong to the second region are taken as the known background pixels.
Optionally, the processor 402 is further configured to:
perform over-segmentation on the depth image corresponding to the first image to determine at least two sub-images contained in the depth image; and
divide the at least two sub-images into first-class sub-images and second-class sub-images, wherein the average depth value of all pixels in a first-class sub-image is greater than the average depth value of all pixels in a second-class sub-image;
wherein obtaining the sub-image corresponding to the first person from the first image according to the determined foreground pixels, the determined background pixels and the depth information of the first image comprises:
determining a foreground sub-image and a background sub-image in the first image according to the foreground pixels, the background pixels, the first-class sub-images and the second-class sub-images; and
determining the foreground sub-image as the sub-image of the first person.
Optionally, the processor 402 is further configured to:
determine the transparency of the pixels at the edge of the foreground sub-image; and
smooth the pixels at the edge according to the transparency.
Referring to Fig. 5, based on the same inventive concept, an embodiment of the present invention provides another electronic device, which may include:
a first obtaining module 501, configured to obtain a face image of a first person in a first image;
a first determining module 502, configured to determine a first region in the first image according to the face image, the first region indicating the contour of the sub-image of the first person in the first image;
a second determining module 503, configured to determine, according to the contour of the sub-image of the first person in the first image, whether each pixel included in a second region of the first image is a foreground pixel or a background pixel, wherein the second region includes part of the pixels in the first region and part of the pixels in the remaining area of the first image outside the first region; and
a second obtaining module 504, configured to obtain the sub-image corresponding to the first person from the first image according to the determined foreground pixels, the determined background pixels and the depth information of the first image.
Optionally, the first determining module 502 is configured to:
determine a third region of the first image that contains the face image, and determine the circumscribed circle of the third region;
draw two parallel lines in the first image to obtain two intersection points formed by the two parallel lines and the boundary of the third region, wherein the spacing between the two parallel lines equals the distance between the two eye images contained in the face image, each of the two parallel lines is perpendicular to the line connecting the two eye images, and the two intersection points lie in a first direction pointed to by the two eye images, the first direction being the direction from the face image towards the body image in the first image;
draw a circle whose radius equals the diameter of the circumscribed circle, to obtain a first circle, wherein the two intersection points lie on the circumference of the first circle and the centre of the first circle lies in the first direction; and
determine the whole region covered by the circumscribed circle and the first circle as the first region.
Optionally, the second determining module 503 is configured to:
determine the second region in the first image according to the contour; and
perform Gaussian mixture model (GMM) processing on the pixels included in the second region according to known foreground pixels and known background pixels, so as to determine whether each pixel in the second region is a foreground pixel or a background pixel, wherein pixels that are inside the first region and do not belong to the second region are taken as the known foreground pixels, and pixels that are outside the first region and do not belong to the second region are taken as the known background pixels.
Optionally, the electronic device further comprises:
an over-segmentation module, configured to perform over-segmentation on the depth image corresponding to the first image to determine at least two sub-images contained in the depth image; and
a dividing module, configured to divide the at least two sub-images into first-class sub-images and second-class sub-images, wherein the average depth value of all pixels in a first-class sub-image is greater than the average depth value of all pixels in a second-class sub-image;
wherein the second obtaining module 504 is configured to:
determine a foreground sub-image and a background sub-image in the first image according to the foreground pixels, the background pixels, the first-class sub-images and the second-class sub-images; and
determine the foreground sub-image as the sub-image of the first person.
Optionally, the electronic device further comprises:
a third determining module, configured to determine the transparency of the pixels at the edge of the foreground sub-image; and
a smoothing module, configured to smooth the pixels at the edge according to the transparency.
In the embodiments of the present invention, the contour of the sub-image of the first person is determined according to the face image of the first person, and the sub-image of the first person is then obtained by combining the pixels in the second region with the depth information of the first image. In this way the sub-image corresponding to the first person can be obtained from the first image more accurately and conveniently, the whole process can be completed automatically by the electronic device, and the steps the user needs to perform are few. The electronic device therefore does not need to respond to repeated user operations, which reduces its burden and also improves its image processing capability.
In the several embodiments provided in the present invention, it should be understood that the disclosed device and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative: the division into modules or units is only a division by logical function, and in actual implementation there may be other divisions; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be implemented through some interfaces, and the indirect couplings or communication connections between devices or units may be electrical, mechanical or in other forms.
The units described as separate parts may or may not be physically separate, and the parts shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application, in essence or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) or a processor to perform all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a ROM (Read-Only Memory), a RAM (Random Access Memory), a magnetic disk or an optical disc.
Specifically, the computer program instructions corresponding to the image processing method in the embodiments of the present invention may be stored on a storage medium such as an optical disc, a hard disk or a USB flash drive. When the computer program instructions in the storage medium corresponding to the image processing method are read or executed by an electronic device, the following steps are included:
obtaining a face image of a first person in a first image;
determining a first region in the first image according to the face image, the first region indicating the contour of the sub-image of the first person in the first image;
determining, according to the contour of the sub-image of the first person in the first image, whether each pixel included in a second region of the first image is a foreground pixel or a background pixel, wherein the second region includes part of the pixels in the first region and part of the pixels in the remaining area of the first image outside the first region;
obtaining the sub-image corresponding to the first person from the first image according to the determined foreground pixels, the determined background pixels and the depth information of the first image.
Optionally, the storage medium further stores computer instructions corresponding to the step of determining the first region in the first image according to the face image, which, when executed, include:
determining a third region of the first image that contains the face image, and determining the circumscribed circle of the third region;
drawing two parallel lines in the first image to obtain two intersection points formed by the two parallel lines and the boundary of the third region, wherein the spacing between the two parallel lines equals the distance between the two eye images contained in the face image, each of the two parallel lines is perpendicular to the line connecting the two eye images, and the two intersection points lie in a first direction pointed to by the two eye images, the first direction being the direction from the face image towards the body image in the first image;
drawing a circle whose radius equals the diameter of the circumscribed circle, to obtain a first circle, wherein the two intersection points lie on the circumference of the first circle and the centre of the first circle lies in the first direction;
determining the whole region covered by the circumscribed circle and the first circle as the first region.
Optionally, the storage medium further stores computer instructions corresponding to the step of determining, according to the contour of the sub-image of the first person in the first image, whether each pixel included in the second region of the first image is a foreground pixel or a background pixel, which, when executed, include:
determining the second region in the first image according to the contour;
performing Gaussian mixture model (GMM) processing on the pixels included in the second region according to known foreground pixels and known background pixels, so as to determine whether each pixel in the second region is a foreground pixel or a background pixel, wherein pixels that are inside the first region and do not belong to the second region are taken as the known foreground pixels, and pixels that are outside the first region and do not belong to the second region are taken as the known background pixels.
Optionally, the storage medium further stores other computer instructions which, when executed, include:
performing over-segmentation on the depth image corresponding to the first image to determine at least two sub-images contained in the depth image;
dividing the at least two sub-images into first-class sub-images and second-class sub-images, wherein the average depth value of all pixels in a first-class sub-image is greater than the average depth value of all pixels in a second-class sub-image;
wherein obtaining the sub-image corresponding to the first person from the first image according to the determined foreground pixels, the determined background pixels and the depth information of the first image comprises:
determining a foreground sub-image and a background sub-image in the first image according to the foreground pixels, the background pixels, the first-class sub-images and the second-class sub-images;
determining the foreground sub-image as the sub-image of the first person.
Optionally, the storage medium further stores computer instructions corresponding to the step of determining the foreground sub-image as the sub-image corresponding to the first person, and before these instructions are executed the following steps are further included:
determining the transparency of the pixels at the edge of the foreground sub-image;
smoothing the pixels at the edge according to the transparency.
The above embodiments are only intended to describe the technical solutions of the present invention in detail. The description of these embodiments is merely intended to help understand the method of the present invention and its core idea, and should not be construed as limiting the present invention. Any changes or substitutions readily conceivable by a person skilled in the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention.

Claims (9)

1. An image processing method, comprising:
obtaining a face image of a first person in a first image;
determining a first region in the first image according to the face image, the first region indicating the contour of the sub-image of the first person in the first image;
wherein determining the first region in the first image according to the face image comprises:
determining a third region of the first image that contains the face image, and determining the circumscribed circle of the third region;
drawing two parallel lines in the first image to obtain two intersection points formed by the two parallel lines and the boundary of the third region, wherein the spacing between the two parallel lines equals the distance between the two eye images contained in the face image, each of the two parallel lines is perpendicular to the line connecting the two eye images, and the two intersection points lie in a first direction pointed to by the two eye images, the first direction being the direction from the face image towards the body image in the first image;
drawing a circle whose radius equals the diameter of the circumscribed circle, to obtain a first circle, wherein the two intersection points lie on the circumference of the first circle and the centre of the first circle lies in the first direction;
determining the whole region covered by the circumscribed circle and the first circle as the first region;
determining, according to the contour of the sub-image of the first person in the first image, whether each pixel included in a second region of the first image is a foreground pixel or a background pixel, wherein the second region includes part of the pixels in the first region and part of the pixels in the remaining area of the first image outside the first region; and
obtaining the sub-image corresponding to the first person from the first image according to the determined foreground pixels, the determined background pixels and depth information of the first image.
2. The method according to claim 1, wherein determining, according to the contour of the sub-image of the first person in the first image, whether each pixel included in the second region of the first image is a foreground pixel or a background pixel comprises:
determining the second region in the first image according to the contour; and
performing Gaussian mixture model (GMM) processing on the pixels included in the second region according to known foreground pixels and known background pixels, so as to determine whether each pixel in the second region is a foreground pixel or a background pixel, wherein pixels that are inside the first region and do not belong to the second region are taken as the known foreground pixels, and pixels that are outside the first region and do not belong to the second region are taken as the known background pixels.
3. The method according to claim 2, further comprising:
performing over-segmentation on the depth image corresponding to the first image to determine at least two sub-images contained in the depth image; and
dividing the at least two sub-images into first-class sub-images and second-class sub-images, wherein the average depth value of all pixels in a first-class sub-image is greater than the average depth value of all pixels in a second-class sub-image;
wherein obtaining the sub-image corresponding to the first person from the first image according to the determined foreground pixels, the determined background pixels and the depth information of the first image comprises:
determining a foreground sub-image and a background sub-image in the first image according to the foreground pixels, the background pixels, the first-class sub-images and the second-class sub-images; and
determining the foreground sub-image as the sub-image of the first person.
4. The method according to any one of claims 1 to 3, further comprising, before determining the foreground sub-image as the sub-image corresponding to the first person:
determining the transparency of the pixels at the edge of the foreground sub-image; and
smoothing the pixels at the edge according to the transparency.
5. An electronic device, comprising:
a memory for storing instructions; and
a processor for executing the instructions to:
obtain a face image of a first person in a first image;
determine a first region in the first image according to the face image, the first region indicating the contour of the sub-image of the first person in the first image;
wherein the processor is configured to:
determine a third region of the first image that contains the face image, and determine the circumscribed circle of the third region;
draw two parallel lines in the first image to obtain two intersection points formed by the two parallel lines and the boundary of the third region, wherein the spacing between the two parallel lines equals the distance between the two eye images contained in the face image, each of the two parallel lines is perpendicular to the line connecting the two eye images, and the two intersection points lie in a first direction pointed to by the two eye images, the first direction being the direction from the face image towards the body image in the first image;
draw a circle whose radius equals the diameter of the circumscribed circle, to obtain a first circle, wherein the two intersection points lie on the circumference of the first circle and the centre of the first circle lies in the first direction;
determine the whole region covered by the circumscribed circle and the first circle as the first region;
determine, according to the contour of the sub-image of the first person in the first image, whether each pixel included in a second region of the first image is a foreground pixel or a background pixel, wherein the second region includes part of the pixels in the first region and part of the pixels in the remaining area of the first image outside the first region; and
obtain the sub-image corresponding to the first person from the first image according to the determined foreground pixels, the determined background pixels and depth information of the first image.
6. The electronic device according to claim 5, wherein the processor is configured to:
determine the second region in the first image according to the contour; and
perform Gaussian mixture model (GMM) processing on the pixels included in the second region according to known foreground pixels and known background pixels, so as to determine whether each pixel in the second region is a foreground pixel or a background pixel, wherein pixels that are inside the first region and do not belong to the second region are taken as the known foreground pixels, and pixels that are outside the first region and do not belong to the second region are taken as the known background pixels.
7. The electronic device according to claim 6, wherein the processor is further configured to:
perform over-segmentation on the depth image corresponding to the first image to determine at least two sub-images contained in the depth image; and
divide the at least two sub-images into first-class sub-images and second-class sub-images, wherein the average depth value of all pixels in a first-class sub-image is greater than the average depth value of all pixels in a second-class sub-image;
wherein obtaining the sub-image corresponding to the first person from the first image according to the determined foreground pixels, the determined background pixels and the depth information of the first image comprises:
determining a foreground sub-image and a background sub-image in the first image according to the foreground pixels, the background pixels, the first-class sub-images and the second-class sub-images; and
determining the foreground sub-image as the sub-image of the first person.
8. The electronic device according to any one of claims 5 to 7, wherein the processor is further configured to:
determine the transparency of the pixels at the edge of the foreground sub-image; and
smooth the pixels at the edge according to the transparency.
9. An electronic equipment, comprising:
A first obtaining module, configured to obtain a facial image of a first personage in a first image;
A first determining module, configured to determine a first area in the first image according to the facial image; the first area is used to indicate the profile of the subgraph of the first personage in the first image;
Wherein determining the first area in the first image according to the facial image comprises:
Determining a third area in the first image that includes the facial image, and determining the circumscribed circle of the third area;
Drawing two parallel lines in the first image and obtaining two intersection points formed by the two parallel lines and the boundary of the third area; wherein the spacing between the two parallel lines is the distance between the two eye images included in the facial image, the two parallel lines are each perpendicular to the line between the two eye images, and the two intersection points are located in a first direction pointed to by the two eye images, the first direction being the direction pointing from the facial image toward the body image in the first image;
Drawing a circle with the diameter of the circumscribed circle as its radius to obtain a first circle; wherein the two intersection points are located on the circumference of the first circle, and the center of the first circle is located in the first direction;
Determining that the whole region covered by the circumscribed circle and the first circle is the first area;
A second determining module, configured to determine, according to the profile of the subgraph of the first personage in the first image, whether each pixel included in the second area of the first image is a foreground pixel point or a background pixel point; wherein the second area includes part of the pixels in the first area and part of the pixels in the remaining area of the first image other than the first area;
A second obtaining module, configured to obtain the subgraph corresponding to the first personage from the first image according to the determined foreground pixel points, the determined background pixel points and depth information of the first image.
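The circumscribed-circle construction recited in the first determining module above can be visualized with the sketch below, which builds a first-area mask as the union of the face region's circumscribed circle and a second, larger circle shifted toward the body. Treating the detected face rectangle as the third area, and placing the first circle's center one circumscribed-circle diameter below the face center, are assumptions made for illustration; the eye-spacing and intersection-point details of the claim are simplified away.

```python
# Rough first-area mask: union of the face region's circumscribed circle and a
# larger circle shifted toward the body.
import numpy as np

def first_area_mask(image_shape, face_box):
    """face_box = (x, y, w, h) of the detected facial image; returns a bool mask."""
    h_img, w_img = image_shape[:2]
    x, y, w, h = face_box

    # circumscribed circle of the (rectangular) face region
    cx, cy = x + w / 2.0, y + h / 2.0
    r_circ = 0.5 * np.hypot(w, h)

    # first circle: radius equals the circumscribed circle's diameter (as recited);
    # the center offset toward the body is an assumed placement
    r_first = 2.0 * r_circ
    cx2, cy2 = cx, cy + 2.0 * r_circ

    yy, xx = np.mgrid[0:h_img, 0:w_img]
    in_circumscribed = (xx - cx) ** 2 + (yy - cy) ** 2 <= r_circ ** 2
    in_first_circle = (xx - cx2) ** 2 + (yy - cy2) ** 2 <= r_first ** 2
    return in_circumscribed | in_first_circle     # whole region covered by both circles
```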
CN201510997367.XA 2015-12-25 2015-12-25 A kind of image processing method and electronic equipment Active CN105608699B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510997367.XA CN105608699B (en) 2015-12-25 2015-12-25 A kind of image processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN105608699A CN105608699A (en) 2016-05-25
CN105608699B true CN105608699B (en) 2019-03-29

Family

ID=55988615

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510997367.XA Active CN105608699B (en) 2015-12-25 2015-12-25 A kind of image processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN105608699B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106686296A (en) * 2016-11-28 2017-05-17 努比亚技术有限公司 Method for realizing light painting, device and photographing apparatus
CN107016348B (en) * 2017-03-09 2022-11-22 Oppo广东移动通信有限公司 Face detection method and device combined with depth information and electronic device
CN106997457B (en) * 2017-03-09 2020-09-11 Oppo广东移动通信有限公司 Figure limb identification method, figure limb identification device and electronic device
CN106991688A (en) * 2017-03-09 2017-07-28 广东欧珀移动通信有限公司 Human body tracing method, human body tracking device and electronic installation
CN107231529A (en) * 2017-06-30 2017-10-03 努比亚技术有限公司 Image processing method, mobile terminal and storage medium
CN107707839A (en) * 2017-09-11 2018-02-16 广东欧珀移动通信有限公司 Image processing method and device
CN107767333B (en) * 2017-10-27 2021-08-10 努比亚技术有限公司 Method and equipment for beautifying and photographing and computer storage medium
CN110378276B (en) * 2019-07-16 2021-11-30 顺丰科技有限公司 Vehicle state acquisition method, device, equipment and storage medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1734499A (en) * 2004-08-09 2006-02-15 微软公司 Border matting by dynamic programming
CN1926851A (en) * 2004-01-16 2007-03-07 索尼电脑娱乐公司 Method and apparatus for optimizing capture device settings through depth information
CN101673400A (en) * 2008-09-08 2010-03-17 索尼株式会社 Image processing apparatus, method, and program
WO2010030712A1 (en) * 2008-09-09 2010-03-18 Citrix Systems, Inc. Methods and systems for per pixel alpha-blending of a parent window and a portion of a background image
CN101777180A (en) * 2009-12-23 2010-07-14 中国科学院自动化研究所 Complex background real-time alternating method based on background modeling and energy minimization
CN102113015A (en) * 2008-07-28 2011-06-29 皇家飞利浦电子股份有限公司 Use of inpainting techniques for image correction
CN103581640A (en) * 2012-07-31 2014-02-12 乐金显示有限公司 Image data processing method and stereoscopic image display using the same
CN103856617A (en) * 2012-12-03 2014-06-11 联想(北京)有限公司 Photographing method and user terminal
CN103873834A (en) * 2012-12-10 2014-06-18 联想(北京)有限公司 Image acquisition method and corresponding image acquisition unit
CN103871014A (en) * 2012-12-17 2014-06-18 联想(北京)有限公司 Image color changing method and device
CN103973977A (en) * 2014-04-15 2014-08-06 联想(北京)有限公司 Blurring processing method and device for preview interface and electronic equipment

Also Published As

Publication number Publication date
CN105608699A (en) 2016-05-25

Similar Documents

Publication Publication Date Title
CN105608699B (en) A kind of image processing method and electronic equipment
JP6956252B2 (en) Facial expression synthesis methods, devices, electronic devices and computer programs
CN108009465B (en) Face recognition method and device
AU2015402322B2 (en) System and method for virtual clothes fitting based on video augmented reality in mobile phone
CN108830892B (en) Face image processing method and device, electronic equipment and computer readable storage medium
CN111814520A (en) Skin type detection method, skin type grade classification method, and skin type detection device
CN109635783A (en) Video monitoring method, device, terminal and medium
US10255487B2 (en) Emotion estimation apparatus using facial images of target individual, emotion estimation method, and non-transitory computer readable medium
CN108629339A (en) Image processing method and related product
BR112016017262B1 (en) METHOD FOR SEARCHING FOR OBJECTS AND TERMINAL ATTACHED COMMUNICATIVELY TO A SERVER.
CN107610149B (en) Image segmentation result edge optimization processing method and device and computing equipment
JP2015504220A5 (en)
CN112241667A (en) Image detection method, device, equipment and storage medium
CN106096043A (en) A kind of photographic method and mobile terminal
CN107808372B (en) Image crossing processing method and device, computing equipment and computer storage medium
CN104952093B (en) Virtual hair colouring methods and device
CN105229700B (en) Device and method for extracting peak figure picture from multiple continuously shot images
CN109791703A (en) Three dimensional user experience is generated based on two-dimensional medium content
CN106558042A (en) A kind of method and apparatus that crucial point location is carried out to image
CN107945202B (en) Image segmentation method and device based on adaptive threshold value and computing equipment
CN105096355B (en) Image processing method and system
CN105631938B (en) Image processing method and electronic equipment
KR20210049648A (en) Image processing system and method of providing realistic photo image by synthesizing object and background image
CN112199975A (en) Identity verification method and device based on human face features
CN106303161B (en) A kind of image processing method and electronic equipment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant