CN108694719A - image output method and device - Google Patents
- Publication number
- CN108694719A (application CN201710217139.5A)
- Authority
- CN
- China
- Prior art keywords
- pixel
- image
- target image
- value
- human body
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20112—Image segmentation details
- G06T2207/20132—Image cropping
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
Abstract
This application discloses an image output method and device. One embodiment of the method includes: segmenting a target image based on a background point set and a foreground point set of the target image, and determining the pixel value of each pixel in the generated first mask image; importing the pixel value of each pixel of the target image into a pre-generated skin likelihood value detection model for matching to obtain a likelihood value for each pixel, used to determine which pixels belong to a skin region; performing superpixel segmentation on the target image to generate a target image containing superpixels; determining the pixel value of each pixel in a superpixel based on the number of pixels in the superpixel that belong to the skin region, thereby determining the pixel value of each pixel in a second mask image of the target image; and outputting the human body image in the target image based on the pixel values of the pixels in the first mask image and the second mask image. This embodiment provides a more accurate and reliable way of extracting a human body image.
Description
Technical field
This application relates to the field of computer technology, specifically to the field of image processing technology, and in particular to an image output method and device.
Background art
Accurately separating the foreground from an image or video (also known as "matting") is a key technology in image processing, video editing, and film production. It has more than twenty years of research history and has been extensively studied and applied. In recent years, research has mainly focused on matting natural images (images whose background is unconstrained). Matting a natural image requires additional information, which is typically obtained through interaction with the user in order to build constraints. This approach requires considerable manual operation, and for some complex images it is inefficient and inaccurate. Meanwhile, when an image segmentation algorithm is used to divide the foreground region and background region of an image, extraneous factors such as illumination and viewing angle make the matting result inaccurate.
Summary of the invention
The purpose of this application is to propose an improved image output method and device to solve the technical problems mentioned in the background section above.
In a first aspect, an embodiment of the present application provides an image output method, the method including: segmenting a target image containing a human body image based on a background point set and a foreground point set of the target image, generating a first mask image containing a foreground region and a background region, and determining the pixel value of each pixel in the first mask image, where a background point is a pixel belonging to the background of the target image and a foreground point is a pixel belonging to the foreground of the target image; importing the pixel value of each pixel of the target image into a pre-generated skin likelihood value detection model for matching to obtain a likelihood value that each pixel belongs to a skin region, and determining, based on the likelihood values, the pixels that belong to the skin region, where the skin likelihood value detection model characterizes the correspondence between pixel values and likelihood values; performing superpixel segmentation on the target image to generate a target image containing superpixels; determining the pixel value of each pixel in a superpixel based on the number of pixels in the superpixel that belong to the skin region, so as to determine the pixel value of each pixel in a pre-established second mask image of the target image; and outputting the human body image in the target image based on the pixel values of the pixels in the first mask image and the second mask image.
In some embodiments, before segmenting the target image based on the background point set and the foreground point set of the target image containing a human body image, the method further includes: determining the contour of the human body image in the target image, obtaining at least one pixel between the contour and the top edge of the target image, and using the at least one pixel as the background point set of the target image; and obtaining at least one pixel on the face in the target image, and using the at least one pixel on the face as the foreground point set of the target image.
In some embodiments, determining the contour of the human body image in the target image includes: detecting the edges of the human body image in the target image and generating an edge feature image containing the contour lines of the human body image; performing a morphological closing operation on the edge feature image to convert discontinuous contour lines in the edge feature image into continuous contour lines; determining the contour lines of the target image after the closing operation; and filling the target image containing the closed contour lines using an image filling algorithm, and obtaining the contour of the human body image from the filled target image.
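The contour-extraction sequence above (edge detection, closing, filling) can be sketched with standard binary morphology. This is a minimal illustration using SciPy routines rather than the patent's unspecified implementation; the structuring-element size is an assumed parameter:

```python
import numpy as np
from scipy import ndimage

def solid_silhouette(edge_image):
    """Close gaps in a binary edge map, then fill the interior so the
    human-body outline becomes a solid region (a sketch of the
    closing + filling steps; the 5x5 structuring element is an assumption)."""
    edges = edge_image > 0
    closed = ndimage.binary_closing(edges, structure=np.ones((5, 5)))
    filled = ndimage.binary_fill_holes(closed)
    return filled.astype(np.uint8) * 255
```

A broken rectangular outline, for example, is bridged by the closing step and then filled into a solid mask from which the outer contour can be read off.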
In some embodiments, obtaining at least one pixel between the contour and the top edge of the target image and using the at least one pixel as the background point set of the target image includes: determining the midpoint of the line connecting the point on the contour closest to the top edge of the target image and any point on the top edge, and using the set of points on the line segment that passes through this midpoint and is parallel to the top edge as the background point set of the target image.
In some embodiments, obtaining at least one pixel on the face in the target image and using the at least one pixel on the face as the foreground point set of the target image includes: obtaining two pixels on the face in the target image, and using the set of points on the line connecting the two pixels as the foreground point set of the target image.
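The two seed-selection rules above reduce to plain array geometry. The helper below is a hypothetical sketch: the two face pixels are assumed to be the eyes (as in a later embodiment), and "any point on the top edge" is simplified to a full-width horizontal segment — the midpoint's row is always half the contour-top row, whichever edge point is chosen:

```python
import numpy as np

def seed_point_sets(mask, eye_left, eye_right):
    """Background points: a horizontal segment halfway between the body
    contour and the top edge. Foreground points: the segment joining
    two face pixels (assumed here to be the two eyes)."""
    ys, xs = np.nonzero(mask)            # mask: solid body silhouette
    top_of_contour = ys.min()            # contour point nearest the top edge
    mid_y = top_of_contour // 2          # midpoint between contour and edge (y = 0)
    width = mask.shape[1]
    background = [(mid_y, x) for x in range(width)]
    # Foreground: evenly spaced samples along the eye-to-eye line.
    n = max(abs(eye_right[1] - eye_left[1]), 1) + 1
    rows = np.linspace(eye_left[0], eye_right[0], n).round().astype(int)
    cols = np.linspace(eye_left[1], eye_right[1], n).round().astype(int)
    foreground = list(zip(rows.tolist(), cols.tolist()))
    return background, foreground
```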
In some embodiments, outputting the human body image in the target image includes: generating a human body image to be output; constructing a rectangle with a preset side length centered on each point on the contour of the human body image to be output; smoothing each pixel in each rectangle using Gaussian filtering to obtain a smoothed human body image; and outputting the smoothed human body image.
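The contour-smoothing embodiment lends itself to a short sketch: blur the image once, then copy back only the square patches centered on contour points, so the interior stays sharp while the jagged edge is softened. The side length (2·half + 1) and sigma are assumed values:

```python
import numpy as np
from scipy import ndimage

def smooth_along_contour(image, contour_points, half=4, sigma=1.5):
    """Gaussian-filter only the square patches centered on contour
    points, leaving the rest of the image untouched (a sketch; patch
    size and sigma are assumptions)."""
    out = image.astype(float)
    blurred = ndimage.gaussian_filter(out, sigma=sigma)
    h, w = image.shape[:2]
    for (r, c) in contour_points:
        r0, r1 = max(r - half, 0), min(r + half + 1, h)
        c0, c1 = max(c - half, 0), min(c + half + 1, w)
        out[r0:r1, c0:c1] = blurred[r0:r1, c0:c1]
    return out
```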
In some embodiments, determining the pixel value of each pixel in a superpixel based on the number of pixels in the superpixel that belong to the skin region includes: obtaining the total number of pixels in the superpixel; determining the ratio of the number of pixels in the superpixel that belong to the skin region to the total number, and determining whether the ratio exceeds a preset ratio threshold; and if so, setting the pixel value of each pixel in the superpixel to the pixel value of a skin-region pixel.
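The per-superpixel vote can be sketched directly: if the fraction of skin pixels inside a superpixel exceeds the threshold, the whole superpixel is marked as skin in the second mask. The 0.5 threshold is an assumed value, not one fixed by the patent:

```python
import numpy as np

def superpixel_skin_mask(labels, skin_mask, ratio_threshold=0.5):
    """For each superpixel label, compare the fraction of skin pixels
    against a preset threshold; superpixels over the threshold become
    255 in the second mask, all others 0."""
    out = np.zeros_like(skin_mask, dtype=np.uint8)
    for label in np.unique(labels):
        member = labels == label
        total = member.sum()
        skin = np.logical_and(member, skin_mask).sum()
        if total and skin / total > ratio_threshold:
            out[member] = 255
    return out
```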
In some embodiments, the pixel value of the pixels of the foreground region in the first mask image is a preset value, and the pixel value of the pixels of the skin region in the second mask image is the preset value; and outputting the human body image in the target image based on the pixel values of the pixels in the first mask image and the second mask image includes: obtaining the pixels whose pixel value in the first mask image is the preset value and the pixels whose pixel value in the second mask image is the preset value, and generating the pixel set of the human body image; extracting, from the target image, the region where the pixels in the pixel set are located, and using this region as the human body image; and outputting the human body image.
In a second aspect, an embodiment of the present application provides an image output device, the device including: a first determination unit, configured to segment a target image containing a human body image based on a background point set and a foreground point set of the target image, generate a first mask image containing a foreground region and a background region, and determine the pixel value of each pixel in the first mask image, where a background point is a pixel belonging to the background of the target image and a foreground point is a pixel belonging to the foreground of the target image; a second determination unit, configured to import the pixel value of each pixel of the target image into a pre-generated skin likelihood value detection model for matching to obtain a likelihood value that each pixel belongs to a skin region, and determine, based on the likelihood values, the pixels that belong to the skin region, where the skin likelihood value detection model characterizes the correspondence between pixel values and likelihood values; a generation unit, configured to perform superpixel segmentation on the target image and generate a target image containing superpixels; a third determination unit, configured to determine the pixel value of each pixel in a superpixel based on the number of pixels in the superpixel that belong to the skin region, so as to determine the pixel value of each pixel in a pre-established second mask image of the target image; and an output unit, configured to output the human body image in the target image based on the pixel values of the pixels in the first mask image and the second mask image.
In some embodiments, the device further includes: a fourth determination unit, configured to determine the contour of the human body image in the target image, obtain at least one pixel between the contour and the top edge of the target image, and use the at least one pixel as the background point set of the target image; and a fifth determination unit, configured to obtain at least one pixel on the face in the target image and use the at least one pixel on the face as the foreground point set of the target image.
In some embodiments, the fourth determination unit includes: a detection module, configured to detect the edges of the human body image in the target image and generate an edge feature image containing the contour lines of the human body image; a closing-operation module, configured to perform a closing operation on the edge feature image and convert discontinuous contour lines in the edge feature image into continuous contour lines; a determination module, configured to determine the contour lines of the target image after the closing operation; and a filling module, configured to fill the target image containing the closed contour lines using an image filling algorithm and obtain the contour of the human body image from the filled target image.
In some embodiments, the fourth determination unit is further configured to: determine the midpoint of the line connecting the point on the contour closest to the top edge of the target image and any point on the top edge, and use the set of points on the line segment that passes through this midpoint and is parallel to the top edge as the background point set of the target image.
In some embodiments, the fifth determination unit is further configured to: obtain the pixels of the two eyes on the face in the target image, and use the set of points on the line connecting the two pixels as the foreground point set of the target image.
In some embodiments, the output unit includes: a first generation module, configured to generate a human body image to be output; a construction module, configured to construct a rectangle with a preset side length centered on each point on the contour of the human body image to be output; a smoothing module, configured to smooth each pixel in each rectangle using Gaussian filtering to obtain a smoothed human body image; and a first output module, configured to output the smoothed human body image.
In some embodiments, the third determination unit includes: an acquisition module, configured to obtain the total number of pixels in a superpixel; a determination module, configured to determine the ratio of the number of pixels in the superpixel that belong to the skin region to the total number, and determine whether the ratio exceeds a preset ratio threshold; and a setting module, configured to, if so, set the pixel value of each pixel in the superpixel to the pixel value of a skin-region pixel.
In some embodiments, the pixel value of the pixels of the foreground region in the first mask image is a preset value, and the pixel value of the pixels of the skin region in the second mask image is the preset value; and the output unit further includes: a second generation module, configured to obtain the pixels whose pixel value in the first mask image is the preset value and the pixels whose pixel value in the second mask image is the preset value, and generate the pixel set of the human body image; an extraction module, configured to extract, from the target image, the region where the pixels in the pixel set are located and use this region as the human body image; and a second output module, configured to output the human body image.
In a third aspect, an embodiment of the present application further provides a server, including: one or more processors; and a storage device for storing one or more programs, where, when the one or more programs are executed by the one or more processors, the one or more processors implement the image output method provided by the present application.
In a fourth aspect, an embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored, where the program, when executed by a processor, implements the image output method provided by the present application.
The image output method and device provided by the present application first determine the pixel value of each pixel in a first mask image based on the foreground point set and the background point set of a target image containing a human body image, then determine the pixel value of each pixel in a second mask image through algorithms such as skin color detection and superpixel segmentation, and finally output the human body image in the target image based on the pixel values of the pixels in the first mask image and the second mask image. By combining foreground/background segmentation with skin region detection, a better segmentation result for the human body image can be obtained, providing a more accurate and reliable way of extracting a human body image.
Description of the drawings
Other features, objects, and advantages of the present application will become more apparent by reading the detailed description of non-restrictive embodiments with reference to the following drawings:
Fig. 1 is an exemplary system architecture diagram to which the present application can be applied;
Fig. 2 is a flow chart of one embodiment of the image output method according to the present application;
Fig. 3 is a flow chart of another embodiment of the image output method according to the present application;
Fig. 4A is a schematic diagram of a target image containing a human body image, according to the image output method of the present application;
Fig. 4B is a schematic diagram of an edge feature image, according to the image output method of the present application;
Fig. 4C is a schematic diagram of an edge feature image containing continuous contour lines, according to the image output method of the present application;
Fig. 4D is a schematic diagram of an edge feature image containing the largest contour line, according to the image output method of the present application;
Fig. 4E is a schematic diagram of a target image after image filling, according to the image output method of the present application;
Fig. 4F is a schematic diagram of a target image containing the contour of a human body image, according to the image output method of the present application;
Fig. 4G is a schematic diagram of a target image containing a background point set, according to the image output method of the present application;
Fig. 4H is a schematic diagram of a target image containing a foreground point set, according to the image output method of the present application;
Fig. 4I is a schematic diagram of a first mask image, according to the image output method of the present application;
Fig. 4J is a schematic diagram of a target image with skin regions and non-skin regions marked, according to the image output method of the present application;
Fig. 4K is a schematic diagram of a target image containing superpixels, according to the image output method of the present application;
Fig. 4L is a schematic diagram of a second mask image, according to the image output method of the present application;
Fig. 4M is a schematic diagram of a human body image to be output, according to the image output method of the present application;
Fig. 4N is a schematic diagram of an image with a jagged edge, according to the image output method of the present application;
Fig. 4O is a schematic diagram of an image with a smoothed edge, according to the image output method of the present application;
Fig. 4P is a schematic diagram of a smoothed human body image, according to the image output method of the present application;
Fig. 5 is a structural schematic diagram of one embodiment of the image output device according to the present application;
Fig. 6 is a structural schematic diagram of a computer system suitable for implementing the server of the embodiments of the present application.
Detailed description
The present application is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the related invention and do not limit the invention. It should also be noted that, for ease of description, only the parts related to the invention are shown in the drawings.
It should be noted that, in the absence of conflict, the embodiments of the present application and the features in the embodiments can be combined with each other. The present application is described in detail below with reference to the drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the image output method or image output device of the present application can be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 1011 and 1012, a network 102, a server 103, and an information display device 104. The network 102 provides the medium for communication links between the terminal devices 1011 and 1012 and the server 103, and may include various connection types, such as wired or wireless communication links, or fiber optic cables.
The server 103 can interact with the terminal devices 1011 and 1012 through the network 102 to send or receive information such as target images; the server 103 can also interact with the local information display device 104 to output images. Various client applications, such as camera applications and image processing applications, can be installed on the terminal devices 1011 and 1012.
The terminal devices 1011 and 1012 can be various electronic devices that have a display screen and a camera and support information interaction, including but not limited to smart phones, tablet computers, laptop computers, and desktop computers.
The server 103 can be a server that provides various services, for example a background server that performs image processing on an acquired target image containing a human body image. The background server can perform processing such as image segmentation, skin color detection, and superpixel segmentation on the acquired target image, output the processing result (for example, the human body image), and present it on the terminal devices 1011 and 1012 or on the local information display device 104.
The information display device 104 can be any of various electronic devices that have a display screen and interact locally with the server 103, and can display the images output by the server 103.
It should be noted that the image output method provided by the embodiments of the present application is generally executed by the server 103; accordingly, the image output device is generally arranged in the server 103.
It should be understood that the numbers of terminal devices, networks, servers, and information display devices in Fig. 1 are merely illustrative. Any number of terminal devices, networks, servers, and information display devices may be provided according to implementation needs.
Continuing to refer to Fig. 2, a flow 200 of one embodiment of the image output method according to the present application is shown. The image output method includes the following steps:
Step 201: segment the target image based on the background point set and the foreground point set of the target image containing a human body image, generate a first mask image containing a foreground region and a background region, and determine the pixel value of each pixel in the first mask image.
In this embodiment, the electronic device on which the image output method runs (for example, the server shown in Fig. 1) can first obtain the background point set and the foreground point set of the target image containing a human body image. The human body image contained in the target image can be a face image, an upper-body image containing a skin region, or a whole-body image containing a skin region. Every image can be divided into foreground and background: a background point in the background point set is a pixel belonging to the background of the target image, and a foreground point in the foreground point set is a pixel belonging to the foreground of the target image. Afterwards, the target image can be segmented using an image segmentation algorithm based on the background point set and the foreground point set.
As an example, the image can be segmented using the lazy snapping algorithm, an interactive image segmentation method. Its basic practice is: using the foreground point set and the background point set, build color models of the foreground and the background, and then use the graph cut algorithm to perform energy optimization on a Markov random field established from the image, so as to determine the class of each pixel in the image, i.e., whether the pixel belongs to the foreground region or the background region, thereby dividing the image into foreground and background regions. The graph cut algorithm judges whether a pixel belongs to the image foreground or the image background by defining an energy function. A Markov random field is a random field with the Markov property: when a sequence of random variables is arranged in temporal order, the distribution at time n+1 depends only on the value at time n and is independent of the values of the earlier random variables. A random field mainly includes two elements, sites and a phase space; when every site is randomly assigned a value from the phase space according to a certain distribution, the whole is called a random field.
In this embodiment, after the target image is segmented, a first mask image containing a foreground region and a background region can be generated. In image processing, whether a pixel of an image is processed depends on whether the mask bit of that pixel in the image's mask image is set to shield; if it is, the pixel is not processed. In the first mask image, the pixel value of the background-region pixels can be set to 0 and the pixel value of the foreground-region pixels to 255. When the foreground region needs to be processed, the mask bits of the pixels whose value is 0 (the background-region pixels) can be set to shield; when the background region needs to be processed, the mask bits of the pixels whose value is 255 (the foreground-region pixels) can be set to shield. After the first mask image is generated, the electronic device can determine the pixel value of each pixel in the first mask image according to whether the pixel belongs to the foreground region or the background region.
Step 202: import the pixel value of each pixel of the target image into the pre-generated skin likelihood value detection model for matching to obtain the likelihood value that each pixel belongs to a skin region, and determine, based on the likelihood values, the pixels that belong to the skin region.
In this embodiment, the electronic device can obtain the skin likelihood value detection model from another electronic device, or can pre-establish the model. The skin likelihood value detection model characterizes the correspondence between a pixel's value and the likelihood value that the pixel belongs to a skin region. The steps of establishing the skin likelihood value detection model can include: obtaining a preset number of images containing human skin together with the human skin regions marked out in those images, obtaining the pixel value corresponding to each pixel in the images, and, using big data analysis algorithms and machine learning algorithms, training the skin likelihood value detection model based on the pixel values and the likelihood that each pixel belongs to a skin region.
In this embodiment, the electronic device can first obtain the pixel value of each pixel of the target image, then import the pixel values into the skin likelihood value detection model obtained or pre-generated as described above for matching; afterwards, the likelihood value that each pixel belongs to the skin region can be output. The electronic device can then label the pixels whose likelihood value is greater than 0 as pixels belonging to the skin region, and label the pixels whose likelihood value is less than or equal to 0 as pixels belonging to the non-skin region.
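The patent leaves the model form open. One concrete choice consistent with the "likelihood value > 0 means skin" rule is a color-histogram lookup table whose entries are log-ratios of skin to non-skin color frequencies; that choice, and the bin count, are assumptions for illustration:

```python
import numpy as np

def build_skin_lut(train_pixels, skin_labels, bins=8):
    """Histogram-based skin likelihood model: the likelihood value of a
    quantized RGB color is log(P(color|skin) / P(color|non-skin)).
    The model form (a histogram lookup table) is an assumption."""
    q = (np.asarray(train_pixels) * bins // 256).astype(int)
    skin = np.ones((bins,) * 3)          # Laplace-smoothed counts
    non_skin = np.ones((bins,) * 3)
    for color, is_skin in zip(q, skin_labels):
        (skin if is_skin else non_skin)[tuple(color)] += 1
    return np.log((skin / skin.sum()) / (non_skin / non_skin.sum()))

def skin_mask(image, lut, bins=8):
    """Match every pixel of the target image against the model."""
    q = image.astype(int) * bins // 256
    likelihood = lut[q[..., 0], q[..., 1], q[..., 2]]
    return likelihood > 0                # likelihood > 0 => skin pixel
```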
Step 203: perform superpixel segmentation on the target image to generate a target image containing superpixels.
In this embodiment, the electronic device can perform superpixel segmentation on the target image and generate a target image containing superpixels. As an example, the SLIC (Simple Linear Iterative Clustering) method can be used to perform superpixel segmentation on the target image. The SLIC method converts the image's initial color space into the CIELAB color space (a color model determined by the CIE that theoretically includes all colors visible to the human eye), forms 5-dimensional feature vectors from the color values and the XY coordinates, constructs a distance metric on the 5-dimensional feature vectors, and performs local clustering on the pixels of the image.
A superpixel is an irregular block of pixels with a certain visual meaning, made up of adjacent pixels with similar features such as texture, color, and brightness. Superpixel segmentation uses the similarity of features between pixels to group them, expressing image features with a small number of superpixels instead of a large number of pixels, which greatly reduces the complexity of subsequent image processing; it is therefore commonly used as a preprocessing step for segmentation algorithms, and is widely used in computer vision applications such as image segmentation, pose estimation, object tracking, and object recognition.
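The 5-dimensional clustering described above can be seen in a stripped-down SLIC sketch in plain NumPy. Real SLIC additionally limits each cluster's search window to roughly 2S×2S and enforces label connectivity; both are omitted here, and the compactness weighting is a simplified version of the SLIC distance metric:

```python
import numpy as np

def simple_slic(lab_image, n_side=4, iters=5, compactness=10.0):
    """Minimal SLIC sketch: k-means-style clustering on (L, a, b, y, x)
    feature vectors, with spatial distance weighted by compactness/S."""
    h, w, _ = lab_image.shape
    step = h // n_side
    # Initialize cluster centers on a regular grid.
    ys = np.arange(step // 2, h, step)[:n_side]
    xs = np.arange(step // 2, w, step)[:n_side]
    centers = np.array([[lab_image[y, x, 0], lab_image[y, x, 1],
                         lab_image[y, x, 2], y, x]
                        for y in ys for x in xs], dtype=float)
    yy, xx = np.mgrid[0:h, 0:w]
    feats = np.dstack([lab_image, yy, xx]).reshape(-1, 5).astype(float)
    # Color distances weighted 1; spatial distances weighted m/S, as in SLIC.
    scale = np.array([1.0, 1.0, 1.0, compactness / step, compactness / step])
    for _ in range(iters):
        d = np.linalg.norm((feats[:, None, :] - centers[None, :, :]) * scale,
                           axis=2)
        labels = d.argmin(axis=1)
        for k in range(len(centers)):
            member = feats[labels == k]
            if len(member):
                centers[k] = member.mean(axis=0)
    return labels.reshape(h, w)
```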
Step 204: determine the pixel value of each pixel in each superpixel based on the number of pixels in the superpixel that belong to the skin region, so as to determine the pixel value of each pixel in the pre-established second mask image of the target image.
In this embodiment, after the pixels belonging to the skin region have been determined in step 202 and the target image containing superpixels has been generated in step 203, a second mask image of the same size as the target image can be created. The second mask image contains the superpixels obtained by the segmentation; for each superpixel in the second mask image, the pixel value of each pixel in the superpixel can be determined based on the number of pixels in the superpixel that belong to the skin region, thereby determining the pixel value of each pixel in the second mask image.
Step 205: output the human body image in the target image based on the pixel values of the pixels in the first mask image and the second mask image.
In this embodiment, after the pixel value of each pixel in the first mask image has been determined in step 201 and the pixel value of each pixel in the second mask image has been determined in step 204, the foreground region in the first mask image and the skin region in the second mask image can be determined. The pixels of the foreground region and the pixels of the skin region are then obtained to generate a pixel set; the region where the pixels in the pixel set are located can be extracted from the target image, used as the human body image, and output.
In some optional implementations of the present embodiment, after the first mask image containing the foreground area has been generated in step 201, the above-mentioned electronic device may set the pixel value of the pixels of the foreground area in the first mask image to 255, and set the pixel value of the pixels of the background area in the first mask image to 0. Likewise, after it has been determined in step 204 whether the pixels in each super-pixel of the second mask image take the pixel value of skin-area pixels, the pixel value of the pixels of the skin area in the second mask image may be set to 255, and the pixel value of the pixels of the non-skin area in the second mask image may be set to 0. The electronic device may then obtain the pixels whose value is 255 in the first mask image and the pixels whose value is 255 in the second mask image to generate the pixel set of the human body image; afterwards, the region where the pixels in the pixel set are located may be extracted from the target image, and that region is taken as the human body image and output. If the pixel value of a pixel is 0 in both the first mask image and the second mask image, the corresponding pixel of the output image is set to white; otherwise, it is set to the pixel value of the corresponding pixel in the target image.
The method provided by the above embodiment of the present application combines skin detection, super-pixel segmentation and a foreground/background segmentation algorithm, so that a better human body segmentation result can be obtained, realizing a more accurate and reliable way of extracting the human body image.
With further reference to Fig. 3, a flow 300 of another embodiment of the image output method is illustrated. The flow 300 of the image output method includes the following steps:
Step 301, determine the contour of the human body image in the target image, obtain at least one pixel between the contour and the top edge of the target image, and use the at least one pixel as the background point set of the target image.

In the present embodiment, the electronic device on which the image output method runs (e.g., the server shown in Fig. 1) may first acquire a target image containing a human body image, either input by a user through a terminal or obtained locally, as shown in Fig. 4A; afterwards, it determines the contour or edge of the human body image in the target image; then, it obtains at least one pixel between that contour and the top edge of the target image, and uses the obtained at least one pixel as the background point set of the target image. Every image can be divided into a foreground and a background; a background point in the background point set is a pixel belonging to the background of the target image.
In some optional implementations of the present embodiment, the above-mentioned electronic device may first detect the edges of the human body image in the target image, determine the contour lines of the human body image, and generate an edge feature image containing those contour lines, as shown in Fig. 4B. As an example, the edges of the image may be detected using the Canny edge detection algorithm. Canny is a first-order differential operator detection algorithm that adds two improvements on top of the first-order differential operator: non-maximum suppression and double thresholding. Non-maximum suppression both effectively suppresses multiple responses to the same edge and improves the localization accuracy of the edge, while double thresholding effectively reduces the miss rate of edge detection.
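The double-threshold (hysteresis) step mentioned above can be sketched on its own: pixels above the high threshold are strong edges, and pixels between the two thresholds are kept only if they connect to a strong edge. A minimal NumPy/BFS illustration of just this step, not the full Canny pipeline:

```python
import numpy as np
from collections import deque

def hysteresis(grad, low, high):
    """Double-threshold step of Canny: gradient magnitudes above `high`
    are strong edges; magnitudes in (low, high] are kept only if they are
    8-connected (directly or transitively) to a strong edge."""
    strong = grad > high
    weak = grad > low
    edges = strong.copy()
    q = deque(zip(*np.nonzero(strong)))
    h, w = grad.shape
    while q:
        y, x = q.popleft()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and weak[ny, nx] and not edges[ny, nx]:
                    edges[ny, nx] = True
                    q.append((ny, nx))
    return edges
```

Isolated weak responses are discarded, which is what reduces spurious edges without raising the miss rate.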
In some optional implementations of the present embodiment, after the edge feature image containing the contour lines has been generated, a closing operation may be performed on the edge feature image to convert discontinuous contour lines in the edge feature image into continuous contour lines, as shown in Fig. 4C. The closing operation is the process of performing a dilation operation on the image followed by an erosion operation; it makes contour lines smoother, typically bridges narrow interruptions and long thin gaps, eliminates small holes, and fills breaks in the contour lines. The concrete operation of dilation is: scan each element of the image with a structuring element (generally of size 3*3 or 5*5); the output pixel is set to 1 if any pixel covered by the structuring element is 1, and to 0 otherwise. The concrete operation of erosion is: scan each element of the image with a structuring element; the output pixel is set to 1 only if every pixel covered by the structuring element is 1, and to 0 otherwise.
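The closing operation described above (dilation followed by erosion, with an all-ones structuring element) can be sketched directly in NumPy. A hedged illustration with hypothetical function names, using simple zero padding at the borders:

```python
import numpy as np

def dilate(img, k=3):
    # Output pixel is 1 if ANY pixel under the k*k structuring element is 1.
    pad = k // 2
    p = np.pad(img, pad)
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            out[y, x] = p[y:y + k, x:x + k].max()
    return out

def erode(img, k=3):
    # Output pixel is 1 only if ALL pixels under the element are 1
    # (border pixels see the zero padding and therefore erode away).
    pad = k // 2
    p = np.pad(img, pad)
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            out[y, x] = p[y:y + k, x:x + k].min()
    return out

def close_op(img, k=3):
    # Closing = dilation followed by erosion: bridges narrow breaks.
    return erode(dilate(img, k), k)
```

A one-pixel break in a contour line is bridged by the dilation and survives the subsequent erosion, which is exactly the effect used to make discontinuous contour lines continuous.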
In some optional implementations of the present embodiment, after the closing operation has been performed on the edge feature image, the contour lines in the target image after the closing operation may be determined. The contour line of interest is generally the largest contour line, i.e., the contour line containing the most pixels in the image, as shown in Fig. 4D.
In some optional implementations of the present embodiment, after the contour line following the closing operation has been determined, the target image containing that contour line may be filled using an image filling algorithm, where the image filling algorithm may also be called a planar region filling algorithm: region filling takes the boundary of a region and modifies all pixel units within that boundary to a specified color. As an example, the flood fill algorithm, also called the seed filling algorithm, may be used: given a point in a connected region, the flood fill algorithm takes that point as a starting point, finds all remaining points of the connected region, and fills them with a designated color. In this implementation, the pixel at the lower-left corner of the target image containing the largest contour line may be taken as the starting point to fill the image; the color of the filled part is marked as 0 and the color of the unfilled part is marked as 255, as shown in Fig. 4E, in which the white area is the filled part. Afterwards, an edge detection algorithm (e.g., the Canny edge detection algorithm) may be used to obtain the contour of the human body image in the filled target image, as shown in Fig. 4F.
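A minimal sketch of the seed-fill step above, assuming a binary contour image and the lower-left corner as the starting point (the function name and 4-connectivity are illustrative choices; the patent does not fix them):

```python
import numpy as np
from collections import deque

def flood_mark(img, seed):
    # Seed fill: from `seed`, visit every 4-connected pixel holding the
    # seed's value; mark the filled part 0 and the unfilled part 255,
    # matching the marking convention described above.
    h, w = img.shape
    target_val = img[seed]
    out = np.full((h, w), 255, dtype=np.uint8)
    q = deque([seed])
    seen = {seed}
    while q:
        y, x = q.popleft()
        out[y, x] = 0
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in seen \
                    and img[ny, nx] == target_val:
                seen.add((ny, nx))
                q.append((ny, nx))
    return out
```

Everything outside the closed contour is reached from the corner seed and filled; the region enclosed by the contour, i.e. the human body, stays unfilled.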
In some optional implementations of the present embodiment, the above-mentioned electronic device may first determine the point on the contour of the human body image that is nearest to the top edge of the target image; afterwards, it determines the midpoint of the line segment connecting this nearest point and a point on the top edge of the target image; finally, it uses the set of points on the line segment that passes through this midpoint and is parallel to the top edge of the target image as the background point set of the target image, as shown in Fig. 4G, in which the point set on the white straight line is the background point set of the target image.
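The midpoint construction above reduces to a few lines. A sketch under stated assumptions: points are (row, col) with the top edge at row 0, and the horizontal segment is taken to span the full image width (the patent does not specify the segment's extent):

```python
def background_points(nearest, top, width):
    """Background point set from the contour: `nearest` is the contour
    point (row, col) closest to the top edge, `top` a point (0, col) on
    the top edge. Returns the points on the horizontal line through the
    midpoint of their connecting segment."""
    mid_row = (nearest[0] + top[0]) // 2
    return [(mid_row, x) for x in range(width)]
```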
Step 302, obtain at least one pixel on the face in the target image, and use the at least one pixel on the face as the foreground point set of the target image.

In the present embodiment, the above-mentioned electronic device may first identify the position of the face in the target image using an open-source face detection system; afterwards, it obtains at least one pixel on the face; then, it uses the obtained at least one pixel as the foreground point set of the target image. The at least one pixel may be any pixel on the face, for example, a pixel on the mouth, a pixel on the nose, or a pixel on the forehead.

In some optional implementations of the present embodiment, the electronic device may obtain a pixel on each of the two eyes of the face, and use the set of pixels on the line connecting those two pixels as the foreground point set of the target image, as shown in Fig. 4H, in which the point set on the white straight line is the foreground point set of the target image.
Step 303, segment the target image based on the background point set and the foreground point set of the target image containing the human body image, generate a first mask image containing a foreground area and a background area, and determine the pixel value of each pixel in the first mask image.

In the present embodiment, after the background point set and the foreground point set of the target image have been obtained in step 301 and step 302 respectively, the target image may be segmented based on the background point set and the foreground point set using an image segmentation algorithm, for example, the lazy snapping algorithm; afterwards, a first mask image containing a foreground area and a background area can be generated, as shown in Fig. 4I, in which the white area is the foreground area of the first mask image and the black area is the background area of the first mask image. In image processing, whether a pixel of an image is processed depends on whether its mask bit in the mask image of that image is a shielding bit; if it is, the pixel is not processed. After the first mask image has been generated, the electronic device may determine the pixel value of each pixel in the first mask image according to whether the pixel belongs to the foreground area or the background area.
Step 304, import the pixel value of each pixel of the target image into a pre-generated skin likelihood value detection model for matching to obtain the likelihood value that each pixel belongs to the skin area, and, based on the likelihood values, determine the pixels that belong to the skin area.

In the present embodiment, the above-mentioned electronic device may obtain the skin likelihood value detection model from another electronic device, or may pre-establish the skin likelihood value detection model. The skin likelihood value detection model is used to characterize the correspondence between the pixel value of a pixel and the likelihood value that the pixel belongs to the skin area. The steps of establishing the skin likelihood value detection model may include: obtaining a preset number of images containing human skin together with the human skin areas marked out in those images, obtaining the pixel value corresponding to each pixel in the images, and, using big data analysis algorithms, machine learning algorithms and the like, training the skin likelihood value detection model based on the pixel values of the pixels and the likelihood values that the pixels belong to the skin area.

In the present embodiment, the electronic device may import the pixel value of each pixel of the target image into the obtained or pre-generated skin likelihood value detection model for matching; afterwards, the likelihood value that the pixel belongs to the skin area can be output; then, the electronic device may mark the pixels whose likelihood value is greater than 0 as pixels belonging to the skin area, and mark the pixels whose likelihood value is less than or equal to 0 as pixels belonging to the non-skin area, as shown in Fig. 4J, in which the white area consists of the marked skin-area pixels and the black area consists of the marked non-skin-area pixels.
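The per-pixel labeling rule above can be sketched as follows. The `skin_likelihood` function here is a hypothetical stand-in for the trained detection model (a toy heuristic, not the patent's model); only the thresholding at 0 follows the text:

```python
import numpy as np

def skin_likelihood(pixel):
    # Hypothetical stand-in for the pre-trained skin likelihood value
    # detection model; the real model maps a pixel value to a learned
    # likelihood. This toy rule just scores reddish pixels positively.
    r, g, b = (float(c) for c in pixel)
    return r - g

def mark_skin(image):
    # Likelihood > 0 -> skin-area pixel (marked 255, white in Fig. 4J);
    # likelihood <= 0 -> non-skin pixel (marked 0, black in Fig. 4J).
    h, w, _ = image.shape
    mask = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            if skin_likelihood(image[y, x]) > 0:
                mask[y, x] = 255
    return mask
```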
Step 305, perform super-pixel segmentation on the target image to generate a target image containing super-pixels.

In the present embodiment, the above-mentioned electronic device may perform super-pixel segmentation on the target image and generate a target image containing super-pixels, as shown in Fig. 4K; in Fig. 4K the number of super-pixels is set to 255, so the target image is divided into 255 blocks. A super-pixel is an irregular block of pixels with a certain visual significance, composed of adjacent pixels with similar texture, color, brightness and other characteristics. As an example, the SLIC method may be used for super-pixel segmentation: SLIC converts the initial color space of the image into 5-dimensional feature vectors consisting of the CIELAB color space and the XY coordinates, then constructs a distance metric over the 5-dimensional feature vectors and performs local clustering on the pixels of the image.
Step 306, determine the pixel value of each pixel in each super-pixel based on the number of pixels belonging to the skin area contained in the super-pixel, so as to determine the pixel value of each pixel in the pre-established second mask image of the target image.

In the present embodiment, after the pixels belonging to the skin area have been determined in step 304 and the target image containing super-pixels has been generated in step 305, a second mask image of the same size as the target image may be created; the second mask image contains the super-pixels obtained by the segmentation. For each super-pixel in the second mask image, the pixel value of each pixel in the super-pixel may be determined based on the number of pixels belonging to the skin area contained in the super-pixel, thereby determining the pixel value of each pixel in the second mask image.
In some optional implementations of the present embodiment, for each super-pixel in the second mask image, the above-mentioned electronic device may first obtain the total number of pixels in the super-pixel and the number of pixels belonging to the skin area contained in the super-pixel; afterwards, the ratio of the number of pixels belonging to the skin area to the total number may be determined, and whether this ratio is greater than a preset ratio threshold may be determined; if the ratio is greater than the preset ratio threshold, the pixel values of all pixels in the super-pixel may be set to the pixel value of skin-area pixels, as shown in Fig. 4L, in which the white area is the skin area of the second mask image and the black area is the non-skin area of the second mask image.
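The per-super-pixel voting rule above can be sketched given a label map from the super-pixel segmentation and the per-pixel skin mask of step 304. The 0.5 threshold is an assumed value for the preset ratio threshold:

```python
import numpy as np

def skin_superpixels(labels, skin_mask, thresh=0.5):
    # For each super-pixel (one label value in `labels`), compute the
    # ratio of its pixels marked 255 in `skin_mask`; if the ratio exceeds
    # the preset threshold, set the whole super-pixel to 255 in the
    # second mask image.
    out = np.zeros_like(skin_mask)
    for lab in np.unique(labels):
        region = labels == lab
        ratio = (skin_mask[region] == 255).mean()
        if ratio > thresh:
            out[region] = 255
    return out
```

This turns the noisy per-pixel skin decision into a region-level decision, which is what makes the second mask image smoother than Fig. 4J.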
Step 307, generate the human body image to be output based on the pixel value of each pixel in the first mask image and the pixel value of each pixel in the second mask image.

In the present embodiment, after the first mask image containing the foreground area has been generated in step 303, the above-mentioned electronic device may set the pixel value of the pixels of the foreground area in the first mask image to 255, and set the pixel value of the pixels of the background area in the first mask image to 0; after it has been determined in step 306 whether the pixels in each super-pixel of the second mask image take the pixel value of skin-area pixels, the pixel value of the pixels of the skin area in the second mask image may be set to 255, and the pixel value of the pixels of the non-skin area in the second mask image may be set to 0. The electronic device may obtain the pixels whose value is 255 in the first mask image and the pixels whose value is 255 in the second mask image to generate the pixel set of the human body image; afterwards, the region where the pixels in the pixel set are located may be extracted from the target image and taken as the human body image to be output, as shown in Fig. 4M, in which the non-white region of the figure is the human body image to be output.
Step 308, construct a rectangle with a preset side length centered on each point on the contour of the human body image to be output.

In the present embodiment, since the edge of the human body image to be output generated in step 307 is rough and may contain jagged edges, as shown in Fig. 4N, each pixel on the contour of the human body image to be output may be smoothed. The above-mentioned electronic device may construct a rectangle with a preset side length (for example, a structuring element of size 5*5) centered on each point on the contour of the human body image to be output.
Step 309, smooth each pixel in each rectangle using Gaussian filtering to obtain the smoothed human body image.

In the present embodiment, each pixel in each constructed rectangle may be smoothed using a Gaussian filter to obtain the smoothed human body image, as shown in Fig. 4O. Gaussian filtering is essentially a signal filter whose purpose is the smoothing of a signal. The concrete operation of Gaussian filtering is: scan each pixel of the image with a template (or convolution kernel, or mask), and replace the value of the pixel at the center of the template with the weighted average gray value of the pixels in the neighborhood determined by the template.
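The template operation described above can be sketched as follows: build a normalized Gaussian template and apply it at one pixel. A hedged illustration (the 5*5 size and sigma are assumed values, matching the 5*5 rectangle of step 308; boundary handling is omitted):

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    # Normalized size*size Gaussian template; the weights sum to 1, so
    # convolving replaces each pixel by a weighted neighborhood average.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def smooth_at(image, y, x, kernel):
    # Apply the template at one pixel, e.g. a contour point inside one of
    # the rectangles constructed in step 308.
    r = kernel.shape[0] // 2
    patch = image[y - r:y + r + 1, x - r:x + r + 1]
    return float((patch * kernel).sum())
```

Because only the rectangles along the contour are filtered, the interior of the human body image is left untouched while the jagged edge is softened.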
Step 310, output the smoothed human body image.

In the present embodiment, after each pixel has been smoothed in step 309, the above-mentioned electronic device may output the smoothed human body image, the output image being as shown in Fig. 4P. The electronic device may output the smoothed human body image on a local display screen, or may send the smoothed human body image to a terminal device so that the terminal device outputs the image.
As can be seen from Fig. 3, compared with the embodiment corresponding to Fig. 2, the flow 300 of the image output method in the present embodiment highlights the steps of acquiring the background point set and the foreground point set of the target image, and the step of smoothing the edge of the human body image to be output. As a result, the scheme described in the present embodiment can obtain the background point set and the foreground point set automatically from the target image without manual marking, and smooths the edge of the image before the human body image is output; the scheme described in the present embodiment is thus simple and efficient, and makes the edge of the output image smoother.
With further reference to Fig. 5, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an image output device; the device embodiment corresponds to the method embodiment shown in Fig. 2, and the device can be specifically applied to various electronic devices.
As shown in Fig. 5, the image output device 500 of the present embodiment includes: a first determination unit 501, a second determination unit 502, a generation unit 503, a third determination unit 504 and an output unit 505. The first determination unit 501 is configured to segment the target image based on the background point set and the foreground point set of the target image containing the human body image, generate a first mask image containing a foreground area and a background area, and determine the pixel value of each pixel in the first mask image, where a background point is a pixel belonging to the background of the target image and a foreground point is a pixel belonging to the foreground of the target image. The second determination unit 502 is configured to import the pixel value of each pixel of the target image into a pre-generated skin likelihood value detection model for matching to obtain the likelihood value that each pixel belongs to the skin area, and, based on the likelihood values, determine the pixels belonging to the skin area, where the skin likelihood value detection model is used to characterize the correspondence between pixel values and likelihood values. The generation unit 503 is configured to perform super-pixel segmentation on the target image to generate a target image containing super-pixels. The third determination unit 504 is configured to determine the pixel value of each pixel in each super-pixel based on the number of pixels belonging to the skin area contained in the super-pixel, so as to determine the pixel value of each pixel in the pre-established second mask image of the target image. The output unit 505 is configured to output the human body image in the target image based on the pixel value of each pixel in the first mask image and the pixel value of each pixel in the second mask image.
In the present embodiment, the first determination unit 501 of the image output device 500 may first obtain the background point set and the foreground point set of the target image containing the human body image, where the human body image contained in the target image may be a face image, an upper-body image containing a skin area, or a whole-body image containing a skin area. Every image can be divided into a foreground and a background; a background point in the background point set is a pixel belonging to the background of the target image, and a foreground point in the foreground point set is a pixel belonging to the foreground of the target image. Afterwards, the target image may be segmented based on the background point set and the foreground point set using an image segmentation algorithm. After the target image has been segmented, a first mask image containing a foreground area and a background area can be generated. After the first mask image has been generated, the first determination unit 501 may determine the pixel value of each pixel in the first mask image according to whether the pixel belongs to the foreground area or the background area.
In the present embodiment, the second determination unit 502 may first obtain the pixel value of each pixel of the target image and then import the pixel values into the obtained or pre-generated skin likelihood value detection model for matching; afterwards, the likelihood value that each pixel belongs to the skin area can be output; then, the second determination unit 502 may mark the pixels whose likelihood value is greater than 0 as pixels belonging to the skin area, and mark the pixels whose likelihood value is less than or equal to 0 as pixels belonging to the non-skin area.
In the present embodiment, the generation unit 503 may perform super-pixel segmentation on the target image and generate a target image containing super-pixels. A super-pixel is an irregular block of pixels with a certain visual significance, composed of adjacent pixels with similar texture, color, brightness and other characteristics.
In the present embodiment, after the second determination unit 502 has determined the pixels belonging to the skin area and the generation unit 503 has generated the target image containing super-pixels, the third determination unit 504 may create a second mask image of the same size as the target image; the second mask image contains the super-pixels obtained by the segmentation. For each super-pixel in the second mask image, the pixel value of each pixel in the super-pixel may be determined based on the number of pixels belonging to the skin area contained in the super-pixel, thereby determining the pixel value of each pixel in the second mask image.
In the present embodiment, after the first determination unit 501 has determined the pixel value of each pixel in the first mask image and the third determination unit 504 has determined the pixel value of each pixel in the second mask image, the output unit 505 may determine the foreground area in the first mask image and the skin area in the second mask image, then obtain the pixels of the foreground area and the pixels of the skin area to generate a pixel set, extract from the target image the region where the pixels in the pixel set are located, take that region as the human body image and output the human body image.
In some optional implementations of the present embodiment, the image output device 500 may further include a fourth determination unit 506 (not shown) and a fifth determination unit 507 (not shown). The fourth determination unit 506 may first acquire a target image containing a human body image, either input by a user through a terminal or obtained locally; afterwards, it determines the contour or edge of the human body image in the target image; then, it obtains at least one pixel between the contour and the top edge of the target image, and uses the obtained at least one pixel as the background point set of the target image. The fifth determination unit 507 may first identify the position of the face in the target image using an open-source face detection system, afterwards obtain at least one pixel on the face, and then use the obtained at least one pixel as the foreground point set of the target image.
In some optional implementations of the present embodiment, the fourth determination unit 506 may include a detection module 5061 (not shown), a closing operation module 5062 (not shown), a determining module 5063 (not shown) and a filling module 5064 (not shown). The detection module 5061 may first detect the edges of the human body image in the target image, determine the contour lines of the human body image, and generate an edge feature image containing the contour lines. After the edge feature image containing the contour lines has been generated, the closing operation module 5062 may perform a closing operation on the edge feature image to convert discontinuous contour lines in the edge feature image into continuous contour lines. The closing operation is the process of performing a dilation operation on the image followed by an erosion operation; it makes contour lines smoother, typically bridges narrow interruptions and long thin gaps, eliminates small holes, and fills breaks in the contour lines. After the closing operation has been performed on the edge feature image, the determining module 5063 may determine the contour lines in the target image after the closing operation; the contour line of interest is generally the largest contour line, i.e., the contour line containing the most pixels in the image. After the contour line following the closing operation has been determined, the filling module 5064 may fill the target image containing that contour line using an image filling algorithm, where the image filling algorithm may also be called a planar region filling algorithm: region filling takes the boundary of a region and modifies all pixel units within that boundary to a specified color. Afterwards, an edge detection algorithm may be used to obtain the contour of the human body image in the filled target image.
In some optional implementations of the present embodiment, the fourth determination unit 506 may also determine the point on the contour of the human body image that is nearest to the top edge of the target image; afterwards, it determines the midpoint of the line segment connecting this nearest point and a point on the top edge of the target image; finally, it uses the set of points on the line segment that passes through this midpoint and is parallel to the top edge of the target image as the background point set of the target image.
In some optional implementations of the present embodiment, the fifth determination unit 507 may obtain a pixel on each of the two eyes of the face, and use the set of pixels on the line connecting those two pixels as the foreground point set of the target image.
In some optional implementations of the present embodiment, the output unit 505 may further include: a first generation module 5051 (not shown), a construction module 5052 (not shown), a smoothing module 5053 (not shown) and a first output module 5054 (not shown). The first generation module 5051 may extract from the target image the region where the pixels in the pixel set are located, and take that region as the human body image to be output. The construction module 5052 may smooth each pixel on the contour of the human body image to be output by constructing a rectangle with a preset side length centered on each point on the contour of the human body image to be output. The smoothing module 5053 may smooth each pixel in each constructed rectangle using Gaussian filtering to obtain the smoothed human body image. After the smoothing module 5053 has smoothed each pixel, the first output module 5054 may output the smoothed human body image.
In some optional implementations of the present embodiment, the third determination unit 504 may include an acquisition module 5041 (not shown), a determining module 5042 (not shown) and a setup module 5043 (not shown). For each super-pixel in the second mask image, the acquisition module 5041 may first obtain the total number of pixels in the super-pixel and the number of pixels belonging to the skin area contained in the super-pixel; afterwards, the determining module 5042 may determine the ratio of the number of pixels belonging to the skin area to the total number, and determine whether this ratio is greater than a preset ratio threshold; if the ratio is greater than the preset ratio threshold, the setup module 5043 may set the pixel values of all pixels in the super-pixel to the pixel value of skin-area pixels.
In some optional implementations of the present embodiment, the output unit 505 may further include: a second generation module 5055 (not shown), an extraction module 5056 (not shown) and a second output module 5057 (not shown). After the first determination unit 501 has generated the first mask image containing the foreground area, the first determination unit 501 may set the pixel value of the pixels of the foreground area in the first mask image to 255; after the third determination unit 504 has determined whether the pixels in each super-pixel of the second mask image take the pixel value of skin-area pixels, the pixel value of the pixels of the skin area in the second mask image may be set to 255. The second generation module 5055 may obtain the pixels whose value is 255 in the first mask image and the pixels whose value is 255 in the second mask image to generate the pixel set of the human body image; afterwards, the extraction module 5056 may extract from the target image the region where the pixels in the pixel set are located, and the second output module 5057 may take that region as the human body image and output the human body image.
Referring now to Fig. 6, it shows a schematic structural diagram of a computer system 600 suitable for implementing the server of the embodiments of the present application. The server shown in Fig. 6 is only an example, and should not impose any limitation on the functionality or scope of use of the embodiments of the present application.
As shown in Fig. 6, the computer system 600 includes a central processing unit (CPU) 601, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage portion 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the system 600. The CPU 601, the ROM 602 and the RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, etc.; an output portion 607 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, etc.; a storage portion 608 including a hard disk, etc.; and a communication portion 609 including a network interface card such as a LAN card or a modem. The communication portion 609 performs communication processing via a network such as the Internet. A driver 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk or a semiconductor memory, is mounted on the driver 610 as needed, so that a computer program read therefrom is installed into the storage portion 608 as needed.
In particular, according to embodiments of the present disclosure, the process described above with reference to the flow chart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product, which comprises a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 609, and/or installed from the removable medium 611. When the computer program is executed by the central processing unit (CPU) 601, the above-mentioned functions defined in the method of the present application are performed. It should be noted that the computer-readable medium of the present application may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any appropriate combination of the above. In the present application, the computer-readable storage medium may be any tangible medium containing or storing a program, where the program may be used by, or in connection with, an instruction execution system, apparatus or device. In the present application, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any appropriate combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; such a medium can send, propagate or transmit a program for use by, or in connection with, an instruction execution system, apparatus or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to: wireless, wire, optical cable, RF, etc., or any appropriate combination of the above.
The flow charts and block diagrams in the accompanying drawings illustrate the architecture, functions and operations that may be implemented by the systems, methods and computer program products according to various embodiments of the present application. In this regard, each box in a flow chart or block diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the boxes may occur in an order different from that indicated in the drawings. For example, two boxes shown in succession may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each box in the block diagrams and/or flow charts, and combinations of boxes in the block diagrams and/or flow charts, may be implemented by a dedicated hardware-based system executing the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments of the present application may be implemented by means of software, or by means of hardware. The described units may also be provided in a processor, which may, for example, be described as: a processor comprising a first determination unit, a second determination unit, a generation unit, a third determination unit and an output unit. The names of these units do not in some cases constitute a limitation on the units themselves; for example, the generation unit may also be described as "a unit for performing super-pixel segmentation on a target image to generate a target image containing super-pixels".
As another aspect, the present application further provides a computer-readable medium, which may be included in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The computer-readable medium carries one or more programs, and when the one or more programs are executed by the apparatus, they cause the apparatus to: segment a target image containing a human body image based on a background point set and a foreground point set of the target image, generate a first mask image containing a foreground area and a background area, and determine the pixel value of each pixel in the first mask image, where a background point is a pixel belonging to the background of the target image and a foreground point is a pixel belonging to the foreground of the target image; import the pixel value of each pixel of the target image into a pre-generated skin likelihood value detection model for matching, to obtain the likelihood value that each pixel belongs to a skin area, and determine, based on the likelihood values, the pixels belonging to the skin area, where the skin likelihood value detection model is used to characterize the correspondence between pixel values and likelihood values; perform super-pixel segmentation on the target image to generate a target image containing super-pixels; determine, based on the quantity of pixels belonging to the skin area contained in each super-pixel, the pixel value of each pixel in the super-pixel, so as to determine the pixel value of each pixel in a pre-established second mask image of the target image; and output the human body image in the target image based on the pixel values of the pixels in the first mask image and the second mask image.
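The pipeline restated above can be mocked end to end on a toy image. Everything below is an illustrative stand-in, not the patented implementation: a lookup table replaces the pre-generated skin likelihood model, a fixed grid replaces real super-pixel segmentation (e.g. SLIC), and a simple brightness threshold replaces the graph-cut first mask.

```python
import numpy as np

def grid_superpixels(h, w, cell):
    """Stand-in for real super-pixel segmentation: square tiles."""
    rows, cols = np.arange(h) // cell, np.arange(w) // cell
    n_cols = (w + cell - 1) // cell
    return rows[:, None] * n_cols + cols[None, :]

def second_mask(labels, skin, thr=0.5):
    """255 for every pixel of a super-pixel whose skin ratio exceeds thr."""
    out = np.zeros(labels.shape, np.uint8)
    for sp in np.unique(labels):
        m = labels == sp
        if skin[m].mean() > thr:
            out[m] = 255
    return out

# Toy 4x4 grayscale "target image".
img = np.array([[10, 10, 200, 200],
                [10, 10, 200, 200],
                [90, 90, 210, 210],
                [90, 90, 210, 210]], np.uint8)

# Stand-in for the graph-cut first mask: bright pixels are "foreground".
mask1 = np.where(img > 80, 255, 0).astype(np.uint8)

# Stand-in for the skin likelihood model: a lookup table over pixel values.
lut = np.zeros(256)
lut[180:] = 0.9                      # very bright values count as "skin"
skin = lut[img] > 0.5                # per-pixel skin decision

labels = grid_superpixels(4, 4, 2)   # four 2x2 "super-pixels"
mask2 = second_mask(labels, skin)

body = (mask1 == 255) & (mask2 == 255)
out = np.zeros_like(img)
out[body] = img[body]                # the output human body image
print(out)
```

The 90-valued tile passes the foreground mask but fails the skin vote, so only the two bright tiles survive the intersection — illustrating why the method combines both masks rather than relying on either alone.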
The above description is only a description of the preferred embodiments of the present application and of the technical principles applied. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features; without departing from the above inventive concept, it should also cover other technical solutions formed by any combination of the above technical features or their equivalent features, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present application.
Claims (18)
1. An image output method, characterized in that the method comprises:
segmenting a target image containing a human body image based on a background point set and a foreground point set of the target image, generating a first mask image containing a foreground area and a background area, and determining a pixel value of each pixel in the first mask image, wherein a background point is a pixel belonging to a background of the target image, and a foreground point is a pixel belonging to a foreground of the target image;
importing the pixel value of each pixel of the target image into a pre-generated skin likelihood value detection model for matching, to obtain a likelihood value that each pixel belongs to a skin area, and determining, based on the likelihood values, pixels belonging to the skin area, wherein the skin likelihood value detection model is used to characterize a correspondence between pixel values and likelihood values;
performing super-pixel segmentation on the target image to generate a target image containing super-pixels;
determining, based on a quantity of pixels belonging to the skin area contained in each super-pixel, a pixel value of each pixel in the super-pixel, so as to determine a pixel value of each pixel in a pre-established second mask image of the target image; and
outputting the human body image in the target image based on the pixel values of the pixels in the first mask image and the second mask image.
2. The method according to claim 1, characterized in that before segmenting the target image based on the background point set and the foreground point set of the target image containing the human body image, the method further comprises:
determining a contour of the human body image in the target image, obtaining at least one pixel between the contour and a top edge of the target image, and using the at least one pixel as the background point set of the target image; and
obtaining at least one pixel on a human face in the target image, and using the at least one pixel on the human face as the foreground point set of the target image.
3. The method according to claim 2, characterized in that determining the contour of the human body image in the target image comprises:
detecting an edge of the human body image in the target image to generate an edge feature image containing a contour line of the human body image;
performing a closing operation on the edge feature image to convert discontinuous contour lines in the edge feature image into continuous contour lines;
determining the contour line of the target image after the closing operation; and
filling the target image containing the contour line after the closing operation using an image completion algorithm, and obtaining the contour of the human body image in the filled target image.
4. The method according to claim 2 or 3, characterized in that obtaining at least one pixel between the contour and the top edge of the target image, and using the at least one pixel as the background point set of the target image, comprises:
determining a midpoint of a line connecting a point on the contour nearest to the top edge of the target image and any point on the top edge, and using a point set passing through the midpoint and lying on a line segment parallel to the top edge as the background point set of the target image.
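A minimal sketch of the background-point construction in claim 4, assuming contour points are given as (row, column) pairs and the top edge is row 0 (both assumptions for illustration; the patent does not fix a coordinate convention):

```python
import numpy as np

def background_line(contour_pts, img_width, top_y=0):
    """Pick the contour point nearest the top edge, take the midpoint of
    the vertical segment joining it to the top edge, and return the row
    of points through that midpoint, parallel to the top edge."""
    ys = np.array([p[0] for p in contour_pts])
    i = int(np.argmin(ys - top_y))       # contour point nearest the top edge
    mid_y = (ys[i] + top_y) // 2         # midpoint of the connecting segment
    return [(int(mid_y), x) for x in range(img_width)]

# Contour points as (row, column); the point at row 4 is nearest the top.
pts = background_line([(4, 3), (6, 1), (5, 7)], img_width=5)
print(pts)
```

Every returned point lies on the same row, halfway between the contour and the top edge, which matches the claim's requirement that the background points sit on a segment parallel to the top edge.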
5. The method according to claim 2, characterized in that obtaining at least one pixel on the human face in the target image, and using the at least one pixel on the human face as the foreground point set of the target image, comprises:
obtaining two pixels on the human face in the target image, and using a point set on a line connecting the two pixels as the foreground point set of the target image.
6. The method according to claim 1 or 2, characterized in that outputting the human body image in the target image comprises:
generating a human body image to be output;
constructing a rectangle with a preset side length centered on each point on a contour of the human body image to be output;
smoothing each pixel in each rectangle using Gaussian filtering to obtain a smoothed human body image; and
outputting the smoothed human body image.
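The contour-local Gaussian smoothing of claim 6 can be sketched as follows; the kernel size, sigma, and the (row, column) contour format are illustrative assumptions, since the claim only fixes "a rectangle with a preset side length" and Gaussian filtering:

```python
import numpy as np

def smooth_along_contour(img, contour_pts, half=1, sigma=1.0):
    """Blur only the rectangles of preset side length (2*half+1)
    centered on contour points; the rest of the image is untouched."""
    ax = np.arange(-half, half + 1)
    g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    g /= g.sum()                                  # normalized Gaussian kernel
    padded = np.pad(img.astype(float), half, mode="edge")
    out = img.astype(float)                       # astype returns a copy
    for (cy, cx) in contour_pts:
        for dy in range(-half, half + 1):
            for dx in range(-half, half + 1):
                y, x = cy + dy, cx + dx
                if 0 <= y < img.shape[0] and 0 <= x < img.shape[1]:
                    patch = padded[y:y + 2 * half + 1, x:x + 2 * half + 1]
                    out[y, x] = (patch * g).sum()  # Gaussian-filtered value
    return out

img = np.zeros((6, 6))
img[:, 3:] = 10.0                    # sharp vertical edge standing in for a contour
out = smooth_along_contour(img, [(2, 3)])
print(out[2, 3])                     # softened value at the contour point
print(out[5, 5])                     # far pixel is left untouched
```

Restricting the blur to rectangles around contour points softens the cut-out boundary without degrading the interior of the extracted human body image.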
7. The method according to claim 1, characterized in that determining the pixel value of each pixel in the super-pixel based on the quantity of pixels belonging to the skin area contained in the super-pixel comprises:
obtaining a total quantity of pixels in the super-pixel;
determining a ratio of the quantity of pixels belonging to the skin area contained in the super-pixel to the total quantity, and determining whether the ratio exceeds a preset ratio threshold; and
if so, setting the pixel value of each pixel in the super-pixel to the pixel value of a skin-area pixel.
8. The method according to claim 1 or 7, characterized in that the pixel value of the pixels of the foreground area in the first mask image is a preset value, and the pixel value of the pixels of the skin area in the second mask image is the preset value; and
outputting the human body image in the target image based on the pixel values of the pixels in the first mask image and the second mask image comprises:
obtaining the pixels whose pixel value in the first mask image is the preset value and whose pixel value in the second mask image is the preset value, and generating a pixel set of the human body image;
extracting, from the target image, a region where the pixels in the pixel set are located, and using the region as the human body image; and
outputting the human body image.
9. An image output device, characterized in that the device comprises:
a first determination unit, configured to segment a target image containing a human body image based on a background point set and a foreground point set of the target image, generate a first mask image containing a foreground area and a background area, and determine a pixel value of each pixel in the first mask image, wherein a background point is a pixel belonging to a background of the target image, and a foreground point is a pixel belonging to a foreground of the target image;
a second determination unit, configured to import the pixel value of each pixel of the target image into a pre-generated skin likelihood value detection model for matching, to obtain a likelihood value that each pixel belongs to a skin area, and determine, based on the likelihood values, pixels belonging to the skin area, wherein the skin likelihood value detection model is used to characterize a correspondence between pixel values and likelihood values;
a generation unit, configured to perform super-pixel segmentation on the target image to generate a target image containing super-pixels;
a third determination unit, configured to determine, based on a quantity of pixels belonging to the skin area contained in each super-pixel, a pixel value of each pixel in the super-pixel, so as to determine a pixel value of each pixel in a pre-established second mask image of the target image; and
an output unit, configured to output the human body image in the target image based on the pixel values of the pixels in the first mask image and the second mask image.
10. The device according to claim 9, characterized in that the device further comprises:
a fourth determination unit, configured to determine a contour of the human body image in the target image, obtain at least one pixel between the contour and a top edge of the target image, and use the at least one pixel as the background point set of the target image; and
a fifth determination unit, configured to obtain at least one pixel on a human face in the target image, and use the at least one pixel on the human face as the foreground point set of the target image.
11. The device according to claim 10, characterized in that the fourth determination unit comprises:
a detection module, configured to detect an edge of the human body image in the target image and generate an edge feature image containing a contour line of the human body image;
a closing operation module, configured to perform a closing operation on the edge feature image to convert discontinuous contour lines in the edge feature image into continuous contour lines;
a determining module, configured to determine the contour line of the target image after the closing operation; and
a filling module, configured to fill the target image containing the contour line after the closing operation using an image completion algorithm, and obtain the contour of the human body image in the filled target image.
12. The device according to claim 10 or 11, characterized in that the fourth determination unit is further configured to:
determine a midpoint of a line connecting a point on the contour nearest to the top edge of the target image and any point on the top edge, and use a point set passing through the midpoint and lying on a line segment parallel to the top edge as the background point set of the target image.
13. The device according to claim 10, characterized in that the fifth determination unit is further configured to:
obtain two pixels on the human face in the target image, and use a point set on a line connecting the two pixels as the foreground point set of the target image.
14. The device according to claim 9 or 10, characterized in that the output unit comprises:
a first generation module, configured to generate a human body image to be output;
a construction module, configured to construct a rectangle with a preset side length centered on each point on a contour of the human body image to be output;
a smoothing module, configured to smooth each pixel in each rectangle using Gaussian filtering to obtain a smoothed human body image; and
a first output module, configured to output the smoothed human body image.
15. The device according to claim 9, characterized in that the third determination unit comprises:
an acquisition module, configured to obtain a total quantity of pixels in the super-pixel;
a determining module, configured to determine a ratio of the quantity of pixels belonging to the skin area contained in the super-pixel to the total quantity, and determine whether the ratio exceeds a preset ratio threshold; and
a setting module, configured to, if so, set the pixel value of each pixel in the super-pixel to the pixel value of a skin-area pixel.
16. The device according to claim 9 or 15, characterized in that the pixel value of the pixels of the foreground area in the first mask image is a preset value, and the pixel value of the pixels of the skin area in the second mask image is the preset value; and
the output unit further comprises:
a second generation module, configured to obtain the pixels whose pixel value in the first mask image is the preset value and whose pixel value in the second mask image is the preset value, and generate a pixel set of the human body image;
an extraction module, configured to extract, from the target image, a region where the pixels in the pixel set are located, and use the region as the human body image; and
a second output module, configured to output the human body image.
17. A server, comprising:
one or more processors; and
a storage device for storing one or more programs,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the method according to any one of claims 1-8.
18. A computer-readable storage medium on which a computer program is stored, characterized in that, when the program is executed by a processor, the method according to any one of claims 1-8 is implemented.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710217139.5A CN108694719B (en) | 2017-04-05 | 2017-04-05 | Image output method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710217139.5A CN108694719B (en) | 2017-04-05 | 2017-04-05 | Image output method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108694719A true CN108694719A (en) | 2018-10-23 |
CN108694719B CN108694719B (en) | 2020-11-03 |
Family
ID=63841957
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710217139.5A Active CN108694719B (en) | 2017-04-05 | 2017-04-05 | Image output method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108694719B (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109754379A (en) * | 2018-12-29 | 2019-05-14 | 北京金山安全软件有限公司 | Image processing method and device |
CN109934812A (en) * | 2019-03-08 | 2019-06-25 | 腾讯科技(深圳)有限公司 | Image processing method, device, server and storage medium |
CN110827371A (en) * | 2019-11-05 | 2020-02-21 | 厦门美图之家科技有限公司 | Certificate photo generation method and device, electronic equipment and storage medium |
CN111179276A (en) * | 2018-11-12 | 2020-05-19 | 北京京东尚科信息技术有限公司 | Image processing method and device |
CN111292335A (en) * | 2018-12-10 | 2020-06-16 | 北京地平线机器人技术研发有限公司 | Method and device for determining foreground mask feature map and electronic equipment |
CN112967301A (en) * | 2021-04-08 | 2021-06-15 | 北京华捷艾米科技有限公司 | Self-timer image matting method and device |
US20220130047A1 (en) * | 2019-02-07 | 2022-04-28 | Commonwealth Scientific And Industrial Research Organisation | Diagnostic imaging for diabetic retinopathy |
WO2023077650A1 (en) * | 2021-11-02 | 2023-05-11 | 北京鸿合爱学教育科技有限公司 | Three-color image generation method and related device |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130322754A1 (en) * | 2012-05-30 | 2013-12-05 | Samsung Techwin Co., Ltd. | Apparatus and method for extracting target, and recording medium storing program for performing the method |
CN105184787A (en) * | 2015-08-31 | 2015-12-23 | 广州市幸福网络技术有限公司 | Identification camera capable of automatically carrying out portrait cutout and method thereof |
JP2016095849A (en) * | 2014-11-12 | 2016-05-26 | 株式会社リコー | Method and device for dividing foreground image, program, and recording medium |
CN105631455A (en) * | 2014-10-27 | 2016-06-01 | 阿里巴巴集团控股有限公司 | Image main body extraction method and system |
CN106529432A (en) * | 2016-11-01 | 2017-03-22 | 山东大学 | Hand area segmentation method deeply integrating significance detection and prior knowledge |
- 2017-04-05 CN CN201710217139.5A patent/CN108694719B/en active Active
Non-Patent Citations (1)
Title |
---|
WANG JINTING ET AL.: "Luminance-adaptive skin color detection based on YCbCr space", Computer Systems & Applications * |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111179276B (en) * | 2018-11-12 | 2024-02-06 | 北京京东尚科信息技术有限公司 | Image processing method and device |
CN111179276A (en) * | 2018-11-12 | 2020-05-19 | 北京京东尚科信息技术有限公司 | Image processing method and device |
CN111292335B (en) * | 2018-12-10 | 2023-06-13 | 北京地平线机器人技术研发有限公司 | Method and device for determining foreground mask feature map and electronic equipment |
CN111292335A (en) * | 2018-12-10 | 2020-06-16 | 北京地平线机器人技术研发有限公司 | Method and device for determining foreground mask feature map and electronic equipment |
CN109754379A (en) * | 2018-12-29 | 2019-05-14 | 北京金山安全软件有限公司 | Image processing method and device |
US20220130047A1 (en) * | 2019-02-07 | 2022-04-28 | Commonwealth Scientific And Industrial Research Organisation | Diagnostic imaging for diabetic retinopathy |
CN109934812B (en) * | 2019-03-08 | 2022-12-09 | 腾讯科技(深圳)有限公司 | Image processing method, image processing apparatus, server, and storage medium |
US11715203B2 (en) | 2019-03-08 | 2023-08-01 | Tencent Technology (Shenzhen) Company Limited | Image processing method and apparatus, server, and storage medium |
CN109934812A (en) * | 2019-03-08 | 2019-06-25 | 腾讯科技(深圳)有限公司 | Image processing method, device, server and storage medium |
CN110827371B (en) * | 2019-11-05 | 2023-04-28 | 厦门美图之家科技有限公司 | Certificate generation method and device, electronic equipment and storage medium |
CN110827371A (en) * | 2019-11-05 | 2020-02-21 | 厦门美图之家科技有限公司 | Certificate photo generation method and device, electronic equipment and storage medium |
CN112967301A (en) * | 2021-04-08 | 2021-06-15 | 北京华捷艾米科技有限公司 | Self-timer image matting method and device |
WO2023077650A1 (en) * | 2021-11-02 | 2023-05-11 | 北京鸿合爱学教育科技有限公司 | Three-color image generation method and related device |
Also Published As
Publication number | Publication date |
---|---|
CN108694719B (en) | 2020-11-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108694719A (en) | image output method and device | |
Fang et al. | Bottom-up saliency detection model based on human visual sensitivity and amplitude spectrum | |
CN110930297B (en) | Style migration method and device for face image, electronic equipment and storage medium | |
US20180204052A1 (en) | A method and apparatus for human face image processing | |
CN110503703A (en) | Method and apparatus for generating image | |
CN108961369A (en) | The method and apparatus for generating 3D animation | |
CN108765278A (en) | A kind of image processing method, mobile terminal and computer readable storage medium | |
CN109952594A (en) | Image processing method, device, terminal and storage medium | |
CN108388878A (en) | The method and apparatus of face for identification | |
CN105354248A (en) | Gray based distributed image bottom-layer feature identification method and system | |
CN109409994A (en) | The methods, devices and systems of analog subscriber garments worn ornaments | |
KR100896643B1 (en) | Method and system for modeling face in three dimension by means of aam, and apparatus applied to the same | |
CN106682632A (en) | Method and device for processing face images | |
CN108846792A (en) | Image processing method, device, electronic equipment and computer-readable medium | |
CN109840881A (en) | A kind of 3D special efficacy image generating method, device and equipment | |
CN110082135A (en) | Equipment fault recognition methods, device and terminal device | |
CN108509892A (en) | Method and apparatus for generating near-infrared image | |
CN109003224A (en) | Strain image generation method and device based on face | |
CN108198130A (en) | Image processing method, device, storage medium and electronic equipment | |
CN110472460A (en) | Face image processing process and device | |
CN110349135A (en) | Object detection method and device | |
CN109492601A (en) | Face comparison method and device, computer-readable medium and electronic equipment | |
KR20230097157A (en) | Method and system for personalized 3D head model transformation | |
CN109816694A (en) | Method for tracking target, device and electronic equipment | |
CN108182457A (en) | For generating the method and apparatus of information |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||