CN108960167A - Hair style recognition method, apparatus, computer-readable storage medium, and computer device - Google Patents
Hair style recognition method, apparatus, computer-readable storage medium, and computer device
- Publication number
- CN108960167A CN108960167A CN201810758353.6A CN201810758353A CN108960167A CN 108960167 A CN108960167 A CN 108960167A CN 201810758353 A CN201810758353 A CN 201810758353A CN 108960167 A CN108960167 A CN 108960167A
- Authority
- CN
- China
- Prior art keywords
- hair
- hair style
- image
- images
- recognized
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20112—Image segmentation details
- G06T2207/20132—Image cropping
Abstract
The present application relates to a hair style recognition method and apparatus, a computer-readable storage medium, and a computer device. The method includes: obtaining an image to be recognized; segmenting a hair region from the image to be recognized to obtain a hair style image comprising the pixels in the hair region; extracting a shared hair style feature from the hair style image, the shared hair style feature being a feature shared by a plurality of different recognition tasks; performing the plurality of different recognition tasks according to the shared hair style feature; and outputting a plurality of hair style attribute categories respectively obtained by performing the different recognition tasks. The solution provided by the present application can improve hair style recognition efficiency.
Description
Technical field
The present application relates to the field of computer technology, and in particular to a hair style recognition method, apparatus, computer-readable storage medium, and computer device.
Background art
With the rapid development of computer technology, computer technology has brought many conveniences to people's lives and greatly improved the quality of life. In daily life there are many scenarios in which a hair style needs to be recognized, for example when restyling hair at a salon, or when analyzing the characteristics of a person in monitoring scenarios such as intelligent security.
Conventional methods for recognizing a person's hair style usually rely on a preset image library containing images corresponding to a variety of hair styles. The image to be recognized is compared one by one with the images in the preset image library to obtain a similarity with each image, and the hair style is then determined according to the similarity. However, because the image to be recognized must be compared one by one with all images in the preset image library, conventional hair style recognition methods suffer from low recognition efficiency.
Summary of the invention
In view of this, it is necessary to provide a hair style recognition method, apparatus, computer-readable storage medium, and computer device that address the technical problem of low recognition efficiency in conventional hair style recognition methods.
A hair style recognition method includes:
obtaining an image to be recognized;
segmenting a hair region from the image to be recognized to obtain a hair style image comprising the pixels in the hair region;
extracting a shared hair style feature from the hair style image, the shared hair style feature being a feature shared by a plurality of different recognition tasks;
performing the plurality of different recognition tasks according to the shared hair style feature; and
outputting a plurality of hair style attribute categories respectively obtained by performing the different recognition tasks.
A hair style recognition apparatus includes:
an obtaining module, configured to obtain an image to be recognized;
a segmentation module, configured to segment a hair region from the image to be recognized to obtain a hair style image comprising the pixels in the hair region;
an extraction module, configured to extract a shared hair style feature from the hair style image, the shared hair style feature being a feature shared by a plurality of different recognition tasks;
an execution module, configured to perform the plurality of different recognition tasks according to the shared hair style feature; and
an output module, configured to output a plurality of hair style attribute categories respectively obtained by performing the different recognition tasks.
A computer-readable storage medium stores a computer program that, when executed by a processor, causes the processor to perform the steps of the hair style recognition method.
A computer device includes a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of the hair style recognition method.
In the above hair style recognition method, apparatus, computer-readable storage medium, and computer device, a hair region is segmented from the image to be recognized to obtain a hair style image comprising the pixels in the hair region. This effectively filters out interference from background information, allows subsequent processing to focus on the content of the hair region, and improves both the efficiency and the accuracy of hair style recognition. The shared hair style feature in the hair style image is then extracted, the plurality of different recognition tasks are performed according to the shared hair style feature, and the plurality of hair style attribute categories respectively obtained by performing the different recognition tasks are output. By performing the plurality of different recognition tasks according to one shared hair style feature, the multiple attribute categories of the hair in the hair style image can be recognized directly and simultaneously, without comparing the image to be recognized one by one with all images in a preset image library, which greatly improves hair style recognition efficiency.
Brief description of the drawings
Fig. 1 is a diagram of an application environment of a hair style recognition method in one embodiment;
Fig. 2 is a schematic flowchart of a hair style recognition method in one embodiment;
Fig. 3 is a schematic diagram of the network structure of a multitask convolutional neural network in one embodiment;
Fig. 4 is a schematic flowchart of the step of obtaining an image to be recognized in one embodiment;
Fig. 5 is a schematic flowchart of the step of cropping an image to be recognized from an input image according to a detected face image in one embodiment;
Fig. 6 is a schematic flowchart of the step of segmenting a hair region from an image to be recognized to obtain a hair style image comprising the pixels in the hair region in one embodiment;
Fig. 7 is a schematic comparison diagram of segmenting a hair region from an image to be recognized to obtain a hair style image comprising the pixels in the hair region in one embodiment;
Fig. 8 is a schematic diagram of a crowd-monitoring interface in one embodiment;
Fig. 9 is a schematic flowchart of the training steps of a multitask convolutional neural network in one embodiment;
Fig. 10 is a schematic flowchart of a hair style recognition method in another embodiment;
Fig. 11 is a structural block diagram of a hair style recognition apparatus in one embodiment;
Fig. 12 is a structural block diagram of a hair style recognition apparatus in another embodiment;
Fig. 13 is a structural block diagram of a hair style recognition apparatus in yet another embodiment;
Fig. 14 is a structural block diagram of a computer device in one embodiment;
Fig. 15 is a structural block diagram of a computer device in another embodiment.
Specific embodiment
To make the objectives, technical solutions, and advantages of the present application clearer, the application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are intended only to explain the application, not to limit it.
Fig. 1 is a diagram of an application environment of a hair style recognition method in one embodiment. Referring to Fig. 1, the hair style recognition method is applied to a hair style recognition system. The hair style recognition system includes a terminal 110 and a server 120, connected through a network. The terminal 110 may specifically be a desktop terminal or a mobile terminal, and the mobile terminal may specifically be at least one of a mobile phone, a tablet computer, a notebook computer, a monitoring device, and the like. The server 120 may be implemented as an independent server or as a server cluster composed of multiple servers.
As shown in Fig. 2, in one embodiment a hair style recognition method is provided. This embodiment is described mainly using the example of the method being applied to the terminal 110 or the server 120 in Fig. 1. Referring to Fig. 2, the hair style recognition method specifically includes the following steps:
S202: obtain an image to be recognized.
Specifically, the terminal may capture an image within the current field of view of its camera to obtain the image to be recognized. The terminal may save the captured image to a local storage device so that the terminal itself performs the hair style recognition method, or the terminal may send the captured image to another terminal or a server so that the other terminal or the server obtains the image to be recognized. The field of view of the camera may change with the posture and position of the terminal.
S204: segment a hair region from the image to be recognized to obtain a hair style image comprising the pixels in the hair region.
Here, a hair style is the visually observable form of the hair, such as hair color, hair length, and hair shape. A hair style image is an image comprising the pixels in the hair region; in a digital image, these pixels together represent the content of the hair.
Specifically, the image to be recognized includes a hair region, and the computer device may segment the hair region from the image to be recognized to obtain the hair style image comprising the pixels in the hair region.
In one embodiment, the computer device may use a convolutional neural network to classify each pixel of the image to be recognized, determining the category of each pixel, such as a hair category, a face category, a skin category, and a background category. The computer device may then obtain the pixels belonging to the hair category. It may extract all pixels belonging to the hair category to segment the hair region from the image to be recognized and obtain the hair style image; alternatively, it may mask the pixels in the image to be recognized that do not belong to the hair category, thereby segmenting the hair region and obtaining the hair style image.
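The masking variant described above can be illustrated with straightforward per-pixel filtering. The following is a minimal sketch, not the patent's implementation: the category labels and the representation of images as nested lists of grayscale values are illustrative assumptions.

```python
# Hypothetical category labels for the per-pixel classification.
HAIR, FACE, SKIN, BACKGROUND = 0, 1, 2, 3

def mask_non_hair(image, class_map, fill=0):
    """Return a copy of `image` in which every pixel whose class-map
    entry is not HAIR is replaced by `fill`, leaving only hair pixels."""
    return [
        [pixel if cls == HAIR else fill
         for pixel, cls in zip(img_row, cls_row)]
        for img_row, cls_row in zip(image, class_map)
    ]

# A 3x3 grayscale image and the class map a segmentation network might emit.
image = [[10, 20, 30],
         [40, 50, 60],
         [70, 80, 90]]
class_map = [[HAIR, HAIR, BACKGROUND],
             [FACE, HAIR, BACKGROUND],
             [SKIN, SKIN, BACKGROUND]]

hair_style_image = mask_non_hair(image, class_map)
# -> [[10, 20, 0], [0, 50, 0], [0, 0, 0]]
```

Only the hair pixels survive, so downstream feature extraction operates on hair content alone, which is the point of step S204.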
S206: extract a shared hair style feature from the hair style image, the shared hair style feature being a feature shared by a plurality of different recognition tasks.
Here, a hair style feature is an image feature about the hair extracted from the hair style image, and the shared hair style feature is the hair style feature shared by the plurality of different recognition tasks. An image feature is a feature representing, for example, the color, texture, shape, or spatial relationships of an image. In this embodiment, the shared hair style feature may specifically be data extracted by the computer device from the hair style image that represents the color, length, shape, and so on of the hair; it can be regarded as a "non-image" representation or description of the hair style image, such as a numerical value, a vector, a matrix, or a symbol.
Specifically, the computer device may process the hair style image to extract the shared hair style feature that the plurality of different recognition tasks have in common. Because the extracted shared hair style feature is shared by the plurality of different recognition tasks, each of those tasks can be performed according to the same extracted feature. In one embodiment, the computer device may process the hair style image through a convolutional neural network to extract the shared hair style feature, learn how to recognize the image according to the extracted feature, and output the plurality of hair style attribute categories respectively obtained by performing the different recognition tasks.
In one embodiment, the image to be recognized may be input into a multitask convolutional neural network, and the shared hair style feature in the hair style image is extracted through the common layer structure of the multitask convolutional neural network. The common layer structure refers to the layers shared by all of the plurality of different recognition tasks, such as convolutional layers and sub-sampling layers.
A convolutional neural network (CNN) is a kind of artificial neural network. A convolutional neural network includes convolutional layers and sub-sampling layers (pooling layers).
A convolutional layer contains multiple feature maps, each feature map containing multiple neurons, and all neurons of the same feature map share one convolution kernel. The convolution kernel is the weight matrix of the corresponding neurons and represents one feature. Convolution kernels are generally initialized as matrices of small random values and learn reasonable values during network training. Convolutional layers reduce the connections between layers of the neural network while also reducing the risk of over-fitting. In this embodiment, there may be one convolutional layer or multiple convolutional layers.
Sub-sampling, also called pooling, usually takes two forms: mean pooling and max pooling. The pooling operation is an effective way of reducing dimensionality and can prevent over-fitting. Convolution and sub-sampling greatly simplify the complexity of the neural network and reduce its number of parameters.
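As an illustration of the pooling operation just described, the following sketch applies 2x2 max pooling with stride 2 to a small feature map. It is a plain-Python illustration of the operation, not the network's actual implementation:

```python
def max_pool_2x2(feature_map):
    """2x2 max pooling with stride 2: each output value is the maximum
    of a non-overlapping 2x2 window, halving each spatial dimension."""
    rows, cols = len(feature_map), len(feature_map[0])
    return [
        [max(feature_map[r][c], feature_map[r][c + 1],
             feature_map[r + 1][c], feature_map[r + 1][c + 1])
         for c in range(0, cols, 2)]
        for r in range(0, rows, 2)
    ]

fm = [[1, 3, 2, 4],
      [5, 6, 1, 0],
      [7, 2, 9, 8],
      [0, 1, 3, 4]]

pooled = max_pool_2x2(fm)  # -> [[6, 4], [7, 9]]
```

The 4x4 map shrinks to 2x2 while keeping the strongest activation of each window, which is the dimensionality reduction the text refers to.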
A multitask convolutional neural network is a convolutional neural network capable of multi-task learning. Its network structure differs slightly from that of a single-task convolutional neural network. A single-task convolutional neural network is an independent network with only one output for its input; when error back-propagation is applied to such networks, each network is trained individually, and since the networks have no connection to each other, a feature learned by one network cannot help the learning of another.
A multitask convolutional neural network, by contrast, can produce multiple outputs for one input, each output corresponding to one task. These outputs can be understood as being connected to all neurons of the hidden layers that they share; a feature learned in these hidden layers for one task can also be used by the other tasks, encouraging multiple tasks to learn jointly. In this way, a feature the network learns for one task can help the learning of the others.
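The division into shared hidden layers and per-task outputs can be sketched as follows. This is a structural illustration only: the feature extraction and the threshold-based task heads are made-up stand-ins for the network's learned common layer structure and independent layer structures.

```python
def shared_trunk(hair_style_image):
    """Stand-in for the common layer structure: maps the hair style
    image to one shared feature vector that every task consumes."""
    flat = [p for row in hair_style_image for p in row]
    return [sum(flat) / len(flat), max(flat), min(flat)]

def length_head(feature):
    # Stand-in independent layer: thresholds the mean intensity.
    return "long hair" if feature[0] > 50 else "short hair"

def color_head(feature):
    # Stand-in independent layer: thresholds the maximum intensity.
    return "black" if feature[1] < 128 else "blonde"

TASK_HEADS = {"length": length_head, "color": color_head}

def recognize(hair_style_image):
    feature = shared_trunk(hair_style_image)  # computed once, shared by all heads
    return {task: head(feature) for task, head in TASK_HEADS.items()}

result = recognize([[60, 70], [80, 90]])
# mean 75, max 90 -> {"length": "long hair", "color": "black"}
```

The key structural point matches the text: the trunk runs once per image, and every task head reads the same shared feature rather than recomputing its own.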
S208: perform the plurality of different recognition tasks according to the shared hair style feature.
Here, a recognition task is a task of recognizing a hair style attribute category. Specifically, the computer device performs the plurality of different recognition tasks according to the extracted shared hair style feature to obtain a hair style attribute category corresponding to each recognition task.
In one embodiment, the computer device extracts the shared hair style feature in the hair style image through the common layer structure of the multitask convolutional neural network, and then performs the plurality of different recognition tasks through the independent layer structures respectively corresponding to the different recognition tasks. An independent layer structure corresponding to a recognition task individually handles that task, for example a fully connected layer of the multitask convolutional neural network dedicated to one recognition task.
In the above embodiment, the shared hair style feature in the hair style image is extracted by the multitask convolutional neural network, and the plurality of different recognition tasks are performed according to the shared hair style feature to obtain the plurality of hair style attribute categories. Fine-grained hair style recognition can thus be carried out on the hair directly and simultaneously, further improving the efficiency of hair style recognition.
Fig. 3 shows the schematic network structure of a multitask convolutional neural network in one embodiment. The common layer structure may include multiple convolutional layers and pooling layers, and one or more fully connected layers. Each independent layer structure may include a fully connected layer that handles the corresponding recognition task. The independent layer structures corresponding to the different recognition tasks, for example task A, task B, task C, and task D, perform their respective tasks and output the hair style attribute category obtained by each.
In one embodiment, the multitask convolutional neural network is a model for classifying hair styles obtained by learning and training with hair style images and corresponding hair style attribute categories as training data. After obtaining the hair style image, the computer device inputs it into the multitask convolutional neural network, which extracts the shared hair style feature from the hair style image and performs the plurality of different recognition tasks according to the shared hair style feature to obtain the plurality of hair style attribute categories.
In one embodiment, the plurality of different recognition tasks includes at least two of the following: a recognition task for recognizing hair length; a recognition task for recognizing hair color; a recognition task for recognizing whether the hair is curled; and a recognition task for recognizing whether the hair has a designated shape.
The recognition task for recognizing hair length may specifically recognize a hair length category, which may include a long hair category, a medium hair category, a short hair category, an ultra-short hair category, a bald category, a tied-hair category, and the like. The recognition task for recognizing hair color may specifically recognize a hair color category, which may include a black category, a brown category, a blonde category, a gray-white category, a red category, and the like. The recognition task for recognizing whether the hair is curled may specifically recognize whether or not the hair is curled; in one embodiment it may also recognize the type of curl, such as a large-wave curl category or a lamb's-wool curl category. The recognition task for recognizing whether the hair has a designated shape may specifically recognize whether the hair has bangs; in one embodiment it may also recognize the type of bangs, such as an air-bangs category or a straight-bangs category.
S210: output the plurality of hair style attribute categories respectively obtained by performing the different recognition tasks.
Specifically, after performing the plurality of different recognition tasks according to the shared hair style feature, the computer device may output the plurality of hair style attribute categories respectively obtained by performing the different recognition tasks.
For example, for one image to be recognized, the computer device may output a hair length category, such as the long hair category, for the recognition task for recognizing hair length; a hair color category, such as the black category, for the recognition task for recognizing hair color; an attribute category indicating whether the hair is curled, such as curly, for the recognition task for recognizing whether the hair is curled; and an indication of whether the hair has the designated shape, such as the hair having bangs, for the recognition task for recognizing whether the hair has a designated shape.
In the above hair style recognition method, a hair region is segmented from the image to be recognized to obtain a hair style image comprising the pixels in the hair region, which effectively filters out interference from background information, allows subsequent processing to focus on the content of the hair region, and improves both the efficiency and the accuracy of hair style recognition. The shared hair style feature in the hair style image is then extracted, the plurality of different recognition tasks are performed according to the shared hair style feature, and the plurality of hair style attribute categories respectively obtained are output. By performing the plurality of different recognition tasks according to one shared hair style feature, the multiple attribute categories of the hair in the hair style image can be recognized directly and simultaneously, without comparing the image to be recognized one by one with all images in a preset image library, which greatly improves hair style recognition efficiency.
In one embodiment, step S202, the step of obtaining the image to be recognized, includes:
S402: obtain an input image.
Here, the input image is an initial image, which may specifically be an image captured directly by the camera of a terminal, or an image extracted by a terminal from a video file. For example, a monitoring device may capture a surveillance video, and the computer device may extract some video frames from the surveillance video as input images. Alternatively, a user may take a photograph with a mobile terminal, and the image captured by the camera serves as the input image.
Specifically, the computer device may obtain an input image captured locally, or another terminal may capture an image and send it to the computer device so that the computer device obtains the input image.
S404: perform face detection on the input image.
Specifically, the computer device may perform face detection on the obtained input image. Face detection refers to the process of scanning any given image according to a certain strategy to determine whether it contains a face and, if so, extracting information such as the position of the face.
In one embodiment, the computer device may detect a face image in the input image by template matching, skin color matching, an ANN (artificial neural network) model, an SVM (support vector machine) model, an Adaboost model, or the like. The basic idea of template matching is to search the image to be detected for a position that matches a face template; a face image is detected if such a position is found.
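The template-matching idea just mentioned, searching the image to be detected for the position that best matches a face template, can be sketched with a sum-of-squared-differences score. This is an illustrative toy on tiny grayscale grids, not a production face detector:

```python
def ssd(patch, template):
    """Sum of squared differences between a patch and the template."""
    return sum((p - t) ** 2
               for prow, trow in zip(patch, template)
               for p, t in zip(prow, trow))

def best_match(image, template):
    """Slide the template over the image and return the (row, col) of
    the top-left corner with the lowest SSD score."""
    th, tw = len(template), len(template[0])
    positions = [(r, c)
                 for r in range(len(image) - th + 1)
                 for c in range(len(image[0]) - tw + 1)]

    def score(pos):
        r, c = pos
        patch = [row[c:c + tw] for row in image[r:r + th]]
        return ssd(patch, template)

    return min(positions, key=score)

image = [[0, 0, 0, 0],
         [0, 9, 8, 0],
         [0, 7, 9, 0],
         [0, 0, 0, 0]]
template = [[9, 8],
            [7, 9]]

match = best_match(image, template)  # -> (1, 1), an exact match with SSD 0
```

In practice a detection threshold on the best score decides whether a face is present at all; here the template occurs exactly, so the minimum score is zero.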
S406: crop the image to be recognized from the input image according to the detected face image, the image to be recognized including the face image and the corresponding hair region.
Specifically, the image to be recognized includes the face image and the corresponding hair region. When the computer device detects a face image, it crops the image to be recognized from the input image according to the detected face image.
In one embodiment, the computer device may, according to the detected face image, crop from the input image an image to be recognized including the face image and the corresponding hair region at a preset ratio of the size of the face image. The preset ratio may specifically be the ratio of the size of a region including the face image and the hair region to the size of the face image.
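Cropping at a preset ratio of the face size amounts to simple box arithmetic: expand the detected face rectangle by the ratio, keep it centered horizontally, and bias it upward to take in the hair. The upward bias and the clamping to image bounds below are illustrative assumptions, not values specified by the patent:

```python
def crop_box(face, ratio, img_w, img_h):
    """Expand a face box (x, y, w, h) by `ratio` to cover face plus hair.

    The expanded box is centered horizontally on the face and shifted
    upward (hair sits above the face), then clamped to the image bounds.
    """
    x, y, w, h = face
    new_w, new_h = int(w * ratio), int(h * ratio)
    new_x = x - (new_w - w) // 2       # keep horizontal center
    new_y = y - (new_h - h) * 2 // 3   # assumed upward bias for hair
    new_x = max(0, min(new_x, img_w - new_w))
    new_y = max(0, min(new_y, img_h - new_h))
    return new_x, new_y, new_w, new_h

# A 100x100 face at (200, 150) in a 640x480 image, preset ratio 2.0.
box = crop_box((200, 150, 100, 100), 2.0, 640, 480)
# -> (150, 84, 200, 200): a 200x200 crop covering the face and the hair above it
```

The resulting crop feeds step S204, so the less background it carries, the less there is to mask later.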
In one embodiment, the computer device may, according to the detected face image, determine the extent of the image to be recognized from the connected region adjacent to the face image, and crop from the input image an image to be recognized including the face image and the corresponding hair region.
In the above embodiment, performing face detection on the input image and cropping from it an image to be recognized that includes the face image and the corresponding hair region effectively filters out other irrelevant background elements and improves the accuracy of hair style recognition.
In one embodiment, step S406, the step of cropping the image to be recognized from the input image according to the detected face image, specifically includes the following steps:
S502: recognize face feature points in the detected face image.
Here, face feature points are key points in the face image, for example points indicating the facial features. Specifically, recognizing the face feature points in the face image may involve locating key regions in the face image, including the eyebrows, eyes, nose, mouth, and face contour. The recognized face feature points can be used to determine the position, direction, and deviation of the face, the size of the facial features, and so on.
In one embodiment, the computer device may recognize the face feature points in the face image by a key point detection algorithm, such as the ASM (active shape model) algorithm, the AAM (active appearance model) algorithm, a cascaded shape regression method, or a deep learning based method.
S504: determine the face deviation according to the face feature points.
Here, the face deviation is the deviation between the face image and a reference direction. Specifically, the computer device may determine the deviation of the face in the face image according to the face feature points.
In one embodiment, the computer device may determine the frontal direction of the face in the face image according to the face feature points, and then determine the deviation angle and direction between the frontal direction of the face and the reference direction, thereby determining the face deviation. For example, after the computer device recognizes the face feature points and determines the positions of components such as the eyes, nose, and mouth in the face image, it can calculate the frontal direction of the face. In this embodiment, the frontal direction of the face is the direction of the line of symmetry in the face image.
S506: perform face correction on the input image based on the face deviation.
Specifically, the computer device may perform face correction on the input image according to the face deviation to obtain an image in which the face in the face image is aligned with the reference direction.
In one embodiment, the computer device may rotate the face image in the reverse direction according to the deviation angle and direction between the face and the reference direction, thereby performing face correction on the input image.
In one embodiment, the computer device may determine, according to the face feature points, the positions of the two eyes, the nose, and the corners of the mouth in the face image, and correct the input image according to these positions.
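One common concrete realization of the correction just described is to take the angle of the line through the two eye feature points as the face deviation and rotate the input image by its negative. The sketch below computes that angle; treating the eye line as the deviation is an assumed realization for illustration, not the patent's specified formula:

```python
import math

def face_deviation_degrees(left_eye, right_eye):
    """Angle (in degrees) of the line through the two eye feature
    points relative to the horizontal reference direction.
    Points are (x, y) in image coordinates (y grows downward)."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

# Eyes level: no deviation, no rotation needed.
assert face_deviation_degrees((100, 120), (160, 120)) == 0.0

# Right eye 60 px lower than the left: the face is tilted about
# 45 degrees, so the input image would be rotated by the negative
# of this angle to correct it.
angle = face_deviation_degrees((100, 120), (160, 180))
```

Once the angle is known, the actual rotation would typically be done with an affine warp about the midpoint between the eyes.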
S508: crop the image to be recognized from the image obtained after face correction.
Specifically, the computer device may crop, from the image obtained after face correction, an image to be recognized including the face image and the corresponding hair region.
In the above embodiment, the face feature points in the face image are recognized to determine the face deviation, face correction is performed on the input image based on the face deviation, and the image to be recognized is then cropped from the corrected image. This ensures that the face in the image to be recognized is upright, greatly improving the efficiency and accuracy of hair style recognition.
In one embodiment, step S204, i.e. segmenting the hair region from the image to be recognized to obtain a hair style image containing the pixels in the hair region, specifically includes the following steps:
S602: determine the hair region and the non-hair region in the image to be recognized.
Specifically, the computer device can analyze the image to be recognized to determine its hair region and non-hair region.
In one embodiment, the computer device can locate the face image in the image to be recognized with a face detection algorithm, then obtain the connected domains adjacent to the face image by methods such as edge detection. The connected domain sharing the longest boundary with the face image is taken as the hair region; the remaining regions form the non-hair region.
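The longest-shared-boundary heuristic might be sketched as follows (a minimal illustration assuming an integer label map already produced by a connected-component pass; the label values and function name are invented for the example):

```python
import numpy as np

def pick_hair_region(labels, face_label, candidates):
    """Among candidate connected domains, pick the one sharing the
    longest boundary with the face region.

    labels: (H, W) integer label map; face_label marks the face
    domain; candidates are the labels of the adjacent domains.
    """
    face = labels == face_label
    border = np.zeros_like(face)          # 4-neighbourhood of the face
    border[1:, :] |= face[:-1, :]
    border[:-1, :] |= face[1:, :]
    border[:, 1:] |= face[:, :-1]
    border[:, :-1] |= face[:, 1:]
    border &= ~face                       # keep only non-face neighbours
    return max(candidates, key=lambda c: int(np.sum(border & (labels == c))))

# Label 1 = face; domain 2 hugs the face along three boundary pixels,
# domain 0 touches it along one, so 2 is taken as the hair region.
labels = np.array([[2, 2, 2, 0],
                   [2, 1, 2, 0],
                   [1, 1, 0, 0]])
hair_label = pick_hair_region(labels, face_label=1, candidates=[0, 2])
```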
In one embodiment, the computer device can classify each pixel of the image to be recognized with a convolutional neural network to decide whether the pixel belongs to the hair class. The region composed of all pixels belonging to the hair class is taken as the hair region; the remaining regions form the non-hair region.
In one embodiment, the computer device can recognize the hair region, the background region and the human-part region in the image to be recognized.
Specifically, the computer device can classify each pixel of the image to be recognized with a convolutional neural network, determining the class each pixel belongs to, such as the hair class, face class, skin class and background class. All pixels of the hair class together form the hair region, all pixels of the background class together form the background region, and all pixels of the face and skin classes together form the human-part region.
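Under the assumption that the convolutional network outputs a per-pixel score for each class, the three regions can be separated by a per-pixel argmax, roughly as follows (the class indices and names are illustrative, not the patent's actual labels):

```python
import numpy as np

# Illustrative class indices: 0 = background, 1 = hair, 2 = face, 3 = skin.
def split_regions(scores):
    """scores: (H, W, 4) per-pixel class scores such as a segmentation
    CNN would output. Each pixel is assigned its highest-scoring class;
    hair pixels form the hair region, background pixels the background
    region, and face + skin pixels together the human-part region."""
    cls = scores.argmax(axis=-1)
    return cls == 1, cls == 0, (cls == 2) | (cls == 3)

# A 1x3 toy image: a hair pixel, a background pixel and a skin pixel.
scores = np.array([[[0.1, 0.8, 0.05, 0.05],
                    [0.7, 0.1, 0.1, 0.1],
                    [0.1, 0.1, 0.2, 0.6]]])
hair, background, human = split_regions(scores)
```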
S604: generate a mask image based on the hair region and the non-hair region.
Here the mask image is a dedicated image used to apply image masking to the image to be processed, and may also be called a template. In digital image processing, a mask image can be a two-dimensional matrix array or a multi-valued image. Specifically, the computer device can generate the corresponding mask image from the hair region and the non-hair region, for segmenting the hair region from the image to be processed to obtain the hair style image.
In one embodiment, the hair region is labeled with a first value, the background region and the human-part region are uniformly labeled with a second value, and a mask image containing the first value and the second value is obtained.
Specifically, the computer device can set the values of the hair-region pixels to the first value, e.g. 1, and uniformly set the values of the background-region and human-part-region pixels to the second value, e.g. 0, yielding a mask image containing the first and second values, i.e. a two-dimensional matrix of 0s and 1s.
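A minimal sketch of this mask construction, assuming a boolean hair-region map (the function name and default values are illustrative):

```python
import numpy as np

def make_mask(hair_region, first_value=1, second_value=0):
    """Build the two-valued mask image: hair pixels take the first
    value, all other (background and human-part) pixels uniformly
    take the second value. hair_region is a boolean (H, W) array."""
    return np.where(hair_region, first_value, second_value).astype(np.uint8)

hair_region = np.array([[True, False],
                        [False, True]])
mask = make_mask(hair_region)  # a 2-D matrix of 0s and 1s
```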
S606: apply image masking to the image to be recognized according to the mask image, obtaining a hair style image containing the pixels in the hair region.
Here image masking means using a selected image, figure or object to occlude the image to be processed, thereby controlling the region or process of image processing. Specifically, the computer device can apply image masking to the image to be recognized according to the mask image, occluding the non-hair region to obtain a hair style image containing the pixels in the hair region.
In one embodiment, an AND operation can be performed between the mask image and the image to be processed, retaining the pixels of the hair region in the image to be processed to obtain the hair style image. That is, through the image masking, the values of the hair-region pixels in the image to be processed are retained while the values of the pixels outside the hair region are set to zero, yielding the hair style image.
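The AND-style masking can be sketched as an element-wise multiply (assuming a 0/1 uint8 mask and an RGB image; the names are illustrative):

```python
import numpy as np

def apply_hair_mask(image, mask):
    """Multiplying by the 0/1 mask retains the values of hair-region
    pixels and sets every non-hair pixel to zero, which yields the
    hair style image described above.

    image: (H, W, 3) uint8; mask: (H, W) uint8 with 1 = hair, 0 = other.
    """
    return image * mask[..., None]

mask = np.array([[1, 0],
                 [0, 1]], dtype=np.uint8)
image = np.full((2, 2, 3), 200, dtype=np.uint8)
hair_style_image = apply_hair_mask(image, mask)  # non-hair pixels -> 0
```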
Refer to Fig. 7, which shows in one embodiment a comparative schematic of segmenting the hair region from the image to be recognized to obtain a hair style image containing the hair-region pixels. The left image in Fig. 7 shows an image to be recognized. The middle image shows the result of processing it to determine the hair region, background region and human-part region. The right image shows the hair region segmented out, yielding the hair style image containing the hair region.
In the above embodiment, a mask image is generated from the hair region and the non-hair region, and image masking is then applied to the image to be recognized according to the mask image. This effectively filters out interference from face, background and other pixels, further improving the efficiency and accuracy of hair style recognition.
In one embodiment, step S210, i.e. outputting the multiple hair style attribute classes respectively obtained by executing the different recognition tasks, specifically includes: displaying a crowd monitoring interface; displaying the image to be recognized in the crowd monitoring interface; and, in the crowd monitoring interface, displaying the multiple different hair style attribute classes in correspondence with the image to be recognized.
Specifically, when the image to be recognized is a monitoring image captured by a monitoring device, the computer device can display a crowd monitoring interface, display the image to be recognized in it, and display the multiple different hair style attribute classes in correspondence with the image to be recognized.
Refer to Fig. 8, which shows a schematic of the crowd monitoring interface in one embodiment. As shown in Fig. 8, the top of the crowd monitoring interface displays the time, the left side displays the image to be recognized, and the right side displays information about the person in the image to be recognized, such as group, gender, age, whether glasses are worn and whether a mask is worn, together with the multiple different hair style attribute classes (long hair, black, curly) obtained by executing the hair style recognition method above.
In the above embodiment, by displaying the crowd monitoring interface with the image to be recognized and the multiple different hair style attribute classes, the attribute classes obtained by recognition can be presented clearly for the image to be recognized.
In one embodiment, the recognition tasks are executed by a multi-task convolutional neural network, whose training steps include:
S902: obtain, for each of the multiple different recognition tasks, the corresponding hair style sample images and class labels.
Here a hair style sample image is a hair style image annotated with a class label, usable as training data during the model training of the multi-task convolutional neural network. Specifically, the computer device can obtain the hair style sample images and class labels corresponding to each of the multiple different recognition tasks.
For example, suppose the multiple different recognition tasks include a task for recognizing hair length and a task for recognizing hair color. A hair style image annotated with the long-hair class for the hair-length recognition task can serve as a hair style sample image, its class label being long hair; a hair style image annotated with the black class for the hair-color recognition task can serve as a hair style sample image, its class label being black.
S904: execute the multiple different recognition tasks on the hair style sample images through the multi-task convolutional neural network, obtaining intermediate recognition results.
Here an intermediate recognition result is the recognition result output by the multi-task convolutional neural network during training, after a hair style sample image is input to it.
In one embodiment, after the computer device inputs a hair style sample image into the multi-task convolutional neural network, the network determines the multiple hair style attribute classes of the sample image, and these attribute classes serve as the intermediate recognition results.
S906: adjust the model parameters of the multi-task convolutional neural network according to the differences between the intermediate recognition results and the class labels of the same recognition task, and continue training until the training stop condition is met.
Here the training stop condition is the condition for ending model training, for example reaching a preset number of iterations, or the classification performance of the multi-task convolutional neural network reaching a preset level after the model parameters are adjusted. Adjusting the model parameters of the multi-task convolutional neural network means updating those parameters.
Specifically, the computer device can compare the intermediate recognition result of a recognition task with its class label, and adjust the model parameters of the multi-task convolutional neural network in the direction that reduces the difference. If the training stop condition is not met after the adjustment, the device returns to step S906 and continues training, stopping when the condition is met.
In one embodiment, the computer device can separately adjust the model parameters of the independent layer structure corresponding to each recognition task according to the difference between that task's intermediate recognition result and class label, and can jointly adjust the model parameters of the common layer structure in the multi-task convolutional neural network according to the differences between the intermediate recognition results and class labels of all the recognition tasks.
In the above embodiment, the hair style sample images and class labels of each of the multiple different recognition tasks are obtained, and the multi-task convolutional neural network is trained on the hair style sample images by adjusting its model parameters. A multi-task convolutional neural network with high classification accuracy on hair style images can thus be trained quickly, improving training efficiency.
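A toy numpy sketch of these updates, under strong simplifying assumptions (a single shared linear layer with ReLU stands in for the common layer structure, and one linear classifier per task for the independent layer structures; this is not the patent's actual network):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

class MultiTaskSketch:
    """One shared layer feeding one classifier head per recognition
    task. Head gradients update only that head; the gradients of all
    tasks accumulate into the shared layer, mirroring the per-task
    and joint updates described above."""

    def __init__(self, d_in, d_shared, task_classes):
        self.Ws = rng.normal(0.0, 0.5, (d_in, d_shared))
        self.heads = [rng.normal(0.0, 0.5, (d_shared, c)) for c in task_classes]

    def forward(self, X):
        h = np.maximum(X @ self.Ws, 0.0)  # shared hair-style feature
        return h, [softmax(h @ W) for W in self.heads]

    def train_step(self, X, labels, lr=0.1):
        h, probs = self.forward(X)
        grad_h = np.zeros_like(h)
        for t, p in enumerate(probs):
            d = p.copy()
            d[np.arange(len(X)), labels[t]] -= 1.0  # softmax cross-entropy grad
            d /= len(X)
            grad_h += d @ self.heads[t].T           # flows into shared layer
            self.heads[t] -= lr * (h.T @ d)         # independent head update
        self.Ws -= lr * (X.T @ (grad_h * (h > 0)))  # joint shared-layer update

# Toy demonstration: task 0 = sign of x0, task 1 = sign of x1.
X = rng.normal(size=(64, 2))
labels = [(X[:, 0] > 0).astype(int), (X[:, 1] > 0).astype(int)]
net = MultiTaskSketch(2, 8, [2, 2])
_, p = net.forward(X)
loss_before = -sum(np.log(q[np.arange(64), y]).mean() for q, y in zip(p, labels))
for _ in range(200):
    net.train_step(X, labels)
_, p = net.forward(X)
loss_after = -sum(np.log(q[np.arange(64), y]).mean() for q, y in zip(p, labels))
```

The loss is the sum of the two tasks' cross-entropies, so each step moves the parameters in the direction that reduces the combined difference between intermediate results and class labels.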
As shown in Figure 10, in one specific embodiment the hair style recognition method includes the following steps:
S1002: obtain an input image.
S1004: perform face detection on the input image.
S1006: identify the facial feature points in the detected face image.
S1008: determine the face deviation from the facial feature points.
S1010: perform face correction on the input image based on the face deviation.
S1012: crop the image to be recognized from the image obtained after face correction; the image to be recognized contains the face image and the corresponding hair region.
S1014: recognize the hair region, background region and human-part region in the image to be recognized.
S1016: label the hair region with a first value, uniformly label the background region and the human-part region with a second value, and obtain a mask image containing the first and second values.
S1018: apply image masking to the image to be recognized according to the mask image, obtaining a hair style image containing the pixels in the hair region.
S1020: extract the shared hair style feature from the hair style image through the common layer structure of the multi-task convolutional neural network; the shared hair style feature is the feature shared by the multiple different recognition tasks.
S1022: execute the multiple different recognition tasks respectively through the independent layer structures of the multi-task convolutional neural network corresponding to the different recognition tasks.
S1024: display a crowd monitoring interface.
S1026: display the image to be recognized in the crowd monitoring interface.
S1028: in the crowd monitoring interface, display, in correspondence with the image to be recognized, the multiple hair style attribute classes respectively obtained by executing the different recognition tasks.
In the hair style recognition method above, the hair region is segmented from the image to be recognized to obtain a hair style image containing the hair-region pixels, which effectively filters out background interference and lets subsequent processing focus on the content of the hair region, improving the efficiency and accuracy of hair style recognition. The shared hair style feature is then extracted from the hair style image, the multiple different recognition tasks are executed respectively according to it, and the multiple hair style attribute classes respectively obtained are output. By executing the multiple different recognition tasks on the shared hair style feature, the multiple attribute classes of the hair in the hair style image are recognized directly and simultaneously, without comparing the image to be recognized against every image in a preset image library one by one, which substantially improves hair style recognition efficiency.
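The overall flow of steps S1002-S1028 can be wired together as below; every callable is a hypothetical stand-in for a stage described above, shown here with identity stubs only to make the data flow concrete:

```python
import numpy as np

def hair_style_pipeline(image, face_align, segment, extract_shared, heads):
    """Orchestration sketch: face_align stands in for S1004-S1012,
    segment for S1014-S1016, the mask multiply for S1018,
    extract_shared for S1020, and the per-task heads for S1022."""
    aligned = face_align(image)
    mask = segment(aligned)                      # 0/1 hair mask
    hair_image = aligned * mask[..., None]       # image masking
    feature = extract_shared(hair_image)         # shared hair-style feature
    return {name: head(feature) for name, head in heads.items()}

# Identity stubs just to show the shape of the data flow:
result = hair_style_pipeline(
    np.ones((4, 4, 3)),
    face_align=lambda img: img,
    segment=lambda img: np.ones(img.shape[:2], dtype=np.uint8),
    extract_shared=lambda img: img.mean(),
    heads={"length": lambda f: "long hair",
           "color": lambda f: "black"},
)
```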
Figure 10 is a flow diagram of the hair style recognition method in one embodiment. It should be understood that although the steps in the flowchart of Figure 10 are displayed in the order indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, the execution of these steps is not strictly ordered and they may be executed in other orders. Moreover, at least some of the steps in Figure 10 may comprise multiple sub-steps or stages that need not be completed at the same moment but may be executed at different times; their execution order likewise need not be sequential, and they may be executed in turn or alternately with other steps or with sub-steps or stages of other steps.
In a concrete application scenario, the hair style recognition method above can be applied in monitoring scenes such as smart retail and intelligent security. By executing the method on monitoring images captured by monitoring devices, the hair features of people in the monitored field can be analyzed, facilitating statistical analysis of personal characteristics.
As shown in Figure 11, in one embodiment a hair style recognition apparatus 1100 is provided, comprising: an obtaining module 1101, a segmentation module 1102, an extraction module 1103, an execution module 1104 and an output module 1105.
The obtaining module 1101 is used to obtain the image to be recognized.
The segmentation module 1102 is used to segment the hair region from the image to be recognized, obtaining a hair style image containing the pixels in the hair region.
The extraction module 1103 is used to extract the shared hair style feature from the hair style image; the shared hair style feature is the feature shared by the multiple different recognition tasks.
The execution module 1104 is used to execute the multiple different recognition tasks respectively according to the shared hair style feature.
The output module 1105 is used to output the multiple hair style attribute classes respectively obtained by executing the different recognition tasks.
In one embodiment, the obtaining module 1101 is further used to: obtain an input image; perform face detection on the input image; and crop the image to be recognized from the input image according to the detected face image, the image to be recognized containing the face image and the corresponding hair region.
In one embodiment, the obtaining module 1101 is further used to: identify the facial feature points in the detected face image; determine the face deviation from the facial feature points; perform face correction on the input image based on the face deviation; and crop the image to be recognized from the image obtained after face correction.
As shown in Figure 12, in one embodiment the segmentation module 1102 comprises a determination module 11021, a generation module 11022 and an image masking module 11023:
The determination module 11021 is used to determine the hair region and the non-hair region in the image to be recognized.
The generation module 11022 is used to generate a mask image based on the hair region and the non-hair region.
The image masking module 11023 is used to apply image masking to the image to be recognized according to the mask image, obtaining a hair style image containing the pixels in the hair region.
In one embodiment, the determination module 11021 is further used to recognize the hair region, background region and human-part region in the image to be recognized. The generation module 11022 is further used to label the hair region with a first value, uniformly label the background region and the human-part region with a second value, and obtain a mask image containing the first and second values.
In one embodiment, the extraction module 1103 is further used to extract the shared hair style feature from the hair style image through the common layer structure of the multi-task convolutional neural network. The execution module 1104 is further used to execute the multiple different recognition tasks respectively through the independent layer structures of the multi-task convolutional neural network corresponding to the different recognition tasks.
In one embodiment, the multiple different recognition tasks include at least two of the following: a task for recognizing hair length; a task for recognizing hair color; a task for recognizing whether the hair is curled; and a task for recognizing whether the hair has a designated shape.
In one embodiment, the output module 1105 includes a display module 11051, which is used to display a crowd monitoring interface; display the image to be recognized in the crowd monitoring interface; and, in the crowd monitoring interface, display the multiple different hair style attribute classes in correspondence with the image to be recognized.
As shown in Figure 13, in one embodiment the recognition tasks are executed by a multi-task convolutional neural network, and the hair style recognition apparatus 1100 further includes an adjustment module 1106:
The obtaining module 1101 is further used to obtain, for each of the multiple different recognition tasks, the corresponding hair style sample images and class labels.
The execution module 1104 is further used to execute the multiple different recognition tasks on the hair style sample images through the multi-task convolutional neural network, obtaining intermediate recognition results.
The adjustment module 1106 is used to adjust the model parameters of the multi-task convolutional neural network according to the differences between the intermediate recognition results and the class labels of the same recognition task, and continue training until the training stop condition is met.
In the hair style recognition apparatus above, the hair region is segmented from the image to be recognized to obtain a hair style image containing the hair-region pixels, which effectively filters out background interference and lets subsequent processing focus on the content of the hair region, improving the efficiency and accuracy of hair style recognition. The shared hair style feature is then extracted from the hair style image, the multiple different recognition tasks are executed respectively according to it, and the multiple hair style attribute classes respectively obtained are output. By executing the multiple different recognition tasks on the shared hair style feature, the multiple attribute classes of the hair in the hair style image are recognized directly and simultaneously, without comparing the image to be recognized against every image in a preset image library one by one, which substantially improves hair style recognition efficiency.
Figure 14 shows the internal structure of a computer device in one embodiment. The computer device may specifically be the terminal 110 in Fig. 1. As shown in Figure 14, the computer device includes a processor, a memory, a network interface, an input device and a display screen connected by a system bus. The memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program which, when executed by the processor, causes the processor to implement the hair style recognition method. The internal memory may also store a computer program which, when executed by the processor, causes the processor to execute the hair style recognition method. The display screen of the computer device may be a liquid-crystal display or an electronic-ink display; the input device may be a touch layer covering the display screen, a key, trackball or trackpad provided on the housing of the computer device, or an external keyboard, trackpad, mouse or the like.
Figure 15 shows the internal structure of a computer device in one embodiment. The computer device may specifically be the server 120 in Fig. 1. As shown in Figure 15, the computer device includes a processor, a memory and a network interface connected by a system bus. The memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program which, when executed by the processor, causes the processor to implement the hair style recognition method. The internal memory may also store a computer program which, when executed by the processor, causes the processor to execute the hair style recognition method.
Those skilled in the art will understand that the structures shown in Figure 14 and Figure 15 are merely block diagrams of the partial structures related to the solution of this application and do not limit the computer device to which the solution is applied; a specific computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, the hair style recognition apparatus provided by this application can be implemented in the form of a computer program that runs on a computer device as shown in Figure 14 or Figure 15. The memory of the computer device can store the program modules composing the hair style recognition apparatus, for example the obtaining module, segmentation module, extraction module, execution module and output module shown in Figure 11. The computer program composed of these program modules causes the processor to execute the steps of the hair style recognition method of each embodiment of this application described in this specification.
For example, the computer device shown in Figure 14 or Figure 15 can execute step S202 through the obtaining module of the hair style recognition apparatus shown in Figure 11, step S204 through the segmentation module, step S206 through the extraction module, step S208 through the execution module, and step S210 through the output module.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the following steps: obtaining an image to be recognized; segmenting the hair region from the image to be recognized to obtain a hair style image containing the pixels in the hair region; extracting the shared hair style feature from the hair style image, the shared hair style feature being the feature shared by multiple different recognition tasks; executing the multiple different recognition tasks respectively according to the shared hair style feature; and outputting the multiple hair style attribute classes respectively obtained by executing the different recognition tasks.
In one embodiment, when executing the step of obtaining the image to be recognized, the computer program causes the processor to specifically perform the following steps: obtaining an input image; performing face detection on the input image; and cropping the image to be recognized from the input image according to the detected face image, the image to be recognized containing the face image and the corresponding hair region.
In one embodiment, when executing the step of cropping the image to be recognized from the input image according to the detected face image, the computer program causes the processor to specifically perform the following steps: identifying the facial feature points in the detected face image; determining the face deviation from the facial feature points; performing face correction on the input image based on the face deviation; and cropping the image to be recognized from the image obtained after face correction.
In one embodiment, when executing the step of segmenting the hair region from the image to be recognized to obtain a hair style image containing the pixels in the hair region, the computer program causes the processor to specifically perform the following steps: determining the hair region and the non-hair region in the image to be recognized; generating a mask image based on the hair region and the non-hair region; and applying image masking to the image to be recognized according to the mask image to obtain a hair style image containing the pixels in the hair region.
In one embodiment, when executing the step of determining the hair region and the non-hair region in the image to be recognized, the computer program causes the processor to specifically perform the following step: recognizing the hair region, background region and human-part region in the image to be recognized. When executing the step of generating a mask image based on the hair region and the non-hair region, the computer program causes the processor to specifically perform the following steps: labeling the hair region with a first value, uniformly labeling the background region and the human-part region with a second value, and obtaining a mask image containing the first and second values.
In one embodiment, when executing the step of extracting the shared hair style feature from the hair style image, the computer program causes the processor to specifically perform the following step: extracting the shared hair style feature from the hair style image through the common layer structure of the multi-task convolutional neural network. Executing the multiple different recognition tasks respectively according to the shared hair style feature includes: executing the multiple different recognition tasks respectively through the independent layer structures of the multi-task convolutional neural network corresponding to the different recognition tasks.
In one embodiment, the multiple different recognition tasks include at least two of the following: a task for recognizing hair length; a task for recognizing hair color; a task for recognizing whether the hair is curled; and a task for recognizing whether the hair has a designated shape.
In one embodiment, when executing the step of outputting the multiple hair style attribute classes respectively obtained by executing the different recognition tasks, the computer program causes the processor to specifically perform the following steps: displaying a crowd monitoring interface; displaying the image to be recognized in the crowd monitoring interface; and, in the crowd monitoring interface, displaying the multiple different hair style attribute classes in correspondence with the image to be recognized.
In one embodiment, the recognition tasks are executed by a multi-task convolutional neural network, and the computer program further causes the processor to perform the following steps: obtaining, for each of the multiple different recognition tasks, the corresponding hair style sample images and class labels; executing the multiple different recognition tasks on the hair style sample images through the multi-task convolutional neural network to obtain intermediate recognition results; and adjusting the model parameters of the multi-task convolutional neural network according to the differences between the intermediate recognition results and the class labels of the same recognition task, continuing training until the training stop condition is met.
In the computer device above, the hair region is segmented from the image to be recognized to obtain a hair style image containing the hair-region pixels, which effectively filters out background interference and lets subsequent processing focus on the content of the hair region, improving the efficiency and accuracy of hair style recognition. The shared hair style feature is then extracted from the hair style image, the multiple different recognition tasks are executed respectively according to it, and the multiple hair style attribute classes respectively obtained are output. By executing the multiple different recognition tasks on the shared hair style feature, the multiple attribute classes of the hair in the hair style image are recognized directly and simultaneously, without comparing the image to be recognized against every image in a preset image library one by one, which substantially improves hair style recognition efficiency.
A computer-readable storage medium is provided, storing a computer program which, when executed by a processor, implements the following steps: obtaining an image to be recognized; segmenting the hair region from the image to be recognized to obtain a hair style image containing the pixels in the hair region; extracting the shared hair style feature from the hair style image, the shared hair style feature being the feature shared by multiple different recognition tasks; executing the multiple different recognition tasks respectively according to the shared hair style feature; and outputting the multiple hair style attribute classes respectively obtained by executing the different recognition tasks.
In one embodiment, when executing the step of obtaining the image to be recognized, the computer program causes the processor to specifically perform the following steps: obtaining an input image; performing face detection on the input image; and cropping the image to be recognized from the input image according to the detected face image, the image to be recognized comprising the face image and the corresponding hair region.
In one embodiment, when executing the step of cropping the image to be recognized from the input image according to the detected face image, the computer program causes the processor to specifically perform the following steps: identifying facial feature points in the detected face image; determining the face deviation according to the facial feature points; performing face correction on the input image based on the face deviation; and cropping the image to be recognized from the image obtained after face correction.
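One common way to realize "determine the face deviation from feature points, then correct" is to measure the tilt of the line through the two eye landmarks and rotate by its negative. This is only a plausible sketch under that assumption, not the patent's stated method, and it uses plain Cartesian coordinates rather than the image convention in which y grows downward:

```python
import numpy as np

def face_tilt_degrees(left_eye, right_eye):
    # Deviation of the inter-eye line from horizontal; 0 means upright.
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return float(np.degrees(np.arctan2(dy, dx)))

def correction_matrix(angle_deg):
    # 2x2 matrix that rotates points by -angle, i.e. undoes the tilt.
    a = np.radians(-angle_deg)
    return np.array([[np.cos(a), -np.sin(a)],
                     [np.sin(a),  np.cos(a)]])
```

Applying `correction_matrix(face_tilt_degrees(...))` to the landmark coordinates levels the eyes, after which an axis-aligned crop contains an upright face.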
In one embodiment, when executing the step of segmenting the hair region from the image to be recognized to obtain the hairstyle image comprising the pixels in the hair region, the computer program causes the processor to specifically perform the following steps: determining the hair region and the non-hair region in the image to be recognized; generating a mask image based on the hair region and the non-hair region; and performing image-mask processing on the image to be recognized according to the mask image, to obtain the hairstyle image comprising the pixels in the hair region.
In one embodiment, when executing the step of determining the hair region and the non-hair region in the image to be recognized, the computer program causes the processor to specifically perform the following step: identifying the hair region, the background region, and the human-body region in the image to be recognized. When executing the step of generating the mask image based on the hair region and the non-hair region, the computer program causes the processor to specifically perform the following steps: marking the hair region with a first value and uniformly marking the background region and the human-body region with a second value, to obtain a mask image comprising the first value and the second value.
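A minimal sketch of the two marking steps, assuming hypothetical per-pixel class ids from an upstream segmentation step (0 = background, 1 = hair, 2 = human body) and taking the first and second values as 1 and 0:

```python
import numpy as np

FIRST_VALUE, SECOND_VALUE = 1, 0  # hair vs. everything else

def make_hair_mask(label_map, hair_label=1):
    # label_map holds per-pixel class ids; the label numbering
    # (0=background, 1=hair, 2=human body) is an assumption.
    return np.where(label_map == hair_label,
                    FIRST_VALUE, SECOND_VALUE).astype(np.uint8)

def apply_mask(image, mask):
    # Image-mask processing: keep hair pixels, zero the rest.
    return image * mask[..., None]
```

Because background and body pixels collapse to the same second value, the product leaves only the hair-region pixels in the hairstyle image.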
In one embodiment, when executing the step of extracting the shared hairstyle feature from the hairstyle image, the computer program causes the processor to specifically perform the following step: extracting the shared hairstyle feature from the hairstyle image through the common-layer structure of a multitask convolutional neural network. Executing the multiple different recognition tasks respectively according to the shared hairstyle feature then comprises: executing the multiple different recognition tasks respectively through the independent-layer structures of the multitask convolutional neural network, each corresponding to one of the different recognition tasks.
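The common-layer / independent-layer split can be sketched as a toy model in plain NumPy. This is not the patent's network, merely an illustration in which a shared linear-plus-ReLU trunk feeds one independent linear head per task; all dimensions and task names are hypothetical:

```python
import numpy as np

class MultiTaskSketch:
    """Toy stand-in for the multitask CNN: one shared trunk feeding
    one independent linear head per recognition task."""

    def __init__(self, in_dim, feat_dim, task_classes, seed=0):
        rng = np.random.default_rng(seed)
        self.w_shared = rng.normal(size=(in_dim, feat_dim)) * 0.1
        self.heads = {task: rng.normal(size=(feat_dim, n)) * 0.1
                      for task, n in task_classes.items()}

    def forward(self, x):
        shared = np.maximum(x @ self.w_shared, 0.0)  # common-layer structure
        # Each independent-layer structure consumes the same shared feature.
        return {task: shared @ w for task, w in self.heads.items()}
```

One forward pass through the trunk therefore yields all task outputs at once, which is why the tasks can run "directly and simultaneously" rather than one comparison at a time.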
In one embodiment, the multiple different recognition tasks comprise at least two of the following tasks: a recognition task for identifying hair length; a recognition task for identifying hair color; a recognition task for identifying whether the hair is curled; and a recognition task for identifying whether the hair has a designated shape.
In one embodiment, when executing the step of outputting the multiple hairstyle attribute categories respectively obtained by the different recognition tasks, the computer program causes the processor to specifically perform the following steps: displaying a crowd monitoring interface; displaying the image to be recognized in the crowd monitoring interface; and displaying, in the crowd monitoring interface and in correspondence with the image to be recognized, the multiple different hairstyle attribute categories.
In one embodiment, the recognition tasks are executed by a multitask convolutional neural network, and the computer program further causes the processor to perform the following steps: obtaining hairstyle sample images and classification labels respectively corresponding to the multiple different recognition tasks; executing the multiple different recognition tasks according to the hairstyle sample images through the multitask convolutional neural network to obtain intermediate recognition results; and adjusting the model parameters of the multitask convolutional neural network according to the differences between the intermediate recognition results and the classification labels of the same recognition task, and continuing training until the training stop condition is met.
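The training step can be illustrated with a self-contained NumPy sketch in which per-task softmax cross-entropy plays the role of the "difference between intermediate recognition result and classification label", and a gradient step plays the role of adjusting the model parameters. The tasks, class counts, and learning rate are all hypothetical, and only the per-task heads are updated here for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)
FEAT = 16
# Hypothetical tasks and class counts; one linear head per task.
heads = {"length": rng.normal(size=(FEAT, 3)) * 0.1,
         "color":  rng.normal(size=(FEAT, 5)) * 0.1}

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def train_step(feat, labels, lr=0.1):
    """One gradient step on every head from a single shared feature."""
    total_loss = 0.0
    for task, w in heads.items():
        p = softmax(feat @ w)              # intermediate recognition result
        y = labels[task]                   # classification label for this task
        total_loss += -np.log(p[y] + 1e-12)
        grad = np.outer(feat, p)           # d(loss)/dW for softmax + CE
        grad[:, y] -= feat
        heads[task] = w - lr * grad
    return float(total_loss)
```

Repeating the step drives the summed loss down, mirroring "adjust the parameters and continue training until the stop condition is met" (here the stop condition would simply be a loss or iteration threshold).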
With the computer-readable storage medium described above, the hair region is segmented from the image to be recognized to obtain a hairstyle image containing only the pixels in the hair region. This effectively filters out interference from background information, so that subsequent processing can focus on the content of the hair region, improving both the efficiency and the accuracy of hairstyle recognition. A shared hairstyle feature is then extracted from the hairstyle image, multiple different recognition tasks are executed based on the shared feature, and the multiple hairstyle attribute categories obtained by the different recognition tasks are output. By executing multiple different recognition tasks from one shared hairstyle feature, the multiple attribute categories of the hair in the hairstyle image can be identified directly and simultaneously, without comparing the image to be recognized against every image in a preset image library one by one, which greatly improves hairstyle recognition efficiency.
Those of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing the relevant hardware. The program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database, or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of these technical features have been described; however, as long as a combination of technical features contains no contradiction, it should be considered to fall within the scope of this specification.
The above embodiments express only several implementations of this application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of this application's patent. It should be noted that those of ordinary skill in the art can make various modifications and improvements without departing from the concept of this application, and these all belong to the protection scope of this application. Therefore, the protection scope of this application's patent shall be subject to the appended claims.
Claims (15)
1. A hairstyle recognition method, comprising:
obtaining an image to be recognized;
segmenting a hair region from the image to be recognized to obtain a hairstyle image comprising the pixels in the hair region;
extracting a shared hairstyle feature from the hairstyle image, the shared hairstyle feature being a feature shared by multiple different recognition tasks;
executing the multiple different recognition tasks respectively according to the shared hairstyle feature; and
outputting the multiple hairstyle attribute categories respectively obtained by the different recognition tasks.
2. The method according to claim 1, wherein obtaining the image to be recognized comprises:
obtaining an input image;
performing face detection on the input image; and
cropping the image to be recognized from the input image according to the detected face image, the image to be recognized comprising the face image and the corresponding hair region.
3. The method according to claim 2, wherein cropping the image to be recognized from the input image according to the detected face image comprises:
identifying facial feature points in the detected face image;
determining a face deviation according to the facial feature points;
performing face correction on the input image based on the face deviation; and
cropping the image to be recognized from the image obtained after the face correction.
4. The method according to claim 1, wherein segmenting the hair region from the image to be recognized to obtain the hairstyle image comprising the pixels in the hair region comprises:
determining the hair region and a non-hair region in the image to be recognized;
generating a mask image based on the hair region and the non-hair region; and
performing image-mask processing on the image to be recognized according to the mask image, to obtain the hairstyle image comprising the pixels in the hair region.
5. The method according to claim 4, wherein determining the hair region and the non-hair region in the image to be recognized comprises:
identifying the hair region, a background region, and a human-body region in the image to be recognized;
and wherein generating the mask image based on the hair region and the non-hair region comprises:
marking the hair region with a first value and uniformly marking the background region and the human-body region with a second value, to obtain a mask image comprising the first value and the second value.
6. The method according to claim 1, wherein extracting the shared hairstyle feature from the hairstyle image comprises:
extracting the shared hairstyle feature from the hairstyle image through a common-layer structure of a multitask convolutional neural network;
and wherein executing the multiple different recognition tasks respectively according to the shared hairstyle feature comprises:
executing the multiple different recognition tasks respectively through independent-layer structures of the multitask convolutional neural network, each corresponding to one of the different recognition tasks.
7. The method according to claim 1, wherein the multiple different recognition tasks comprise at least two of the following tasks:
a recognition task for identifying hair length;
a recognition task for identifying hair color;
a recognition task for identifying whether the hair is curled; and
a recognition task for identifying whether the hair has a designated shape.
8. The method according to claim 1, wherein outputting the multiple hairstyle attribute categories respectively obtained by the different recognition tasks comprises:
displaying a crowd monitoring interface;
displaying the image to be recognized in the crowd monitoring interface; and
displaying, in the crowd monitoring interface and in correspondence with the image to be recognized, the multiple different hairstyle attribute categories.
9. The method according to any one of claims 1 to 8, wherein the recognition tasks are executed by a multitask convolutional neural network, and the training steps of the multitask convolutional neural network comprise:
obtaining hairstyle sample images and classification labels respectively corresponding to the multiple different recognition tasks;
executing the multiple different recognition tasks according to the hairstyle sample images through the multitask convolutional neural network to obtain intermediate recognition results; and
adjusting the model parameters of the multitask convolutional neural network according to the differences between the intermediate recognition results and the classification labels of the same recognition task, and continuing training until a training stop condition is met.
10. A hairstyle recognition device, comprising:
an obtaining module, configured to obtain an image to be recognized;
a segmentation module, configured to segment a hair region from the image to be recognized to obtain a hairstyle image comprising the pixels in the hair region;
an extraction module, configured to extract a shared hairstyle feature from the hairstyle image, the shared hairstyle feature being a feature shared by multiple different recognition tasks;
an execution module, configured to execute the multiple different recognition tasks respectively according to the shared hairstyle feature; and
an output module, configured to output the multiple hairstyle attribute categories respectively obtained by the different recognition tasks.
11. The device according to claim 10, wherein the obtaining module is further configured to: obtain an input image; perform face detection on the input image; and crop the image to be recognized from the input image according to the detected face image, the image to be recognized comprising the face image and the corresponding hair region.
12. The device according to claim 10, wherein the segmentation module comprises:
a determining module, configured to determine the hair region and a non-hair region in the image to be recognized;
a generation module, configured to generate a mask image based on the hair region and the non-hair region; and
an image-mask processing module, configured to perform image-mask processing on the image to be recognized according to the mask image, to obtain the hairstyle image comprising the pixels in the hair region.
13. The device according to claim 10, wherein the output module comprises:
a display module, configured to display a crowd monitoring interface, display the image to be recognized in the crowd monitoring interface, and display, in the crowd monitoring interface and in correspondence with the image to be recognized, the multiple different hairstyle attribute categories.
14. A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of the method according to any one of claims 1 to 9.
15. A computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the method according to any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810758353.6A CN108960167B (en) | 2018-07-11 | 2018-07-11 | Hairstyle identification method, device, computer readable storage medium and computer equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810758353.6A CN108960167B (en) | 2018-07-11 | 2018-07-11 | Hairstyle identification method, device, computer readable storage medium and computer equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108960167A true CN108960167A (en) | 2018-12-07 |
CN108960167B CN108960167B (en) | 2023-08-18 |
Family
ID=64483003
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810758353.6A Active CN108960167B (en) | 2018-07-11 | 2018-07-11 | Hairstyle identification method, device, computer readable storage medium and computer equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108960167B (en) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109919668A (en) * | 2019-02-21 | 2019-06-21 | 杭州数为科技有限公司 | Intelligent customer-sentiment management method and system based on hairstyle recognition |
CN110135582A (en) * | 2019-05-09 | 2019-08-16 | 北京市商汤科技开发有限公司 | Neural network training, image processing method and device, and storage medium |
CN110728318A (en) * | 2019-10-09 | 2020-01-24 | 安徽萤瞳科技有限公司 | Hair color identification method based on deep learning |
CN111325173A (en) * | 2020-02-28 | 2020-06-23 | 腾讯科技(深圳)有限公司 | Hair type identification method and device, electronic equipment and storage medium |
CN111340043A (en) * | 2018-12-19 | 2020-06-26 | 北京京东尚科信息技术有限公司 | Key point detection method, system, device and storage medium |
CN111507994A (en) * | 2020-04-24 | 2020-08-07 | Oppo广东移动通信有限公司 | Portrait extraction method, portrait extraction device and mobile terminal |
CN112101479A (en) * | 2020-09-27 | 2020-12-18 | 杭州海康威视数字技术股份有限公司 | Hair style identification method and device |
JP2021051352A (en) * | 2019-09-20 | 2021-04-01 | ヤフー株式会社 | Media system, information provision method and program |
CN113255561A (en) * | 2021-06-10 | 2021-08-13 | 平安科技(深圳)有限公司 | Hair information identification method, device, equipment and storage medium |
CN116610922A (en) * | 2023-07-13 | 2023-08-18 | 浙江大学滨江研究院 | Non-invasive load identification method and system based on multi-strategy learning |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106203395A (en) * | 2016-07-26 | 2016-12-07 | 厦门大学 | Face attribute recognition method based on multitask deep learning |
CN106295620A (en) * | 2016-08-28 | 2017-01-04 | 乐视控股(北京)有限公司 | Hairstyle recognition method and hairstyle recognition device |
CN106503669A (en) * | 2016-11-02 | 2017-03-15 | 重庆中科云丛科技有限公司 | Training and recognition method and system based on a multitask deep learning network |
CN106780512A (en) * | 2016-11-30 | 2017-05-31 | 厦门美图之家科技有限公司 | Image segmentation method, application and computing device |
CN106815566A (en) * | 2016-12-29 | 2017-06-09 | 天津中科智能识别产业技术研究院有限公司 | Face retrieval method based on multitask convolutional neural networks |
2018-07-11: CN CN201810758353.6A patent/CN108960167B/en (Active)
Non-Patent Citations (3)
Title |
---|
HU HAN et al.: "Heterogeneous Face Attribute Estimation: A Deep Multi-Task Learning Approach", IEEE, https://ieeexplore.ieee.org/document/8007264 * |
YASER YACOOB et al.: "Detection and Analysis of Hair", IEEE Transactions on Pattern Analysis and Machine Intelligence * |
WANG Jiangtao et al.: "Research on Face Detection Based on Geometric Constraints", Journal of Engineering Graphics * |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111340043A (en) * | 2018-12-19 | 2020-06-26 | 北京京东尚科信息技术有限公司 | Key point detection method, system, device and storage medium |
CN109919668A (en) * | 2019-02-21 | 2019-06-21 | 杭州数为科技有限公司 | Intelligent customer-sentiment management method and system based on hairstyle recognition |
CN110135582A (en) * | 2019-05-09 | 2019-08-16 | 北京市商汤科技开发有限公司 | Neural network training, image processing method and device, and storage medium |
JP2021051352A (en) * | 2019-09-20 | 2021-04-01 | ヤフー株式会社 | Media system, information provision method and program |
CN110728318A (en) * | 2019-10-09 | 2020-01-24 | 安徽萤瞳科技有限公司 | Hair color identification method based on deep learning |
CN111325173A (en) * | 2020-02-28 | 2020-06-23 | 腾讯科技(深圳)有限公司 | Hair type identification method and device, electronic equipment and storage medium |
CN111507994B (en) * | 2020-04-24 | 2023-10-03 | Oppo广东移动通信有限公司 | Portrait extraction method, portrait extraction device and mobile terminal |
CN111507994A (en) * | 2020-04-24 | 2020-08-07 | Oppo广东移动通信有限公司 | Portrait extraction method, portrait extraction device and mobile terminal |
CN112101479A (en) * | 2020-09-27 | 2020-12-18 | 杭州海康威视数字技术股份有限公司 | Hair style identification method and device |
CN112101479B (en) * | 2020-09-27 | 2023-11-03 | 杭州海康威视数字技术股份有限公司 | Hair style identification method and device |
WO2022257456A1 (en) * | 2021-06-10 | 2022-12-15 | 平安科技(深圳)有限公司 | Hair information recognition method, apparatus and device, and storage medium |
CN113255561A (en) * | 2021-06-10 | 2021-08-13 | 平安科技(深圳)有限公司 | Hair information identification method, device, equipment and storage medium |
CN116610922A (en) * | 2023-07-13 | 2023-08-18 | 浙江大学滨江研究院 | Non-invasive load identification method and system based on multi-strategy learning |
Also Published As
Publication number | Publication date |
---|---|
CN108960167B (en) | 2023-08-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108960167A (en) | Hair style recognition methods, device, computer readable storage medium and computer equipment | |
Han et al. | Heterogeneous face attribute estimation: A deep multi-task learning approach | |
CN106503687B | Surveillance video person-identification system and method fusing multi-angle facial features | |
Liang et al. | Deep human parsing with active template regression | |
CN111597870B (en) | Human body attribute identification method based on attention mechanism and multi-task learning | |
Shi et al. | Image manipulation detection and localization based on the dual-domain convolutional neural networks | |
CN109344742A (en) | Characteristic point positioning method, device, storage medium and computer equipment | |
Yan et al. | Age estimation based on convolutional neural network | |
CN108090406A (en) | Face identification method and system | |
Sun et al. | A discriminatively deep fusion approach with improved conditional GAN (im-cGAN) for facial expression recognition | |
CN113205002B (en) | Low-definition face recognition method, device, equipment and medium for unlimited video monitoring | |
Ng et al. | Pedestrian gender classification using combined global and local parts-based convolutional neural networks | |
Chakma et al. | Improved face detection system | |
Kumar et al. | Periocular Region based Gender Identification using Transfer Learning | |
Zahid et al. | A Multi Stage Approach for Object and Face Detection using CNN | |
Moran | Classifying emotion using convolutional neural networks | |
de Souza et al. | Efficient width-extended convolutional neural network for robust face spoofing detection | |
Said et al. | Face Recognition System | |
König | Deep learning for person detection in multi-spectral videos | |
Bikku et al. | Deep Residual Learning for Unmasking DeepFake | |
Yavuzkiliç et al. | DeepFake face video detection using hybrid deep residual networks and LSTM architecture | |
Borza et al. | All-in-one “HairNet”: A Deep Neural Model for Joint Hair Segmentation and Characterization | |
Lefebvre et al. | Supervised image classification by SOM activity map comparison | |
Dixit et al. | Comparative Study on Image Detection using Variants of CNN and YOLO | |
Venkateswarlu et al. | AI-based Gender Identification using Facial Features |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |