CN109145877A - Image classification method, device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN109145877A
Authority
CN
China
Prior art keywords
image
classification
feature
facial image
face value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811151621.4A
Other languages
Chinese (zh)
Inventor
李宣平
李岩
吴丽军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN201811151621.4A
Publication of CN109145877A
Current legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to an image classification method, an image classification device, an electronic device, and a storage medium. The method comprises: obtaining a facial image to be identified; inputting the facial image into a face value judgment model, which performs face value (facial attractiveness) classification on the facial image; and obtaining the face value classification data of the facial image output by the face value judgment model. During face value judgment, the face value judgment model extracts feature sets of the facial image along different dimensions and then judges the user's face value from those feature sets. Multi-dimensional feature extraction from the same facial image makes training of the face value judgment model faster, and it lets the model learn facial features of different dimensions, so that when the multi-dimensional features are combined to judge the face value of the facial image there is more evidence to rely on. The face value judgment model is therefore robust and classifies with high accuracy.

Description

Image classification method, device, electronic equipment and storage medium
Technical field
The present disclosure relates to the field of image processing, and in particular to an image classification method, an image classification device, an electronic device, and a storage medium.
Background
With the development of deep learning, convolutional neural networks have become a powerful tool for extracting facial features. For a convolutional neural network whose architecture is fixed, the most critical technique is how to design the loss function so that it can effectively supervise the training of the convolutional neural network and give the network the ability to extract key point features from facial images.
In the field of face key point detection in the related art, traditional methods include shape-constraint-based methods and cascaded-regression-based methods. For example, a classical model, the active shape model, obtains the statistical distribution of feature points from training image samples and derives the directions in which the feature points are allowed to vary, thereby locating the corresponding feature points on a target image.
Summary of the invention
The inventors of the present disclosure found in research that the model training methods in the related art lead to long neural network training times, poor model robustness, and low classification accuracy.
The embodiments of the present disclosure provide a facial image key point detection method, device, computer equipment, and storage medium in which convergence is improved through a multi-channel convolutional neural network model.
According to a first aspect of the embodiments of the present disclosure, an image classification method is provided, comprising:
obtaining a facial image to be identified;
inputting the facial image into a face value judgment model, which performs face value classification on the facial image, wherein the face value judgment model extracts feature sets of at least two different image dimensions of the facial image and performs face value classification on the facial image according to the at least two feature sets;
obtaining the face value classification data of the facial image output by the face value judgment model.
Optionally, the face value judgment model comprises a first feature channel, a second feature channel, and a classification channel, and inputting the facial image into the face value judgment model and performing face value classification on the facial image comprises:
inputting the first dimension image of the facial image into the first feature channel and obtaining the first feature set output by the first feature channel;
inputting the second dimension image of the facial image into the second feature channel and obtaining the second feature set output by the second feature channel;
inputting the first feature set and the second feature set into the classification channel for face value classification.
Optionally, the first dimension image is the original image of the facial image, the first feature channel is a convolutional neural network channel, and inputting the first dimension image of the facial image into the first feature channel and obtaining the first feature set output by the first feature channel comprises:
inputting the original image of the facial image into the convolutional neural network channel;
obtaining the first feature set output by the convolutional neural network channel.
Optionally, the second dimension image is the face key point image of the facial image, the second feature channel is a deep neural network channel, and inputting the second dimension image of the facial image into the second feature channel and obtaining the second feature set output by the second feature channel comprises:
inputting the face key point image into the deep neural network channel;
obtaining the second feature set output by the deep neural network channel.
Optionally, the first feature set comprises a first weight parameter and first feature information, the second feature set comprises a second weight parameter and second feature information, and inputting the first feature set and the second feature set into the classification channel for face value classification comprises:
calculating a first product of the first weight parameter and the first feature information and a second product of the second weight parameter and the second feature information;
calculating a feature weighted sum of the first feature set and the second feature set from the first product and the second product;
inputting the feature weighted sum into the classification channel for face value classification.
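The two products and the weighted sum in the steps above can be sketched as follows. This is a minimal illustration under assumed toy values; the `fuse` helper, the feature vectors, and the weights are all hypothetical stand-ins, not the patented implementation.

```python
def fuse(w1, f1, w2, f2):
    """Weighted fusion of two channels' feature sets: each feature vector is
    scaled by its channel's weight parameter, then the products are summed."""
    if len(f1) != len(f2):
        raise ValueError("both channels must output features of the same dimension")
    first_product = [w1 * x for x in f1]    # first weight parameter * first feature information
    second_product = [w2 * x for x in f2]   # second weight parameter * second feature information
    return [a + b for a, b in zip(first_product, second_product)]

# Hypothetical 4-dimensional channel outputs: (weight parameter, feature information).
first_set = (0.7, [0.2, 0.4, 0.1, 0.3])
second_set = (0.3, [0.6, 0.0, 0.5, 0.1])

fused = fuse(first_set[0], first_set[1], second_set[0], second_set[1])
print(fused)  # the feature weighted sum passed to the classification channel
```

The fused vector keeps the dimension of the channel outputs, which is why the two fully connected layers described later must output features of the same dimension.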
Optionally, after obtaining the facial image to be identified, the method further comprises:
inputting the facial image into a preset first neural network model;
obtaining the face key point image output by a convolutional layer of the first neural network model.
Optionally, the training method of the face value judgment model comprises:
obtaining training sample data annotated with classification judgment information, wherein the training sample data comprise facial images and face key point images derived from those facial images;
inputting the training sample data into a preset neural network model to obtain classification reference information for the training sample data;
comparing, for the different samples in the training sample data, whether the model's classification reference information is consistent with the annotated classification judgment information;
when the model's classification reference information is inconsistent with the classification judgment information, iteratively and cyclically updating the weights in the neural network model, stopping when the comparison result is consistent with the classification judgment information, and thereby obtaining the face value judgment model.
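The loop described above, in which the model's output is compared against the annotated labels and the weights are updated until the two agree, can be illustrated with a deliberately tiny stand-in model. The threshold classifier, the update step, and the sample data below are all hypothetical; the disclosure itself trains a neural network.

```python
# Toy annotated training data: (feature value, face value class label).
samples = [(0.9, 1), (0.8, 1), (0.2, 0), (0.1, 0)]

def predict(threshold, x):
    # Stand-in for the preset neural network's classification reference output.
    return 1 if x >= threshold else 0

threshold = 0.0  # the single "weight" of the stand-in model
for _ in range(1000):                            # iterative, cyclic updates
    mismatches = [s for s in samples if predict(threshold, s[0]) != s[1]]
    if not mismatches:                           # output consistent with the labels:
        break                                    # training ends, model obtained
    threshold += 0.05                            # update the weight and try again

print(threshold)  # a threshold that now separates the two classes
```

A real implementation would replace the threshold bump with gradient-based weight updates driven by a loss function, but the stopping criterion is the same: training ends once the model's predictions agree with the annotations.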
According to a second aspect of the embodiments of the present disclosure, an image classification device is provided, comprising:
an acquiring unit, configured to obtain a facial image to be identified;
a processing unit, configured to input the facial image into a face value judgment model and perform face value classification on the facial image, wherein the face value judgment model extracts feature sets of at least two different image dimensions of the facial image and performs face value classification on the facial image according to the at least two feature sets;
an execution unit, configured to obtain the face value classification data of the facial image output by the face value judgment model.
Optionally, the face value judgment model comprises a first feature channel, a second feature channel, and a classification channel, and the image classification device further comprises:
a first processing subunit, configured to input the first dimension image of the facial image into the first feature channel and obtain the first feature set output by the first feature channel;
a second processing subunit, configured to input the second dimension image of the facial image into the second feature channel and obtain the second feature set output by the second feature channel;
a first execution subunit, configured to input the first feature set and the second feature set into the classification channel for face value classification.
Optionally, the first dimension image is the original image of the facial image, the first feature channel is a convolutional neural network channel, and the image classification device further comprises:
a third processing subunit, configured to input the original image of the facial image into the convolutional neural network channel;
a second execution subunit, configured to obtain the first feature set output by the convolutional neural network channel.
Optionally, the second dimension image is the face key point image of the facial image, the second feature channel is a deep neural network channel, and the image classification device further comprises:
a fourth processing subunit, configured to input the face key point image into the deep neural network channel;
a third execution subunit, configured to obtain the second feature set output by the deep neural network channel.
Optionally, the first feature set comprises a first weight parameter and first feature information, the second feature set comprises a second weight parameter and second feature information, and the image classification device further comprises:
a first computing subunit, configured to calculate a first product of the first weight parameter and the first feature information and a second product of the second weight parameter and the second feature information;
a second computing subunit, configured to calculate a feature weighted sum of the first feature set and the second feature set from the first product and the second product;
a fourth execution subunit, configured to input the feature weighted sum into the classification channel for face value classification.
Optionally, the image classification device further comprises:
a fifth processing subunit, configured to input the facial image into a preset first neural network model;
a fifth execution subunit, configured to obtain the face key point image output by a convolutional layer of the first neural network model.
Optionally, the image classification device further comprises:
a first obtaining subunit, configured to obtain training sample data annotated with classification judgment information, wherein the training sample data comprise facial images and face key point images derived from those facial images;
a sixth processing subunit, configured to input the training sample data into a preset neural network model to obtain classification reference information for the training sample data;
a first comparison subunit, configured to compare, for the different samples in the training sample data, whether the model's classification reference information is consistent with the annotated classification judgment information;
a sixth execution subunit, configured to, when the model's classification reference information is inconsistent with the classification judgment information, iteratively and cyclically update the weights in the neural network model, stopping when the comparison result is consistent with the classification judgment information, and thereby obtain the face value judgment model.
According to a third aspect of the embodiments of the present disclosure, an electronic device is provided, comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to execute the image classification method described above.
According to a fourth aspect of the embodiments of the present disclosure, a non-transitory computer-readable storage medium is provided; when the instructions in the storage medium are executed by the processor of an electronic device, the electronic device is enabled to perform the image classification method described above.
According to a fifth aspect of the embodiments of the present disclosure, an application program is provided; when the application program is executed by the processor of an electronic device, the electronic device is enabled to perform the image classification method described above.
The beneficial effects of the embodiments of the present disclosure are as follows: during face value judgment, the face value judgment model extracts feature sets of the facial image along different dimensions and then judges the user's face value from those feature sets. Multi-dimensional feature extraction from the same facial image makes training of the face value judgment model faster; at the same time, it lets the model learn facial features of different dimensions, so that when the multi-dimensional features are combined to judge the face value of a facial image there is more evidence to rely on, the feature distribution is more reasonable, and the probability of random error is smaller. The face value judgment model is therefore more robust and classifies with higher accuracy.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present disclosure more clearly, the accompanying drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flow diagram of an image classification method according to an exemplary embodiment;
Fig. 2 is a schematic flow diagram of one embodiment of feature extraction and face value classification according to an exemplary embodiment;
Fig. 3 is a schematic flow diagram of feature extraction by the first feature channel according to an exemplary embodiment;
Fig. 4 is a schematic structural diagram of a face value judgment model according to an exemplary embodiment;
Fig. 5 is a schematic flow diagram of feature extraction by the second feature channel according to an exemplary embodiment;
Fig. 6 is a schematic flow diagram of classification by the classification channel according to an exemplary embodiment;
Fig. 7 is a schematic flow diagram of obtaining a face key point image according to an exemplary embodiment;
Fig. 8 is a schematic flow diagram of training the face value judgment model according to an exemplary embodiment;
Fig. 9 is a block diagram of an image classification device according to an exemplary embodiment;
Fig. 10 is a block diagram of an electronic device executing an image classification method according to an exemplary embodiment;
Fig. 11 is a block diagram of another electronic device executing an image classification method according to an exemplary embodiment.
Detailed description of the embodiments
Exemplary embodiments are described in detail here, with examples illustrated in the accompanying drawings. When the following description refers to the drawings, unless otherwise indicated, the same numbers in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all embodiments consistent with the present invention; on the contrary, they are merely examples of devices and methods consistent with some aspects of the invention as detailed in the appended claims.
Referring to Fig. 1, Fig. 1 is a schematic flow diagram of the image classification method of the present embodiment.
As shown in Fig. 1, an image classification method comprises:
S1100: obtaining a facial image to be identified.
Methods of obtaining a facial image include real-time capture and extraction from stored image/video data. Real-time capture is generally configured for real-time applications on intelligent terminals (mobile phones, tablet computers, and monitoring devices), such as face value scoring and similarity matching. Extraction from stored image/video data is generally configured for further processing of stored images and videos; an intelligent terminal can also be configured to apply it to historical photographs.
A facial image can be obtained from a photograph or extracted from a frame of video data.
S1200: inputting the facial image into a face value judgment model and performing face value classification on the facial image, wherein the face value judgment model extracts feature sets of at least two different image dimensions of the facial image and performs face value classification on the facial image according to the at least two feature sets.
The facial image is input into the face value judgment model for image recognition. Optionally, in some embodiments, the face value judgment model consists of a first feature channel, a second feature channel, and a classification channel. The first feature channel can be any one of a deep neural network channel, a convolutional neural network channel, a recurrent neural network channel, or a long short-term memory unit, and likewise for the second feature channel. The classification channel consists of a weighted-sum computing unit and a classification layer; in some embodiments the classification layer can be a fully connected layer.
In some embodiments, the face value judgment model consists of a first feature channel, a second feature channel, a third feature channel, and a classification channel. The model is not limited to this: in some embodiments, the number of channels composing the face value judgment model can be larger.
The facial image is input into the face value judgment model, and the different channels of the model each extract features of a different image dimension of the facial image. For example, the first feature channel extracts features of the original facial image, the second feature channel extracts the contour image of the face, and the third feature channel extracts the key point image of the face. The image dimensions the face value judgment model can extract are not limited to these; depending on the concrete application scenario, the face value judgment model can extract features from images of other dimensions of the facial image.
In some embodiments, the first feature channel of the face value judgment model extracts features of the original facial image and the second feature channel extracts features of the key point image of the face; after extraction, the weighted sum of the two channels' feature values is calculated to obtain the feature weighted sum of the facial image. This is not the only option: in some embodiments, after the first feature channel and the second feature channel have each extracted features of a different dimension of the facial image, the dispersion of the two channels' data is compared, and the feature data with the smaller dispersion is taken as the final feature data of the facial image.
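The two-channel arrangement described above can be sketched end to end with stub channels. The stubs below merely stand in for the CNN channel, the DNN channel over the key point image, and the classification layer; none of their internals are specified by the disclosure, and the pixel values are made up.

```python
def first_channel(original_image):
    # Stub for the CNN channel: returns (weight coefficient, normalized pixel features).
    return 0.6, [p / 255 for p in original_image]

def second_channel(keypoint_image):
    # Stub for the DNN channel over the face key point image.
    return 0.4, [p / 255 for p in keypoint_image]

def classify(fused):
    # Stub classification layer: the average fused activation decides the band.
    return "high" if sum(fused) / len(fused) > 0.5 else "low"

def face_value_model(original_image, keypoint_image):
    w1, f1 = first_channel(original_image)
    w2, f2 = second_channel(keypoint_image)
    fused = [w1 * a + w2 * b for a, b in zip(f1, f2)]  # weighted sum of the two channels
    return classify(fused)

print(face_value_model([200, 220, 180], [150, 255, 200]))  # prints "high"
```

The structure to notice is that both channels see the same face through different image dimensions and only the fused result reaches the classifier.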
S1300: obtaining the face value classification data of the facial image output by the face value judgment model.
When the facial image is input into a face value judgment model trained to convergence, the model outputs feature data for the facial image, and the face value score of the facial image is calculated from that feature data. For example, face value can be divided into four classes: high face value (100-80 points), fairly high face value (80-60 points), fairly low face value (60-40 points), and low face value (40-10 points). A facial image is input; if the normalized classification data give a confidence of 0.85 that the image belongs to the high face value class, the face value score is 0.85 * 100 = 85 points.
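The scoring rule in the example above, where a normalized confidence of 0.85 for the high face value class yields 85 points, can be written out as follows. The band names and top-of-band scores come from the example; the function itself is only an illustrative sketch.

```python
# Top score of each face value band, per the example's 100-80 / 80-60 / 60-40 / 40-10 split.
BAND_TOP = {"high": 100, "fairly high": 80, "fairly low": 60, "low": 40}

def face_value_score(band, confidence):
    """Score = normalized confidence of the winning band times the band's top score."""
    return confidence * BAND_TOP[band]

score = face_value_score("high", 0.85)
print(score)  # the example's confidence of 0.85 in the high band
```

Multiplying by the band's top score rather than its midpoint is simply what the worked example in the text does; other mappings from confidence to points would be equally consistent with the claim language.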
In the embodiments above, during face value judgment the face value judgment model extracts feature sets of the facial image along different dimensions and then judges the user's face value from those feature sets. Multi-dimensional feature extraction from the same facial image makes training of the face value judgment model faster; at the same time, it lets the model learn facial features of different dimensions, so that when the multi-dimensional features are combined to judge the face value of a facial image there is more evidence to rely on, the feature distribution is more reasonable, and the probability of random error is smaller. The face value judgment model is therefore more robust and classifies with higher accuracy.
In some embodiments, the face value judgment model comprises two channels: a first feature channel and a second feature channel. The face value judgment model classifies the facial image using the first feature set extracted by the first feature channel and the second feature set extracted by the second feature channel. Specifically, referring to Fig. 2, Fig. 2 is a schematic flow diagram of one embodiment of feature extraction and face value classification in the present embodiment.
As shown in Fig. 2, step S1200 further comprises:
S1210: inputting the first dimension image of the facial image into the first feature channel and obtaining the first feature set output by the first feature channel.
In the present embodiment, the first feature channel can be any one of a deep neural network channel, a convolutional neural network channel, a recurrent neural network channel, or a long short-term memory unit, and it extracts any one of the features of the original facial image, the contour image of the face, or the key point image of the face. The image dimensions the first feature channel can extract are not limited to these; depending on the concrete application scenario, the face value judgment model can extract features from images of other dimensions of the facial image.
In some embodiments, the first feature channel is a convolutional neural network channel and the first dimension image is the original image of the facial image. Referring specifically to Fig. 3, Fig. 3 is a schematic flow diagram of feature extraction by the first feature channel in this implementation.
As shown in Fig. 3, step S1210 further comprises:
S1211: inputting the original image of the facial image into the convolutional neural network channel.
In the present embodiment, the first feature channel is a convolutional neural network (CNN) channel and the first dimension image is the original image of the facial image; the original image is input into the convolutional neural network channel for feature extraction. Referring to Fig. 4, Fig. 4 is a schematic structural diagram of the face value judgment model of the present embodiment. As shown in Fig. 4, the output end of the convolutional neural network channel is a fully connected layer, defined as the first fully connected layer. The convolutional neural network model extracts the features of the original facial image, and the first fully connected layer performs a convolution operation on the input features and outputs a 1x1xN feature together with a corresponding weight coefficient; this weight coefficient and this feature are defined as the first weight parameter and the first feature information respectively.
In some embodiments, the first fully connected layer outputs a set of 1x1x4096 features, but the feature dimension output by the first fully connected layer is not limited to 4096; depending on the concrete application scenario, it can be any integer value.
S1212: obtaining the first feature set output by the convolutional neural network channel.
The fully connected layer performs a convolution operation on the input features and outputs a 1x1xN feature and a weight coefficient. The weight coefficient represents the importance of the feature: the larger the coefficient, the more important the feature. The feature and the weight coefficient output by the first fully connected layer together constitute the first feature set.
S1220: inputting the second dimension image of the facial image into the second feature channel and obtaining the second feature set output by the second feature channel.
In the present embodiment, the second feature channel can be any one of a deep neural network channel, a convolutional neural network channel, a recurrent neural network channel, or a long short-term memory unit, and it extracts any one of the features of the original facial image, the contour image of the face, or the key point image of the face. The image dimensions the second feature channel can extract are not limited to these; depending on the concrete application scenario, the face value judgment model can extract features from images of other dimensions of the facial image.
In some embodiments, the second feature channel is a deep neural network (DNN) channel and the second dimension image is the face key point image of the facial image. Referring specifically to Fig. 5, Fig. 5 is a schematic flow diagram of feature extraction by the second feature channel in the present embodiment.
As shown in figure 5, step S1220 includes:
S1221, the original image of the face key point image is input in the deep neural network channel;
The acquisition modes of face key point image are as follows: facial image is input to and is trained in advance to convergent nerve net Network model, the neural network model in present embodiment can be CNN convolutional neural networks model, VGG convolutional neural networks mould In type or insightface human face recognition model, and obtain the characteristic image of above-mentioned model convolutional layer output, this feature image As face key point image.
In present embodiment, second feature channel is deep neural network channel, and the second dimension image is the face figure Face key point image is input to deep neural network channel and carries out feature extraction by the face key point image of picture.Such as Fig. 4 institute Show, the output end in deep neural network channel is a full articulamentum, is defined as the second full articulamentum, deep neural network model The feature of facial image original image is extracted, the second full articulamentum exports a 1x1xN feature and feature is one corresponding The set of weight coefficient, above-mentioned weight coefficient and feature is respectively defined as the second weight parameter and second feature information.
In some embodiments, the second fully connected layer outputs a set of 1×1×4096 features, but the feature dimension it outputs is not limited to 4096; depending on the specific application scenario, the feature dimension output by the second fully connected layer may be any integer value.
It should be noted that, in this embodiment, the feature dimensions output by the first fully connected layer and the second fully connected layer are the same.
S1222: obtaining the second feature set output by the deep neural network channel.
The second fully connected layer performs a convolution operation on the input features and outputs a 1×1×N feature and one weight coefficient. The weight coefficient represents the importance of the feature; the larger the coefficient, the more important the feature. The features and weight coefficients output by the second fully connected layer together constitute the second feature set.
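As a non-limiting sketch of the fully connected head just described, the following NumPy fragment produces an N-dimensional feature vector together with one scalar importance coefficient. The input size of 16, the dimension N = 8, and the random stand-in weights are illustrative assumptions, not values from the embodiment:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 8  # feature dimension (the embodiment's 1x1xN; 4096 in one variant)

def channel_head(x, W, b):
    """Hypothetical fully connected head of one feature channel: maps a
    flattened input to an N-dim feature vector plus one scalar weight
    coefficient expressing the feature set's importance."""
    out = W @ x + b              # affine layer, shape (N + 1,)
    feature = out[:N]            # the 1x1xN feature information
    weight = float(abs(out[N]))  # importance coefficient (kept non-negative)
    return feature, weight

x = rng.standard_normal(16)            # stand-in for a flattened key point image
W = rng.standard_normal((N + 1, 16)) * 0.1
b = np.zeros(N + 1)
feature, weight = channel_head(x, W, b)
print(feature.shape, weight >= 0)      # (8,) True
```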
S1230: inputting the first feature set and the second feature set into the classification channel for face value classification.
The first feature set and the second feature set are input into the classification channel. The classification channel includes a computing unit and a classification layer. In some embodiments, the classification layer may be a fully connected layer. In some embodiments, the computing unit is a feature dispersion calculation model that can calculate the data dispersion of the first feature set and the second feature set, and selects the feature set with the smaller data dispersion from the first and second feature sets for data classification.
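The dispersion-based selection performed by this computing unit can be sketched as follows. Variance is assumed here as the dispersion measure, since the embodiment does not fix the statistic, and the feature values are illustrative:

```python
import numpy as np

def select_less_dispersed(first_set, second_set):
    """Pick the feature set with the smaller data dispersion.
    Variance is used as the dispersion measure (an assumption;
    the embodiment does not name a specific statistic)."""
    return first_set if np.var(first_set) <= np.var(second_set) else second_set

first = np.array([0.51, 0.49, 0.50, 0.52])   # tightly clustered features
second = np.array([0.10, 0.90, 0.30, 0.70])  # widely scattered features
chosen = select_less_dispersed(first, second)
print(np.array_equal(chosen, first))  # True
```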
In some embodiments, the computing unit is a weighted sum computation model that can compute a weighted sum of the first feature set and the second feature set. The classification layer performs data classification on the weighted-sum result.
In some embodiments, the first feature set includes a first weight parameter and first feature information, and the second feature set includes a second weight parameter and second feature information. The classification channel consists of a weighted-sum computing unit and a classification layer, the classification layer being defined as the third classification layer. Referring to Fig. 6, Fig. 6 is a schematic flowchart of classification by the classification channel in this embodiment.
As shown in Fig. 6, step S1230 includes:
S1231: calculating a first product of the first weight parameter and the first feature information, and a second product of the second weight parameter and the second feature information;
During classification, the first product of the first weight parameter and the first feature information in the first feature set is computed, where the first product is the set of products of the N features with their corresponding first weight parameters. The second product of the second weight parameter and the second feature information in the second feature set is computed likewise, where the second product is also the set of products of the N features with their corresponding second weight parameters.
S1232: calculating a feature weighted sum of the first feature set and the second feature set according to the first product and the second product;
The feature weighted sum of the first feature set and the second feature set is obtained by summing the first product and the second product.
For example, x1 is a feature parameter in the first feature set, and α is the first weight parameter corresponding to x1; y1 is the feature parameter in the second feature set corresponding to x1, and β is the second weight parameter corresponding to y1; z is the weighted sum of the two features.
z = (α * x1 + β * y1)
The weighted sum of the N-dimensional features is calculated in the same way.
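The per-dimension weighted sum of steps S1231-S1232 can be illustrated in a few lines of NumPy. The weight parameters and the three-dimensional feature vectors below are arbitrary illustrative values, not taken from the embodiment:

```python
import numpy as np

alpha, beta = 0.6, 0.4          # first and second weight parameters (illustrative)
x = np.array([1.0, 2.0, 3.0])   # first feature information (N = 3 here)
y = np.array([4.0, 1.0, 0.0])   # second feature information

z = alpha * x + beta * y        # z_i = alpha * x_i + beta * y_i for every dimension
print(np.allclose(z, [2.2, 1.6, 1.8]))  # True
```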
S1233: inputting the feature weighted sum into the classification channel for face value classification.
The computed weighted sum of the N-dimensional features is input into the third fully connected layer for face value classification. The third fully connected layer is equivalent to computing classification scores for the N-dimensional weighted sum; the category with the highest score is the classification result of the face image.
For example, face value is divided into four classes: high face value, moderately high face value, moderately low face value, and low face value, where a high face value is 80-100 points, a moderately high face value is 60-80 points, a moderately low face value is 40-60 points, and a low face value is 10-40 points. When a face image is input and the normalized classification data gives a confidence of 0.85 that the image belongs to the high face value class, the face value score is 0.85 * 100 = 85 points.
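The score computation in this example can be written as a one-line mapping from the winning class's confidence to a score. The full score of 100 follows the example above; the rounding step is an implementation assumption added only to keep the output stable:

```python
def face_value_score(confidence, full_score=100):
    """Score = confidence of the winning class times the full score,
    mirroring the example's 0.85 * 100 = 85 points."""
    return round(confidence * full_score, 2)

print(face_value_score(0.85))  # 85.0
```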
By computing the weighted sum of the features of the original face image and of the face key point image, random errors of the face value classification system during feature extraction can be effectively avoided, making the face value judgment model more stable and effectively improving its output accuracy.
In some embodiments, after the face image is obtained, the face image needs to be preprocessed to obtain the face key point image. Referring to Fig. 7, Fig. 7 is a schematic flowchart of obtaining the face key point image in this embodiment.
As shown in Fig. 7, after step S1100 the method further includes:
S1110: inputting the face image into a preset first neural network model;
In this embodiment, the first neural network model may be (but is not limited to) any one of: a CNN convolutional neural network model, a VGG convolutional neural network model, or an insightface face recognition model.
The face image is input into the first neural network model trained in advance to convergence.
S1120: obtaining the face key point image output by a convolutional layer of the first neural network model.
The face image is input into the first neural network model trained in advance to convergence, and the face key point image output by the convolutional layer of the first neural network model is obtained. In this embodiment, the convolutional layer of the first neural network model refers to the feature image output by the last convolutional layer of the first neural network model.
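Although the embodiment obtains this feature image from a trained model, the operation of a single convolutional layer producing such a feature image can be sketched minimally. The 4×4 input and the edge-detecting kernel below are illustrative assumptions:

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Minimal 2-D valid cross-correlation, standing in for the last
    convolutional layer whose output feature image the embodiment
    treats as the face key point image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

img = np.array([[0., 0., 1., 1.],
                [0., 0., 1., 1.],
                [0., 0., 1., 1.],
                [0., 0., 1., 1.]])
kernel = np.array([[1., -1.],
                   [1., -1.]])           # responds to vertical edges
feature_image = conv2d_valid(img, kernel)
print(feature_image.shape)  # (3, 3)
```

The strong response at the column boundary shows how a convolutional feature image highlights structural key points of the input.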
Referring to Fig. 8, Fig. 8 is a schematic flowchart of training the face value judgment model in this embodiment.
As shown in Fig. 8, the face value judgment model is trained as follows:
S2100: obtaining training sample data marked with classification judgment information, wherein the training sample data include a face image and a face key point image derived from the face image;
Training sample data are the constituent units of the entire training set; the training set is composed of several pieces of training sample data. Each piece of training sample data includes: the original face image, the face key point image derived from the original face image, and the classification judgment information of the face image.
The classification judgment information refers to the manual judgment that a person makes on the training sample data according to the training objective of the face value judgment model, based on universal judgment criteria and the true state of the data; that is, it is the person's expectation target for the numerical output of the face value judgment model. For example, in a piece of training sample data, the face value score of the face image is manually calibrated, and this score is the expectation target for the classification data output by the face value judgment model. In some embodiments, the classification judgment information is calibrated by a neural network trained in advance to convergence for judging the face value of a face image. This neural network may be (but is not limited to) any one of a CNN face value judgment model, a VGG face value judgment model, or an insightface face recognition model.
S2200: inputting the training sample data into the preset neural network model to obtain classification reference information of the training sample data;
The training sample set is sequentially input into the face value judgment model, and each training sample enters the first feature channel and the second feature channel respectively. The first feature channel and the second feature channel perform feature extraction and classification on the original face image and the face key point image, respectively; the data output by the fully connected layer of the first feature channel and the fully connected layer of the second feature channel is the classification reference information.
The model classification reference information is the excitation data output by the face value judgment model according to the input face image. Before the face value judgment model has been trained to convergence, the classification reference information consists of highly discrete numerical values; once the face value judgment model has been trained to convergence, the classification reference information is relatively stable data.
S2300: comparing whether the model classification reference information of different samples in the training sample data is consistent with the classification judgment information;
A loss function is configured as the detection function that detects, in the face value judgment model, whether the model classification reference information is consistent with the expected classification judgment information. When the output result of the face value judgment model is inconsistent with the expected result of the classification judgment information, the weights in the face value judgment model need to be corrected so that the output result of the face value judgment model becomes identical to the expected result of the classification judgment information.
S2400: when the model classification reference information is inconsistent with the classification judgment information, iteratively and cyclically updating the weights in the neural network model until the comparison result is consistent with the classification judgment information, thereby obtaining the face value judgment model.
When the classification output of the face value judgment model is inconsistent with the expected result of the classification judgment information, the weights in the face value judgment model need to be corrected so that its output becomes identical to the expected result of the classification judgment information. When the classification data output by the first channel and the second channel is consistent with the preset classification judgment information, and the Euclidean distance between the training-sample features extracted by the first channel and the features output by the second channel reaches the standard, training on that training sample stops. During training, multiple training samples (for example, 1,000,000 face images) are used; through repeated training and correction, when the agreement between the classification data output by the face value judgment model and the classification reference information of each training sample reaches (but is not limited to) 99.9%, training ends.
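The loop-until-consistent training procedure of steps S2200-S2400 can be sketched with a toy linear classifier. The data, perceptron-style update rule and learning rate are illustrative assumptions; only the stop-when-agreement-reaches-threshold structure mirrors the embodiment:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data with a margin, so the correction loop terminates.
X = rng.standard_normal((300, 2))
X = X[np.abs(X[:, 0] + X[:, 1]) > 0.5]
labels = (X[:, 0] + X[:, 1] > 0).astype(int)  # "classification judgment information"

w = np.zeros(2)
agreement = 0.0
for epoch in range(200):
    preds = (X @ w > 0).astype(int)           # "classification reference information"
    agreement = float((preds == labels).mean())
    if agreement >= 0.999:                    # comparison result consistent -> stop
        break
    for xi, yi in zip(X, labels):             # correct the weights on each mismatch
        w += 0.1 * (yi - int(xi @ w > 0)) * xi

print(agreement >= 0.999)  # True
```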
To solve the above technical problem, an embodiment of the present disclosure further provides an image classification apparatus. Referring to Fig. 9, Fig. 9 is a block diagram of the image classification apparatus of this embodiment.
As shown in Fig. 9, an image classification apparatus includes: an acquiring unit 2100, a processing unit 2200 and an execution unit 2300. The acquiring unit 2100 is configured to obtain a face image to be recognized; the processing unit 2200 is configured to input the face image into a preset face value judgment model to perform feature extraction and face value classification on the face image, wherein the face value judgment model extracts feature sets of at least two different image dimensions of the face image and performs face value classification on the face image according to the at least two feature sets; the execution unit 2300 is configured to obtain the face value classification data of the face image output by the face value judgment model.
When the image classification apparatus performs face value judgment, the face value judgment model extracts feature sets of different dimensions of the face image and then judges the user's face value according to those feature sets. Extracting multi-dimensional features of the same face image makes training of the face value judgment model faster; at the same time, multi-dimensional feature extraction enables the model to learn face features of different dimensions, so that combining the multi-dimensional features to judge the face value of the face image provides more bases for judgment and a more reasonable feature distribution, making random errors less probable. The face value judgment model can therefore be made highly robust, with higher judgment accuracy.
In some embodiments, the face value judgment model includes a first feature channel, a second feature channel and a classification channel, and the image classification apparatus further includes: a first processing subunit, a second processing subunit and a first execution subunit. The first processing subunit is configured to input the first-dimension image of the face image into the first feature channel and obtain the first feature set output by the first feature channel; the second processing subunit is configured to input the second-dimension image of the face image into the second feature channel and obtain the second feature set output by the second feature channel; the first execution subunit is configured to input the first feature set and the second feature set into the classification channel for face value classification.
In some embodiments, the first-dimension image is the original image of the face image and the first feature channel is a convolutional neural network channel, and the image classification apparatus further includes: a third processing subunit and a second execution subunit. The third processing subunit is configured to input the original image of the face image into the convolutional neural network channel; the second execution subunit is configured to obtain the first feature set output by the convolutional neural network channel.
In some embodiments, the second-dimension image is the face key point image of the face image and the second feature channel is a deep neural network channel, and the image classification apparatus further includes: a fourth processing subunit and a third execution subunit. The fourth processing subunit is configured to input the original image of the face key point image into the deep neural network channel; the third execution subunit is configured to obtain the second feature set output by the deep neural network channel.
In some embodiments, the first feature set includes a first weight parameter and first feature information, the second feature set includes a second weight parameter and second feature information, and the image classification apparatus further includes: a first calculation subunit, a second calculation subunit and a fourth execution subunit. The first calculation subunit is configured to calculate a first product of the first weight parameter and the first feature information and a second product of the second weight parameter and the second feature information; the second calculation subunit is configured to calculate the feature weighted sum of the first feature set and the second feature set according to the first product and the second product; the fourth execution subunit is configured to input the feature weighted sum into the classification channel for face value classification.
In some embodiments, the image classification apparatus further includes: a fifth processing subunit and a fifth execution subunit. The fifth processing subunit is configured to input the face image into a preset first neural network model; the fifth execution subunit is configured to obtain the face key point image output by the convolutional layer of the first neural network model.
In some embodiments, the image classification apparatus further includes: a first acquisition subunit, a sixth processing subunit, a first comparison subunit and a sixth execution subunit. The first acquisition subunit is configured to obtain training sample data marked with classification judgment information, wherein the training sample data include a face image and a face key point image derived from the face image; the sixth processing subunit is configured to input the training sample data into the face value judgment model to obtain classification reference information of the training sample data; the first comparison subunit is configured to compare whether the model classification reference information of different samples in the training sample data is consistent with the classification judgment information; the sixth execution subunit is configured to, when the model classification reference information is inconsistent with the classification judgment information, iteratively and cyclically update the weights in the face value judgment model until the comparison result is consistent with the classification judgment information.
In this embodiment, the image classification apparatus may be a PC terminal, a smart mobile terminal or a server. When the image classification apparatus is a PC terminal or a smart mobile terminal, refer to Fig. 10; when the image classification apparatus is a server, refer to Fig. 11.
Fig. 10 is a block diagram of an electronic device for executing the image classification method, according to an exemplary embodiment. For example, the electronic device 1000 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment, a personal digital assistant, and the like.
Referring to Fig. 10, the electronic device 1000 may include one or more of the following components: a processing component 1002, a memory 1004, a power component 1006, a multimedia component 1008, an audio component 1010, an input/output (I/O) interface 1012, a sensor component 1014 and a communication component 1016.
The processing component 1002 generally controls the overall operation of the electronic device 1000, such as operations associated with display, telephone calls, data communication, camera operations and recording operations. The processing component 1002 may include one or more processors 1020 to execute instructions to perform all or part of the steps of the above methods. In addition, the processing component 1002 may include one or more modules to facilitate interaction between the processing component 1002 and other components. For example, the processing component 1002 may include a multimedia module to facilitate interaction between the multimedia component 1008 and the processing component 1002.
The memory 1004 is configured to store various types of data to support operation on the electronic device 1000. Examples of such data include instructions for any application or method operated on the electronic device 1000, contact data, phone book data, messages, pictures, videos, and the like. The memory 1004 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disk.
The power component 1006 provides power for the various components of the electronic device 1000. The power component 1006 may include a power management system, one or more power supplies, and other components associated with generating, managing and distributing power for the electronic device 1000.
The multimedia component 1008 includes a screen providing an output interface between the electronic device 1000 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 1008 includes a front camera and/or a rear camera. When the electronic device 1000 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each of the front camera and the rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 1010 is configured to output and/or input audio signals. For example, the audio component 1010 includes a microphone (MIC), which is configured to receive external audio signals when the electronic device 1000 is in an operation mode, such as a call mode, a recording mode or a voice recognition mode. The received audio signal may be further stored in the memory 1004 or transmitted via the communication component 1016. In some embodiments, the audio component 1010 further includes a speaker configured to output audio signals.
The I/O interface 1012 provides an interface between the processing component 1002 and peripheral interface modules, which may be keyboards, click wheels, buttons and the like. These buttons may include, but are not limited to: a home button, volume buttons, a start button and a lock button.
The sensor component 1014 includes one or more sensors configured to provide status assessments of various aspects of the electronic device 1000. For example, the sensor component 1014 can detect the open/closed state of the electronic device 1000 and the relative positioning of components, for example, the display and keypad of the electronic device 1000; the sensor component 1014 can also detect a change in position of the electronic device 1000 or a component of the electronic device 1000, the presence or absence of user contact with the electronic device 1000, the orientation or acceleration/deceleration of the electronic device 1000, and a change in temperature of the electronic device 1000. The sensor component 1014 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 1014 may also include a light sensor, such as a CMOS or CCD image sensor, configured for use in imaging applications. In some embodiments, the sensor component 1014 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
The communication component 1016 is configured to facilitate wired or wireless communication between the electronic device 1000 and other devices. The electronic device 1000 can access a wireless network based on a communication standard, such as WiFi, a carrier network (such as 2G, 3G, 4G or 5G), or a combination thereof. In an exemplary embodiment, the communication component 1016 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 1016 also includes a near field communication (NFC) module to promote short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
In an exemplary embodiment, the electronic device 1000 may be implemented by one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors or other electronic components, configured to execute the above image classification method.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium including instructions, such as the memory 1004 including instructions, executable by the processor 1020 of the electronic device 1000 to complete the above method. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Fig. 11 is a block diagram of another electronic device for executing an image classification method, according to an exemplary embodiment. For example, the electronic device 1100 may be provided as a server. Referring to Fig. 11, the electronic device 1100 includes a processing component 1122, which further includes one or more processors, and memory resources represented by a memory 1132 configured to store instructions executable by the processing component 1122, such as an application program. The application program stored in the memory 1132 may include one or more modules, each corresponding to a set of instructions. In addition, the processing component 1122 is configured to execute instructions to perform the above image classification method.
The electronic device 1100 may also include a power component 1126 configured to perform power management of the electronic device 1100, a wired or wireless network interface 1150 configured to connect the electronic device 1100 to a network, and an input/output (I/O) interface 1158. The electronic device 1100 can operate based on an operating system stored in the memory 1132, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM or similar.
An embodiment of the present disclosure also discloses an application program. When the application program is executed by a processor of an electronic device, the electronic device is enabled to execute the image classification method described above.
Those skilled in the art will readily conceive of other embodiments of the present invention after considering the specification and practicing the invention disclosed herein. This application is intended to cover any variations, uses or adaptive changes of the present invention that follow the general principles of the present invention and include common knowledge or conventional technical means in the art not disclosed in the present disclosure. The specification and examples are to be considered illustrative only, with the true scope and spirit of the present invention being indicated by the following claims.
It should be understood that the present invention is not limited to the precise structures described above and shown in the accompanying drawings, and various modifications and changes may be made without departing from the scope thereof. The scope of the present invention is limited only by the appended claims.

Claims (10)

1. An image classification method, characterized by comprising:
obtaining a face image to be recognized;
inputting the face image into a face value judgment model to perform face value classification on the face image, wherein the face value judgment model extracts feature sets of at least two different image dimensions of the face image and performs face value classification on the face image according to the at least two feature sets; and
obtaining face value classification data of the face image output by the face value judgment model.
2. The image classification method according to claim 1, characterized in that the face value judgment model comprises a first feature channel, a second feature channel and a classification channel, and the inputting the face image into a face value judgment model to perform face value classification on the face image comprises:
inputting a first-dimension image of the face image into the first feature channel, and obtaining a first feature set output by the first feature channel;
inputting a second-dimension image of the face image into the second feature channel, and obtaining a second feature set output by the second feature channel; and
inputting the first feature set and the second feature set into the classification channel for face value classification.
3. The image classification method according to claim 2, characterized in that the first-dimension image is an original image of the face image and the first feature channel is a convolutional neural network channel, and the inputting a first-dimension image of the face image into the first feature channel and obtaining a first feature set output by the first feature channel comprises:
inputting the original image of the face image into the convolutional neural network channel; and
obtaining the first feature set output by the convolutional neural network channel.
4. The image classification method according to claim 2, characterized in that the second-dimension image is a face key point image of the face image and the second feature channel is a deep neural network channel, and the inputting a second-dimension image of the face image into the second feature channel and obtaining a second feature set output by the second feature channel comprises:
inputting the original image of the face key point image into the deep neural network channel; and
obtaining the second feature set output by the deep neural network channel.
5. The image classification method according to claim 2, characterized in that the first feature set comprises a first weight parameter and first feature information, the second feature set comprises a second weight parameter and second feature information, and the inputting the first feature set and the second feature set into the classification channel for face value classification comprises:
calculating a first product of the first weight parameter and the first feature information and a second product of the second weight parameter and the second feature information;
calculating a feature weighted sum of the first feature set and the second feature set according to the first product and the second product; and
inputting the feature weighted sum into the classification channel for face value classification.
6. The image classification method according to claim 4, characterized in that, after the obtaining a face image to be recognized, the method further comprises:
inputting the face image into a preset first neural network model; and
obtaining the face key point image output by a convolutional layer of the first neural network model.
7. image classification method according to claim 1, which is characterized in that the training method of the face value judgment models, Include:
Obtain and be marked with classification and judge the training sample data of information, wherein the training sample data include facial image with And it is derived from the face key point image of the facial image;
The neural network model that the training sample data are input to initial preset is obtained to the classification of the training sample data Referring to information;
Compare in the training sample data categories of model of different samples referring to information and the classification judge information whether one It causes;
When the category of model judges that information is inconsistent referring to information and the classification, the update of the iterative cycles iteration mind It obtains the face value until the comparison result terminates when judging that information is consistent with the classification through the weight in network model and sentences Disconnected model.
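The train-until-consistent loop of claim 7 can be sketched as below. The linear "model" and perceptron-style update are illustrative stand-ins (the patent's model is a deep neural network); the data, learning rate, and function name are hypothetical.

```python
import numpy as np

def train_until_consistent(samples, labels, lr=0.5, max_iters=1000):
    """samples: (n, d) array; labels: (n,) array of 0/1 classification judgments."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=samples.shape[1])          # initial preset model weights
    for _ in range(max_iters):
        scores = samples @ w
        preds = (scores > 0).astype(int)           # classification reference information
        if np.array_equal(preds, labels):          # comparison result is consistent
            break                                  # stop iterating: model obtained
        # update weights on samples where reference and judgment disagree
        for x, y, p in zip(samples, labels, preds):
            if y != p:
                w += lr * (y - p) * x
    return w

X = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, 0, 0])
w = train_until_consistent(X, y)
```

For separable toy data like `X` above, the loop terminates once every prediction matches its label, mirroring the claim's stopping condition.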
8. An image classification device, comprising:
an acquiring unit, configured to obtain a facial image to be identified;
a processing unit, configured to input the facial image into a face value judgment model and perform face value classification on the facial image, wherein the face value judgment model extracts feature sets of at least two different image dimensions of the facial image and performs face value classification on the facial image according to the at least two feature sets; and
an execution unit, configured to obtain face value classification data of the facial image output by the face value judgment model.
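The three units of the device in claim 8 map naturally onto a small class, sketched here with hypothetical names and a trivial callable standing in for the trained face value judgment model:

```python
class ImageClassificationDevice:
    """Illustrative sketch of claim 8's acquiring/processing/execution units."""

    def __init__(self, face_value_model):
        self.model = face_value_model          # trained face value judgment model

    def acquire(self, image):                  # acquiring unit
        self.facial_image = image
        return self.facial_image

    def process(self):                         # processing unit
        self.result = self.model(self.facial_image)
        return self.result

    def execute(self):                         # execution unit
        return self.result                     # face value classification data

# Stand-in model: "classifies" by mean pixel intensity (purely illustrative).
device = ImageClassificationDevice(
    lambda img: "high" if sum(img) / len(img) > 0.5 else "low")
device.acquire([0.9, 0.8, 0.7])
device.process()
print(device.execute())  # prints: high
```
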
9. An electronic device, comprising:
a processor; and
a memory for storing processor-executable instructions;
wherein the processor is configured to execute the image classification method according to any one of claims 1-7.
10. A non-transitory computer-readable storage medium, wherein, when instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the image classification method according to any one of claims 1-7.
CN201811151621.4A 2018-09-29 2018-09-29 Image classification method, device, electronic equipment and storage medium Pending CN109145877A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811151621.4A CN109145877A (en) 2018-09-29 2018-09-29 Image classification method, device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN109145877A true CN109145877A (en) 2019-01-04

Family

ID=64814039

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811151621.4A Pending CN109145877A (en) 2018-09-29 2018-09-29 Image classification method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109145877A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110570419A (en) * 2019-09-12 2019-12-13 杭州依图医疗技术有限公司 Method and device for acquiring characteristic information and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150347820A1 (en) * 2014-05-27 2015-12-03 Beijing Kuangshi Technology Co., Ltd. Learning Deep Face Representation
CN107491726A (en) * 2017-07-04 2017-12-19 重庆邮电大学 A kind of real-time expression recognition method based on multi-channel parallel convolutional neural networks
CN107527024A (en) * 2017-08-08 2017-12-29 北京小米移动软件有限公司 Face face value appraisal procedure and device
CN108121975A (en) * 2018-01-04 2018-06-05 中科汇通投资控股有限公司 A kind of face identification method combined initial data and generate data
CN108460343A (en) * 2018-02-06 2018-08-28 北京达佳互联信息技术有限公司 Image processing method, system and server

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Cong Shuang (ed.): "Automation Theory, Technology and Applications, Volume 10", 31 March 2003 *

Similar Documents

Publication Publication Date Title
CN106339680B Face key point positioning method and device
CN106295566B Facial expression recognition method and device
CN106295511B Face tracking method and device
CN110516745A Image recognition model training method and device, and electronic equipment
CN104850828B Character recognition method and device
CN106548468B Image definition discrimination method and device
US10007841B2 Human face recognition method, apparatus and terminal
CN105608425B Method and device for classified storage of photos
CN109145876A Image classification method, device, electronic equipment and storage medium
CN109670397A Skeleton key point detection method, device, electronic equipment and storage medium
CN109389162B Sample image screening method and device, electronic equipment and storage medium
CN106295515B Method and device for determining a face region in an image
CN109871896A Data classification method, device, electronic equipment and storage medium
CN109543714A Data feature acquisition method, device, electronic equipment and storage medium
CN106971164A Face shape matching method and device
CN105528078B Method and device for controlling electronic devices
CN107368810A Face detection method and device
CN109063580A Face recognition method, device, electronic equipment and storage medium
CN109543066A Video recommendation method, device and computer-readable storage medium
CN109242045B Image clustering processing method, device, electronic equipment and storage medium
CN111783517B Image recognition method, device, electronic equipment and storage medium
CN103927545B Clustering method and related device
EP4287068A1 Model training method, scene recognition method, and related device
CN109360197A Image processing method, device, electronic equipment and storage medium
CN107220614A Image recognition method, device and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190104