CN108446659A - Method and apparatus for detecting facial image - Google Patents

Method and apparatus for detecting facial image

Info

Publication number
CN108446659A
Authority
CN
China
Prior art keywords
image
detected
probability value
class
facial image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810266087.5A
Other languages
Chinese (zh)
Inventor
汤旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Baidu Online Network Technology Beijing Co Ltd
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201810266087.5A priority Critical patent/CN108446659A/en
Publication of CN108446659A publication Critical patent/CN108446659A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the present application disclose a method and apparatus for detecting a facial image. A specific implementation of the method includes: obtaining a to-be-detected feature image, where the to-be-detected feature image is extracted from a to-be-detected image; importing the to-be-detected feature image into a pre-trained first classification model to generate probability values indicating that the to-be-detected image belongs to each of predefined classes, where the predefined classes are at least three classes, the at least three classes include a facial image class and at least two non-face image classes, and the first classification model is used to characterize the correspondence between a feature image and at least three probability values; and determining whether the to-be-detected image is a facial image according to the at least two probability values with which the to-be-detected image belongs to the non-face image classes and the probability value with which it belongs to the facial image class. This embodiment enriches the ways of detecting facial images.

Description

Method and apparatus for detecting facial image
Technical field
The embodiments of the present application relate to the field of computer technology, and in particular to a method and apparatus for detecting a facial image.
Background art
With the rapid development of artificial intelligence, face recognition technology is being applied more and more widely. A prerequisite for face recognition is facial image detection. In general, it is first necessary to determine whether an image to be recognized is a facial image, and if so, the identity of the face represented by the facial image is then determined.
Summary of the invention
The embodiments of the present application propose a method and apparatus for detecting a facial image.
In a first aspect, an embodiment of the present application provides a method for detecting a facial image, the method including: obtaining a to-be-detected feature image, where the to-be-detected feature image is extracted from a to-be-detected image; importing the to-be-detected feature image into a pre-trained first classification model to generate probability values indicating that the to-be-detected image belongs to each of predefined classes, where the predefined classes are at least three classes, the at least three classes include a facial image class and at least two non-face image classes, and the first classification model is used to characterize the correspondence between a feature image and at least three probability values; and determining whether the to-be-detected image is a facial image according to the at least two probability values with which the to-be-detected image belongs to the non-face image classes and the probability value with which it belongs to the facial image class.
In some embodiments, the determining whether the to-be-detected image is a facial image according to the at least two probability values of the non-face image classes and the probability value of the facial image class includes: selecting the largest probability value from the at least two probability values to determine a maximum non-face class probability value; and determining whether the to-be-detected image is a facial image according to the maximum non-face class probability value and the probability value of the facial image class.
In some embodiments, the determining whether the to-be-detected image is a facial image according to the maximum non-face class probability value and the probability value of the facial image class includes: importing the maximum non-face class probability value and the probability value of the facial image class into a pre-trained second classification model to determine a facial image probability value indicating that the to-be-detected image is a facial image, where the second classification model is used to characterize the correspondence between the pair consisting of the maximum non-face class probability value and the probability value of the facial image class, and the facial image probability value.
In some embodiments, the determining whether the to-be-detected image is a facial image according to the maximum non-face class probability value and the probability value of the facial image class further includes: in response to determining that the facial image probability value is greater than a predetermined threshold, determining that the to-be-detected image is a facial image.
In some embodiments, the obtaining a to-be-detected feature image includes: importing a target image into a pre-established image feature extraction model to generate a target feature image and candidate region information, where the image feature extraction model is used to characterize the correspondence between an image and both a feature image and candidate region information, the candidate region information indicates a candidate image region in the target feature image, and the image in the region of the target image corresponding to the candidate image region is the to-be-detected image; and determining the feature image within the candidate image region of the target feature image as the to-be-detected feature image.
In a second aspect, an embodiment of the present application provides an apparatus for detecting a facial image, the apparatus including: an obtaining unit configured to obtain a to-be-detected feature image, where the to-be-detected feature image is extracted from a to-be-detected image; a generation unit configured to import the to-be-detected feature image into a pre-trained first classification model to generate probability values indicating that the to-be-detected image belongs to each of predefined classes, where the predefined classes are at least three classes, the at least three classes include a facial image class and at least two non-face image classes, and the first classification model is used to characterize the correspondence between a feature image and at least three probability values; and a determination unit configured to determine whether the to-be-detected image is a facial image according to the at least two probability values of the non-face image classes and the probability value of the facial image class.
In some embodiments, the determination unit is further configured to: select the largest probability value from the at least two probability values to determine a maximum non-face class probability value; and determine whether the to-be-detected image is a facial image according to the maximum non-face class probability value and the probability value of the facial image class.
In some embodiments, the determination unit is further configured to: import the maximum non-face class probability value and the probability value of the facial image class into a pre-trained second classification model to determine a facial image probability value indicating that the to-be-detected image is a facial image, where the second classification model is used to characterize the correspondence between the pair consisting of the maximum non-face class probability value and the probability value of the facial image class, and the facial image probability value.
In some embodiments, the determination unit is further configured to: in response to determining that the facial image probability value is greater than a predetermined threshold, determine that the to-be-detected image is a facial image.
In some embodiments, the obtaining unit is further configured to: import a target image into a pre-established image feature extraction model to generate a target feature image and candidate region information, where the image feature extraction model is used to characterize the correspondence between an image and both a feature image and candidate region information, the candidate region information indicates a candidate image region in the target feature image, and the image in the region of the target image corresponding to the candidate image region is the to-be-detected image; and determine the feature image within the candidate image region of the target feature image as the to-be-detected feature image.
In a third aspect, an embodiment of the present application provides an electronic device, including: one or more processors; and a storage device for storing one or more programs, where when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method described in any implementation of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the method described in any implementation of the first aspect.
According to the method and apparatus for detecting a facial image provided by the embodiments of the present application, a to-be-detected feature image is first obtained; then a first classification model is used to generate the probability value with which the to-be-detected feature image belongs to the facial image class as well as the probability values with which it belongs to at least two non-face image classes; and finally, whether the to-be-detected image is a facial image is determined according to the at least two probability values of the non-face image classes and the probability value of the facial image class, which enriches facial image detection methods.
Brief description of the drawings
Other features, objects and advantages of the present application will become more apparent by reading the following detailed description of non-limiting embodiments with reference to the accompanying drawings:
Fig. 1 is an exemplary system architecture diagram to which the present application may be applied;
Fig. 2 is a flowchart of an embodiment of the method for detecting a facial image according to the present application;
Fig. 3 is a flowchart of an implementation of step 201 in the flow shown in Fig. 2 according to the present application;
Fig. 4 is a flowchart of an implementation of step 203 in the flow shown in Fig. 2 according to the present application;
Fig. 5 is a schematic diagram of an application scenario of the method for detecting a facial image according to the present application;
Fig. 6 is a flowchart of another embodiment of the method for detecting a facial image according to the present application;
Fig. 7 is a schematic structural diagram of an embodiment of the apparatus for detecting a facial image according to the present application;
Fig. 8 is a schematic structural diagram of a computer system adapted to implement the server or terminal device of the embodiments of the present application.
Detailed description of embodiments
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only used to explain the relevant invention and do not limit the invention. It should also be noted that, for ease of description, only the parts related to the relevant invention are shown in the drawings.
It should be noted that, as long as there is no conflict, the embodiments of the present application and the features in the embodiments may be combined with each other. The present application is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the method for detecting a facial image or the apparatus for detecting a facial image of the present application may be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102 and 103, a network 104 and a server 105. The network 104 serves as a medium providing communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired or wireless communication links, or fiber optic cables.
A user may use the terminal devices 101, 102, 103 to interact with the server 105 through the network 104 to receive or send messages. Various communication client applications may be installed on the terminal devices 101, 102, 103, such as image capturing applications, web browser applications, shopping applications, search applications, instant messaging tools, email clients and social platform software.
The terminal devices 101, 102, 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices supporting image capturing, including but not limited to smartphones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop computers and desktop computers. When the terminal devices 101, 102, 103 are software, they may be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules (for example, for providing distributed services) or as a single piece of software or software module, which is not specifically limited here.
The server 105 may be a server providing various services, for example a background server that provides support for the image capturing applications on the terminal devices 101, 102, 103. The background server may process, for example analyze, received data such as image requests, and feed the processing result (for example, the detected facial image) back to the terminal devices.
It should be noted that the method for detecting a facial image provided by the embodiments of the present application is generally performed by the server 105; accordingly, the apparatus for detecting a facial image is generally arranged in the server 105. Optionally, the method for detecting a facial image provided by the embodiments of the present application may also be performed by a terminal device, and accordingly the apparatus for detecting a facial image may also be arranged in the terminal device.
It should be noted that the server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (for example, for providing distributed services) or as a single piece of software or software module, which is not specifically limited here.
It should be understood that the numbers of terminal devices, networks and servers in Fig. 1 are merely illustrative. There may be any number of terminal devices, networks and servers according to implementation needs.
With continued reference to Fig. 2, a flow 200 of an embodiment of the method for detecting a facial image according to the present application is shown. The method for detecting a facial image may include the following steps:
Step 201: obtaining a to-be-detected feature image.
In this embodiment, the executing body of the method for detecting a facial image (for example, the server shown in Fig. 1) may obtain a to-be-detected feature image.
In this embodiment, the to-be-detected feature image is a feature image that is to be detected. Here, a feature image (feature map) may also be referred to as a feature map or as image features.
In this embodiment, the to-be-detected feature image may be extracted from a to-be-detected image.
In this embodiment, the to-be-detected image may be an entire picture, or may be a part of an entire picture. As an example, the to-be-detected image may be an entire picture including a facial image and a landscape image, or may be the part of an entire picture that is suspected to be a facial image.
Optionally, the to-be-detected image may be determined first, and then the to-be-detected feature image is determined. As an example, the to-be-detected image may be obtained before step 201, and an image feature extraction operation is performed on the obtained to-be-detected image to obtain the to-be-detected feature image.
It should be noted that detecting a facial image is different from recognizing a facial image. The purpose of detecting a facial image is to determine whether the image represents a face. Before a facial image is recognized, it has already been determined that the image is a facial image; the purpose of recognizing a facial image is to determine whose face the image represents and/or the specific position of the facial image.
Alternatively, the to-be-detected feature image may also be obtained directly; that is, the to-be-detected image is not determined before the to-be-detected feature image is obtained.
In some optional implementations of this embodiment, referring to Fig. 3, a flow of an implementation of step 201 is shown. The flow 201 may include:
Step 2011: importing a target image into a pre-established image feature extraction model to generate a target feature image and candidate region information.
In this embodiment, the image feature extraction model is used to characterize the correspondence between an image and both a feature image and candidate region information.
Optionally, the image feature extraction model may be obtained by training based on a region proposal network (RPN) or a region-based convolutional neural network (R-CNN).
As an example, a neural network such as an RPN or an R-CNN may be trained, and the trained RPN or R-CNN may then be used to determine the candidate region information in the feature image while obtaining the feature image.
Here, the candidate region information indicates a candidate image region in the target feature image, and the image in the region of the target image corresponding to the candidate image region is the to-be-detected image. It can be understood that there is a certain mapping relationship between positions in the target feature image and regions of the target image, so the to-be-detected image region corresponding to the candidate image region can be determined through the mapping relationship between the target feature image and the target image.
Step 2012: determining the feature image within the candidate image region of the target feature image as the to-be-detected feature image.
It can be seen that, according to the flow 201, the to-be-detected feature image may be determined first, and then the to-be-detected image is determined.
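For illustration only, a minimal Python sketch of the flow 201 is given below. The extraction_model callable, its output format and the box coordinates are assumptions made for this example; the actual extraction model may, for instance, be an RPN- or R-CNN-style network as described above.

```python
def get_feature_images_to_detect(extraction_model, target_image):
    """Sketch of flow 201.

    Assumes `extraction_model` maps a target image tensor (1 x 3 x H x W)
    to a target feature image (1 x C x h x w) and candidate region
    information given as (x1, y1, x2, y2) boxes in feature-map coordinates.
    """
    # Step 2011: generate the target feature image and candidate region information.
    target_feature_image, candidate_regions = extraction_model(target_image)

    feature_images_to_detect = []
    for (x1, y1, x2, y2) in candidate_regions:
        # Step 2012: the feature image inside the candidate image region
        # is determined as a to-be-detected feature image.
        feature_images_to_detect.append(target_feature_image[:, :, y1:y2, x1:x2])
    return feature_images_to_detect
```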
Step 202: importing the to-be-detected feature image into a pre-trained first classification model to generate probability values indicating that the to-be-detected image belongs to each of predefined classes.
In this embodiment, the executing body of the method for detecting a facial image (for example, the server shown in Fig. 1) may import the to-be-detected feature image into a pre-trained first classification model to generate the probability values with which the to-be-detected image belongs to each of the predefined classes. Here, the predefined classes are at least three classes.
In this embodiment, the at least three classes include a facial image class and at least two non-face image classes.
As an example, the non-face image classes may be various classes that may be non-face, for example a car class, a tree class, a road class, a sky class, and so on.
In this embodiment, the first classification model is used to characterize the correspondence between a feature image and at least three probability values.
It can be understood that the executing body can learn which class each probability value corresponds to. As an example, the first classification model may predefine multiple output channels, each output channel corresponding to one class, so that the executing body can determine which class a probability value corresponds to from the channel that outputs the probability value.
In this embodiment, the first classification model may be obtained by training in the following way: a sample set is obtained, where each sample is a feature image labeled with a class; and an initial first classification model is trained using the sample set to generate the first classification model. As an example, the initial first classification model may be an untrained or not fully trained model or neural network that can be used for classification, such as a support vector machine, a decision tree or a convolutional neural network.
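For illustration, a minimal Python sketch of one possible first classification model and its training loop follows. The class list, network shape and data pipeline are assumptions made for the example and are not mandated by this embodiment.

```python
import torch
import torch.nn as nn

# Assumed class layout for the example: output channel 0 is the facial image
# class, the remaining channels are non-face image classes.
CLASS_NAMES = ["face", "car", "tree", "road", "sky"]

class FirstClassificationModel(nn.Module):
    """Maps a C x h x w to-be-detected feature image to one probability per class."""
    def __init__(self, in_channels: int, num_classes: int = len(CLASS_NAMES)):
        super().__init__()
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),   # pool the feature image to C values
            nn.Flatten(),
            nn.Linear(in_channels, num_classes),
        )

    def forward(self, feature_image: torch.Tensor) -> torch.Tensor:
        logits = self.head(feature_image)
        return torch.softmax(logits, dim=1)   # at least three probability values

def train_first_model(model, loader, epochs: int = 10):
    """Train on a sample set of feature images labeled with a class index."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()   # expects raw logits
    for _ in range(epochs):
        for feature_images, labels in loader:
            logits = model.head(feature_images)
            loss = loss_fn(logits, labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```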
It should be noted that, in the process of detecting facial images, the face detection method of this embodiment refines the non-face image classes by providing at least two non-face image classes. When the non-face images are not refined, only two classes are used to detect facial images: a facial image class and a background class. Because there are fewer classes to decide between, the classification model focuses only on whether the image is a facial image, an either-or decision, which easily leads to false detections and missed detections. After this embodiment refines the non-face image classes, the classification model can spread its attention over determining in detail which class the image belongs to, which can improve the accuracy of facial image detection.
Step 203: determining whether the to-be-detected image is a facial image according to the at least two probability values with which it belongs to the non-face image classes and the probability value with which it belongs to the facial image class.
In this embodiment, the executing body of the method for detecting a facial image (for example, the server shown in Fig. 1) may determine whether the to-be-detected image is a facial image according to the at least two probability values with which the to-be-detected image belongs to the non-face image classes and the probability value with which it belongs to the facial image class.
In some optional implementations of this embodiment, step 203 may be implemented as follows: determining whether the probability value of the facial image class is greater than the probability values corresponding to all of the non-face image classes; and if so, determining that the to-be-detected image is a facial image.
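A minimal sketch of this optional implementation, assuming the probability values are available as a mapping from class name to probability (an assumption made for the example):

```python
def is_facial_image(probabilities: dict) -> bool:
    """Return True if the facial image class probability exceeds every
    non-face class probability (the optional implementation of step 203)."""
    face_prob = probabilities["face"]
    non_face_probs = [p for name, p in probabilities.items() if name != "face"]
    return all(face_prob > p for p in non_face_probs)

# Example: is_facial_image({"face": 0.8, "car": 0.5, "sky": 0.2}) -> True
```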
In some optional implementations of this embodiment, referring to Fig. 4, a flow of an optional implementation of step 203 is shown. The flow 203 may include:
Step 2031: selecting the largest probability value from the at least two probability values to determine a maximum non-face class probability value.
Here, the selected largest probability value may be determined as the maximum non-face class probability value.
Step 2032: determining whether the to-be-detected image is a facial image according to the maximum non-face class probability value and the probability value of the facial image class.
Here, step 2032 may be implemented as follows: first, the product of the maximum non-face class probability value and K is determined as a K-times value, where the value of K may be determined according to practical needs; and if the probability value of the facial image class is greater than the K-times value, it is determined that the to-be-detected image is a facial image.
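A minimal sketch of the K-times rule, where the value of K and the input format are assumptions made for the example:

```python
def is_facial_image_k_times(face_prob: float, non_face_probs: list, k: float = 1.2) -> bool:
    """K-times rule of step 2032: the image is judged to be a facial image
    if its facial image class probability exceeds K times the maximum
    non-face class probability value."""
    max_non_face_prob = max(non_face_probs)    # step 2031
    return face_prob > k * max_non_face_prob   # step 2032

# Example with the values of the Fig. 5 scenario below:
# is_facial_image_k_times(0.8, [0.2, 0.5], k=1.2) -> True (0.8 > 0.6)
```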
With continued reference to Fig. 5, Fig. 5 is a schematic diagram of an application scenario of the method for detecting a facial image according to this embodiment. In the application scenario of Fig. 5:
First, the server obtains a to-be-detected feature image 502, which is extracted from a to-be-detected image 501.
Then, the server may import the to-be-detected feature image into a first classification model 503 to generate the probability values with which the to-be-detected image belongs to each of the predefined classes. For example, the first classification model may output three probability values, 20%, 50% and 80%, where 20% characterizes the probability that the to-be-detected image belongs to the sky class, 50% characterizes the probability that it belongs to the car class, and 80% characterizes the probability that it belongs to the facial image class.
After that, the server may determine whether the to-be-detected image is a facial image according to the at least two probability values corresponding to the non-face image classes and the probability value corresponding to the facial image class. For example, since 80% is greater than the sum of 20% and 50%, it is determined that the to-be-detected image is a facial image.
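Written out with the numbers of this scenario (the sum-based comparison is simply the rule used in this example, not the only possible rule):

```python
# Probability values output by the first classification model in the Fig. 5 scenario.
sky_prob, car_prob, face_prob = 0.20, 0.50, 0.80

# 80% is greater than the sum of 20% and 50%, so the to-be-detected
# image is determined to be a facial image.
is_face = face_prob > (sky_prob + car_prob)   # 0.80 > 0.70 -> True
print(is_face)
```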
According to the method provided by the above embodiment of the present application, a to-be-detected feature image is first obtained; then the first classification model is used to generate the probability value with which the to-be-detected feature image belongs to the facial image class as well as the probability values with which it belongs to at least two non-face image classes; and finally, whether the to-be-detected image is a facial image is determined according to the at least two probability values of the non-face image classes and the probability value of the facial image class, which enriches face detection methods.
With further reference to Fig. 6, a flow 600 of another embodiment of the method for detecting a facial image is shown. The flow 600 of the method for detecting a facial image includes the following steps:
Step 601: obtaining a to-be-detected feature image.
In this embodiment, the executing body of the method for detecting a facial image (for example, the server shown in Fig. 1) may obtain a to-be-detected feature image.
Step 602: importing the to-be-detected feature image into a pre-trained first classification model to generate the probability values with which the to-be-detected image belongs to each of the predefined classes.
In this embodiment, the executing body of the method for detecting a facial image (for example, the server shown in Fig. 1) may import the to-be-detected feature image into the pre-trained first classification model to generate the probability values with which the to-be-detected image belongs to each of at least three classes.
The specific operations of step 601 and step 602 in this embodiment are basically the same as the operations of step 201 and step 202 in the embodiment shown in Fig. 2, and are not repeated here.
Step 603: selecting the largest probability value from the at least two probability values to determine a maximum non-face class probability value.
In this embodiment, the executing body of the method for detecting a facial image (for example, the server shown in Fig. 1) may select the largest probability value from the at least two probability values to determine the maximum non-face class probability value.
Step 604: importing the maximum non-face class probability value and the probability value of the facial image class into a pre-trained second classification model to determine a facial image probability value indicating that the to-be-detected image is a facial image.
In this embodiment, the executing body of the method for detecting a facial image (for example, the server shown in Fig. 1) may import the maximum non-face class probability value and the probability value of the facial image class into the pre-trained second classification model to determine the facial image probability value indicating that the to-be-detected image is a facial image.
In some optional implementations of this embodiment, the second classification model may be obtained by training in the following way: a sample set is obtained, where each sample includes a sample image and the class to which the sample image belongs; the sample feature image of the sample is extracted; the sample feature image is imported into the first classification model to obtain at least three probability values; the at least three probability values are imported into an initial second classification model to obtain a facial image probability value; and the model parameters of the initial second classification model are updated according to the facial image probability value and the class to which the sample image belongs, so as to generate the second classification model. The initial second classification model may be an untrained or not fully trained model or neural network that can be used for classification, such as a support vector machine, a decision tree or a convolutional neural network.
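For illustration, a minimal Python sketch of one possible second classification model and training loop follows. The network shape, the assumption that channel 0 of the first model is the facial image class, and the binary labels are all assumptions made for the example, not the claimed implementation.

```python
import torch
import torch.nn as nn

class SecondClassificationModel(nn.Module):
    """Maps (maximum non-face class probability, facial image class probability)
    to a single facial image probability value."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(2, 1)

    def forward(self, max_non_face_prob: torch.Tensor, face_prob: torch.Tensor) -> torch.Tensor:
        x = torch.stack([max_non_face_prob, face_prob], dim=1)
        return torch.sigmoid(self.fc(x)).squeeze(1)

def train_second_model(second_model, first_model, sample_loader, epochs: int = 10):
    """Each sample is a feature image plus a binary label
    (1 = facial image, 0 = non-face image)."""
    optimizer = torch.optim.Adam(second_model.parameters(), lr=1e-2)
    loss_fn = nn.BCELoss()
    for _ in range(epochs):
        for feature_images, is_face_labels in sample_loader:
            with torch.no_grad():
                probs = first_model(feature_images)      # at least three probability values
            face_prob = probs[:, 0]                      # channel 0 assumed to be the face class
            max_non_face_prob = probs[:, 1:].max(dim=1).values
            face_image_prob = second_model(max_non_face_prob, face_prob)
            loss = loss_fn(face_image_prob, is_face_labels.float())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return second_model
```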
In this embodiment, the second classification model is used to characterize the correspondence between the pair consisting of the maximum non-face class probability value and the probability value of the facial image class, and the facial image probability value.
In some optional implementations of this embodiment, the first classification model and the second classification model may be trained separately, or may be trained together.
Step 605: in response to determining that the facial image probability value is greater than a predetermined threshold, determining that the to-be-detected image is a facial image.
In this embodiment, the executing body of the method for detecting a facial image (for example, the server shown in Fig. 1) may, in response to determining that the facial image probability value is greater than the predetermined threshold, determine that the to-be-detected image is a facial image.
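Putting the steps of flow 600 together, a minimal illustrative sketch (the threshold value and channel layout are assumptions made for the example):

```python
import torch

def detect_facial_image(first_model, second_model, feature_image: torch.Tensor,
                        threshold: float = 0.5) -> bool:
    """Sketch of flow 600 (steps 601-605) for a single to-be-detected feature image."""
    probs = first_model(feature_image)                             # step 602
    face_prob = probs[:, 0]                                        # channel 0 assumed to be the face class
    max_non_face_prob = probs[:, 1:].max(dim=1).values             # step 603
    face_image_prob = second_model(max_non_face_prob, face_prob)   # step 604
    return bool((face_image_prob > threshold).item())              # step 605
```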
As can be seen from Fig. 6, compared with the embodiment corresponding to Fig. 2, the flow 600 of the method for detecting a facial image in this embodiment highlights the steps of determining the maximum non-face class probability value and using the second classification model. Therefore, the scheme described in this embodiment can use the second classification model to further analyze the maximum non-face class probability value and the corresponding probability value of the facial image class, thereby improving the accuracy of detecting facial images.
With further reference to Fig. 7, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an apparatus for detecting a facial image. The apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may specifically be applied in various electronic devices.
As shown in Fig. 7, the apparatus 700 for detecting a facial image of this embodiment includes an obtaining unit 701, a generation unit 702 and a determination unit 703. The obtaining unit is configured to obtain a to-be-detected feature image, where the to-be-detected feature image is extracted from a to-be-detected image; the generation unit is configured to import the to-be-detected feature image into a pre-trained first classification model to generate the probability values with which the to-be-detected image belongs to each of predefined classes, where the predefined classes are at least three classes, the at least three classes include a facial image class and at least two non-face image classes, and the first classification model is used to characterize the correspondence between a feature image and at least three probability values; and the determination unit is configured to determine whether the to-be-detected image is a facial image according to the at least two probability values of the non-face image classes and the probability value of the facial image class.
In some optional implementations of this embodiment, the determination unit is further configured to: select the largest probability value from the at least two probability values to determine a maximum non-face class probability value; and determine whether the to-be-detected image is a facial image according to the maximum non-face class probability value and the probability value of the facial image class.
In some optional implementations of this embodiment, the determination unit is further configured to: import the maximum non-face class probability value and the probability value of the facial image class into a pre-trained second classification model to determine a facial image probability value indicating that the to-be-detected image is a facial image, where the second classification model is used to characterize the correspondence between the pair consisting of the maximum non-face class probability value and the probability value of the facial image class, and the facial image probability value.
In some optional implementations of this embodiment, the determination unit is further configured to: in response to determining that the facial image probability value is greater than a predetermined threshold, determine that the to-be-detected image is a facial image.
In some optional implementations of this embodiment, the obtaining unit is further configured to: import a target image into a pre-established image feature extraction model to generate a target feature image and candidate region information, where the image feature extraction model is used to characterize the correspondence between an image and both a feature image and candidate region information, the candidate region information indicates a candidate image region in the target feature image, and the image in the region of the target image corresponding to the candidate image region is the to-be-detected image; and determine the feature image within the candidate image region of the target feature image as the to-be-detected feature image.
In this embodiment, for the specific processing of the obtaining unit 701, the generation unit 702 and the determination unit 703 of the apparatus 700 for detecting a facial image and the technical effects thereof, reference may be made to the descriptions of step 201, step 202 and step 203 in the embodiment corresponding to Fig. 2, which are not repeated here.
It should be noted that, for the implementation details and technical effects of the units in the apparatus for detecting a facial image provided by the embodiments of the present application, reference may be made to the descriptions of other embodiments in the present application, which are not repeated here.
Referring now to Fig. 8, a schematic structural diagram of a computer system 800 adapted to implement the server or terminal device of the embodiments of the present application is shown. The server shown in Fig. 8 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
As shown in Fig. 8, the computer system 800 includes a central processing unit (CPU) 801, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 802 or a program loaded from a storage portion 808 into a random access memory (RAM) 803. The RAM 803 also stores various programs and data required for the operation of the system 800. The CPU 801, the ROM 802 and the RAM 803 are connected to each other through a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
The following components are connected to the I/O interface 805: an input portion 806 including a keyboard, a mouse and the like; an output portion 807 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker and the like; a storage portion 808 including a hard disk and the like; and a communication portion 809 including a network interface card such as a LAN (local area network) card or a modem. The communication portion 809 performs communication processing via a network such as the Internet. A drive 810 is also connected to the I/O interface 805 as required. A removable medium 811, such as a magnetic disk, an optical disk, a magneto-optical disk or a semiconductor memory, is mounted on the drive 810 as required, so that a computer program read therefrom is installed into the storage portion 808 as needed.
In particular, according to the embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 809, and/or installed from the removable medium 811. When the computer program is executed by the central processing unit (CPU) 801, the above functions defined in the method of the present application are performed. It should be noted that the computer-readable medium of the present application may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present application, the computer-readable storage medium may be any tangible medium containing or storing a program, and the program may be used by or in combination with an instruction execution system, apparatus or device. In the present application, the computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. The computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium, and the computer-readable medium can send, propagate or transmit a program for use by or in combination with an instruction execution system, apparatus or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to wireless, wire, optical cable, RF and the like, or any suitable combination of the above.
Computer program code for performing the operations of the present application may be written in one or more programming languages or a combination thereof, the programming languages including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the architectures, functions and operations that may be implemented by the systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a part of code, and the module, program segment or part of code contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions marked in the blocks may also occur in an order different from that marked in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system performing the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or by hardware. The described units may also be arranged in a processor, which may, for example, be described as: a processor comprising an obtaining unit, a generation unit and a determination unit, where the names of these units do not in some cases constitute a limitation on the units themselves; for example, the obtaining unit may also be described as "a unit for obtaining a to-be-detected feature image".
As another aspect, the present application also provides a computer-readable medium, which may be included in the apparatus described in the above embodiments, or may exist alone without being assembled into the apparatus. The computer-readable medium carries one or more programs, and when the one or more programs are executed by the apparatus, the apparatus is caused to: obtain a to-be-detected feature image, where the to-be-detected feature image is extracted from a to-be-detected image; import the to-be-detected feature image into a pre-trained first classification model to generate probability values indicating that the to-be-detected image belongs to each of predefined classes, where the predefined classes are at least three classes, the at least three classes include a facial image class and at least two non-face image classes, and the first classification model is used to characterize the correspondence between a feature image and at least three probability values; and determine whether the to-be-detected image is a facial image according to the at least two probability values of the non-face image classes and the probability value of the facial image class.
The above description is only a description of the preferred embodiments of the present application and the applied technical principles. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to the technical solutions formed by the specific combination of the above technical features, and should also cover, without departing from the above inventive concept, other technical solutions formed by any combination of the above technical features or their equivalent features, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present application.

Claims (12)

1. A method for detecting a facial image, comprising:
obtaining a to-be-detected feature image, wherein the to-be-detected feature image is extracted from a to-be-detected image;
importing the to-be-detected feature image into a pre-trained first classification model to generate probability values indicating that the to-be-detected image belongs to each of predefined classes, wherein the predefined classes are at least three classes, the at least three classes comprise a facial image class and at least two non-face image classes, and the first classification model is used to characterize the correspondence between a feature image and at least three probability values; and
determining whether the to-be-detected image is a facial image according to the at least two probability values with which the to-be-detected image belongs to the non-face image classes and the probability value with which it belongs to the facial image class.
2. The method according to claim 1, wherein the determining whether the to-be-detected image is a facial image according to the at least two probability values with which the to-be-detected image belongs to the non-face image classes and the probability value with which it belongs to the facial image class comprises:
selecting the largest probability value from the at least two probability values to determine a maximum non-face class probability value; and
determining whether the to-be-detected image is a facial image according to the maximum non-face class probability value and the probability value of the facial image class.
3. The method according to claim 2, wherein the determining whether the to-be-detected image is a facial image according to the maximum non-face class probability value and the probability value of the facial image class comprises:
importing the maximum non-face class probability value and the probability value of the facial image class into a pre-trained second classification model to determine a facial image probability value indicating that the to-be-detected image is a facial image, wherein the second classification model is used to characterize the correspondence between the pair consisting of the maximum non-face class probability value and the probability value of the facial image class, and the facial image probability value.
4. The method according to claim 3, wherein the determining whether the to-be-detected image is a facial image according to the maximum non-face class probability value and the probability value of the facial image class further comprises:
in response to determining that the facial image probability value is greater than a predetermined threshold, determining that the to-be-detected image is a facial image.
5. The method according to any one of claims 1-4, wherein the obtaining a to-be-detected feature image comprises:
importing a target image into a pre-established image feature extraction model to generate a target feature image and candidate region information, wherein the image feature extraction model is used to characterize the correspondence between an image and both a feature image and candidate region information, the candidate region information indicates a candidate image region in the target feature image, and the image in the region of the target image corresponding to the candidate image region is the to-be-detected image; and
determining the feature image within the candidate image region of the target feature image as the to-be-detected feature image.
6. An apparatus for detecting a facial image, comprising:
an obtaining unit configured to obtain a to-be-detected feature image, wherein the to-be-detected feature image is extracted from a to-be-detected image;
a generation unit configured to import the to-be-detected feature image into a pre-trained first classification model to generate probability values indicating that the to-be-detected image belongs to each of predefined classes, wherein the predefined classes are at least three classes, the at least three classes comprise a facial image class and at least two non-face image classes, and the first classification model is used to characterize the correspondence between a feature image and at least three probability values; and
a determination unit configured to determine whether the to-be-detected image is a facial image according to the at least two probability values with which the to-be-detected image belongs to the non-face image classes and the probability value with which it belongs to the facial image class.
7. The apparatus according to claim 6, wherein the determination unit is further configured to:
select the largest probability value from the at least two probability values to determine a maximum non-face class probability value; and
determine whether the to-be-detected image is a facial image according to the maximum non-face class probability value and the probability value of the facial image class.
8. The apparatus according to claim 7, wherein the determination unit is further configured to:
import the maximum non-face class probability value and the probability value of the facial image class into a pre-trained second classification model to determine a facial image probability value indicating that the to-be-detected image is a facial image, wherein the second classification model is used to characterize the correspondence between the pair consisting of the maximum non-face class probability value and the probability value of the facial image class, and the facial image probability value.
9. The apparatus according to claim 8, wherein the determination unit is further configured to:
in response to determining that the facial image probability value is greater than a predetermined threshold, determine that the to-be-detected image is a facial image.
10. The apparatus according to any one of claims 6-9, wherein the obtaining unit is further configured to:
import a target image into a pre-established image feature extraction model to generate a target feature image and candidate region information, wherein the image feature extraction model is used to characterize the correspondence between an image and both a feature image and candidate region information, the candidate region information indicates a candidate image region in the target feature image, and the image in the region of the target image corresponding to the candidate image region is the to-be-detected image; and
determine the feature image within the candidate image region of the target feature image as the to-be-detected feature image.
11. An electronic device, comprising:
one or more processors; and
a storage device for storing one or more programs,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method according to any one of claims 1-5.
12. A computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1-5.
CN201810266087.5A 2018-03-28 2018-03-28 Method and apparatus for detecting facial image Pending CN108446659A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810266087.5A CN108446659A (en) 2018-03-28 2018-03-28 Method and apparatus for detecting facial image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810266087.5A CN108446659A (en) 2018-03-28 2018-03-28 Method and apparatus for detecting facial image

Publications (1)

Publication Number Publication Date
CN108446659A true CN108446659A (en) 2018-08-24

Family

ID=63197310

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810266087.5A Pending CN108446659A (en) 2018-03-28 2018-03-28 Method and apparatus for detecting facial image

Country Status (1)

Country Link
CN (1) CN108446659A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109447169A (en) * 2018-11-02 2019-03-08 北京旷视科技有限公司 The training method of image processing method and its model, device and electronic system
CN110751120A (en) * 2019-10-28 2020-02-04 杭州宇泛智能科技有限公司 Detection method and device and electronic equipment
CN111339963A (en) * 2020-02-28 2020-06-26 北京百度网讯科技有限公司 Human body image scoring method and device, electronic equipment and storage medium
CN111814553A (en) * 2020-06-08 2020-10-23 浙江大华技术股份有限公司 Face detection method, model training method and related device
CN112070034A (en) * 2020-09-10 2020-12-11 北京字节跳动网络技术有限公司 Image recognition method and device, electronic equipment and computer readable medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103279746A (en) * 2013-05-30 2013-09-04 苏州大学 Method and system for identifying faces based on support vector machine
CN105260356A (en) * 2015-10-10 2016-01-20 西安交通大学 Chinese interactive text emotion and topic identification method based on multitask learning
CN105469400A (en) * 2015-11-23 2016-04-06 广州视源电子科技股份有限公司 Rapid identification and marking method of electronic component polarity direction and system thereof
CN105678136A (en) * 2014-11-19 2016-06-15 江苏威盾网络科技有限公司 Cloud data anti-leak access method based on face recognition technology
CN107729917A (en) * 2017-09-14 2018-02-23 北京奇艺世纪科技有限公司 The sorting technique and device of a kind of title

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103279746A (en) * 2013-05-30 2013-09-04 苏州大学 Method and system for identifying faces based on support vector machine
CN105678136A (en) * 2014-11-19 2016-06-15 江苏威盾网络科技有限公司 Cloud data anti-leak access method based on face recognition technology
CN105260356A (en) * 2015-10-10 2016-01-20 西安交通大学 Chinese interactive text emotion and topic identification method based on multitask learning
CN105469400A (en) * 2015-11-23 2016-04-06 广州视源电子科技股份有限公司 Rapid identification and marking method of electronic component polarity direction and system thereof
CN107729917A (en) * 2017-09-14 2018-02-23 北京奇艺世纪科技有限公司 The sorting technique and device of a kind of title

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109447169A (en) * 2018-11-02 2019-03-08 北京旷视科技有限公司 The training method of image processing method and its model, device and electronic system
CN109447169B (en) * 2018-11-02 2020-10-27 北京旷视科技有限公司 Image processing method, training method and device of model thereof and electronic system
CN110751120A (en) * 2019-10-28 2020-02-04 杭州宇泛智能科技有限公司 Detection method and device and electronic equipment
CN111339963A (en) * 2020-02-28 2020-06-26 北京百度网讯科技有限公司 Human body image scoring method and device, electronic equipment and storage medium
CN111814553A (en) * 2020-06-08 2020-10-23 浙江大华技术股份有限公司 Face detection method, model training method and related device
CN112070034A (en) * 2020-09-10 2020-12-11 北京字节跳动网络技术有限公司 Image recognition method and device, electronic equipment and computer readable medium

Similar Documents

Publication Publication Date Title
CN108154196B (en) Method and apparatus for exporting image
CN108446659A (en) Method and apparatus for detecting facial image
CN108898185A (en) Method and apparatus for generating image recognition model
CN107908789A (en) Method and apparatus for generating information
CN109446990A (en) Method and apparatus for generating information
CN108416310A (en) Method and apparatus for generating information
CN109993150A (en) The method and apparatus at age for identification
CN108932220A (en) article generation method and device
CN109740018A (en) Method and apparatus for generating video tab model
CN109308490A (en) Method and apparatus for generating information
CN109447156A (en) Method and apparatus for generating model
CN108229485A (en) For testing the method and apparatus of user interface
CN108989882A (en) Method and apparatus for exporting the snatch of music in video
CN109034069A (en) Method and apparatus for generating information
CN108831505A (en) The method and apparatus for the usage scenario applied for identification
CN109934242A (en) Image identification method and device
CN109558779A (en) Image detecting method and device
CN109815365A (en) Method and apparatus for handling video
CN109214501A (en) The method and apparatus of information for identification
CN109947989A (en) Method and apparatus for handling video
CN109977839A (en) Information processing method and device
CN108491825A (en) information generating method and device
CN108960110A (en) Method and apparatus for generating information
CN108509994A (en) character image clustering method and device
CN110532983A (en) Method for processing video frequency, device, medium and equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination