CN112418184A - Face detection method and device based on nose features, electronic equipment and medium - Google Patents

Face detection method and device based on nose features, electronic equipment and medium

Info

Publication number
CN112418184A
Authority
CN
China
Prior art keywords
key point
face
image
detected
nose
Prior art date
Legal status
Pending
Application number
CN202011474602.2A
Other languages
Chinese (zh)
Inventor
肖传宝
陈白洁
Current Assignee
Hangzhou Moredian Technology Co., Ltd.
Original Assignee
Hangzhou Moredian Technology Co., Ltd.
Priority date: 2020-12-14
Filing date: 2020-12-14
Publication date: 2021-02-26
Application filed by Hangzhou Moredian Technology Co., Ltd.
Priority to CN202011474602.2A
Publication of CN112418184A
Legal status: Pending (current)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G06V40/168: Feature extraction; Face representation
    • G06V40/171: Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G06V40/172: Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a face detection method and device based on nose features, an electronic device, and a medium, relating to the technical field of face detection and aiming to solve the problems of a complex face detection process and low processing efficiency in the related art. The method comprises the following steps: acquiring an image to be detected; inputting the image to be detected into a face key point detection model to obtain a first key point group, wherein the key points in the first key point group are all associated with the nose; cropping a nose region from the image to be detected according to the first key point group; and inputting the nose region into a classification model to obtain a detection result indicating whether the nose region is qualified, and if so, judging that the image to be detected includes a face. The invention has the advantages of a simple process and high processing efficiency.

Description

Face detection method and device based on nose features, electronic equipment and medium
Technical Field
The present invention relates to the field of face detection technologies, and in particular, to a face detection method and device based on nose features, an electronic device, and a medium.
Background
Face recognition is a biometric technology for identity recognition based on face feature information. In order to avoid meaningless face recognition, face detection is usually performed on a corresponding picture, and face recognition can be performed only when it is determined that the picture has a face.
In the related art, face detection is usually performed on all key points of the face: all key points must first be collected and then processed by a deep learning method based on a CNN model, which makes the process complex and the processing efficiency low.
At present, no effective solution is provided for the problems of complex process and low processing efficiency of face detection in the related technology.
Disclosure of Invention
In order to overcome the disadvantages of the related art, an object of the present invention is to provide a face detection method and device based on nose features, an electronic device, and a medium, which have the advantages of a simple process and high processing efficiency.
The first object of the invention is achieved by the following technical solution:
A face detection method based on nose features, the method comprising:
acquiring an image to be detected;
inputting the image to be detected into a face key point detection model to obtain a first key point group, wherein key points in the first key point group are all associated with the nose;
cropping a nose region from the image to be detected according to the first key point group;
and inputting the nose region into a classification model to obtain a detection result indicating whether the nose region is qualified, and if so, judging that the image to be detected includes a human face.
In some embodiments, the face key point detection model adopts any one of a mobilenet-v1 model, a mobilenet-v2 model and a shuffle-net model.
In some embodiments, the classification model is any one of a mobilenet-v1 model, a mobilenet-v2 model, and a shuffle-net model.
In some embodiments, after the image to be detected is input into the face key point detection model, the model outputs 98 key points, and the first key point group includes the 52nd key point, the 76th key point, the 79th key point, and the 82nd key point.
In some embodiments, the cropping of the nose region from the image to be detected according to the first key point group comprises:
taking the line connecting the 52nd key point and the 79th key point as the length, and taking the line segment obtained by translating the line connecting the 76th key point and the 82nd key point to the 79th key point as the width, to obtain a rectangular area;
shrinking the rectangular area inward from the 52nd key point by [0, 0.1] of the length, from the 79th key point by [0, 0.2] of the length, and from the 76th and 82nd key points by [0, 0.2] of the width, respectively, to obtain a middle area;
and cropping the corresponding middle area from the image to be detected as the nose region.
In some embodiments, in a case where it is determined that the image to be detected includes a human face, the method further includes:
cropping the face region according to all face key points output by the face key point detection model;
and adjusting the face region to a preset face size for use as the input of face recognition.
In some of these embodiments, prior to inputting the nose region into a classification model, the method further comprises:
and adjusting the nose area to a preset nose size.
The second object of the invention is achieved by the following technical solution:
A face detection device based on nose features, the device comprising:
the acquisition module is used for acquiring an image to be detected;
the processing module is used for inputting the image to be detected into a face key point detection model to obtain a first key point group, and key points in the first key point group are all associated with the nose;
the cropping module is used for cropping the nose region from the image to be detected according to the first key point group;
and the detection module is used for inputting the nose region into a classification model to obtain a detection result of whether the nose region is qualified or not, and if so, judging that the image to be detected comprises a human face.
It is a third object of the invention to provide an electronic device comprising a memory in which a computer program is stored and a processor arranged to carry out the method described above when executing the computer program.
It is a fourth object of the present invention to provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method described above.
Compared with the related art, the invention has the following beneficial effects: after an image to be detected is input into a face key point detection model, key points associated with the nose are selected to form a first key point group; a nose region is obtained according to the first key point group; the nose region is checked by a classification model to obtain a qualified/unqualified detection result; and the detection result is used as the basis for judging whether a face is present. Since only the nose-related key points need to be processed rather than all face key points, the process is simple and the processing efficiency is high.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a flowchart of a face detection method based on nasal features according to an embodiment of the present application;
FIG. 2 is a flowchart of step S103 shown in the second embodiment of the present application;
fig. 3 is a schematic diagram of face key points in the second embodiment of the present application;
FIG. 4 is a block diagram illustrating a configuration of a face detection apparatus based on nasal characteristics according to a fourth embodiment of the present application;
fig. 5 is a block diagram of an electronic device according to a fifth embodiment of the present application.
Description of reference numerals: 41, obtaining module; 42, processing module; 43, cropping module; 44, detection module.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided in the present application without any inventive step are within the scope of protection of the present application.
It will be appreciated that such a development effort might be complex and tedious, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure; it is not intended to limit the scope of this disclosure.
Example one
This embodiment provides a face detection method based on nose features, and aims to solve the problems of a complex face detection process and low processing efficiency in the related art.
Fig. 1 is a flowchart of a face detection method based on a nose feature according to an embodiment of the present application, and referring to fig. 1, the method includes steps S101 to S104.
And S101, acquiring an image to be detected. It can be understood that the image to be detected is an RGB image, and the image to be detected may have a human face or may not have a human face.
Step S102, inputting the image to be detected into a face key point detection model to obtain a first key point group, wherein the key points in the first key point group are all associated with the nose. It can be understood that the face key point group obtained via the face key point detection model may be abnormal; for example, after an image to be detected that does not include a face is input into the face key point detection model, a face key point group may still be obtained.
It can be understood that face key point detection is a known technique; any of the 4-point, 5-point, 6-point, 21-point, 29-point, 68-point, 96-point, 98-point, 106-point, 108-point and similar labeling modes can be adopted, with 98-point labeling being preferred.
It should be noted that the first key point group is a subset of the face key point group, and the key points in the first key point group correspond to the same face. The key points in the first key point group are all associated with the nose and serve only as the basis for deriving the nose region. The output of the face key point detection model is not limited here: the model may output only the first key point group, or it may output the full face key point group, which is then filtered to obtain the first key point group.
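For illustration, filtering the model's 98-point output down to the first key point group of the preferred embodiment (key points 52, 76, 79 and 82) might look as follows in Python. This is a minimal sketch; it assumes the 98 outputs are indexed so that these labels apply directly, since the patent does not fix an indexing convention.

```python
import numpy as np

# Indices of the nose-related key points named in the preferred
# embodiment; whether the 98-point scheme is 0- or 1-based is an
# assumption left open by the text.
FIRST_GROUP_IDX = [52, 76, 79, 82]

def select_first_keypoint_group(keypoints: np.ndarray) -> np.ndarray:
    """Reduce a (98, 2) array of (x, y) key points to the first key point group."""
    if keypoints.shape != (98, 2):
        raise ValueError("expected 98 (x, y) key points")
    return keypoints[FIRST_GROUP_IDX]  # rows in the order 52, 76, 79, 82
```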
Step S103, cropping a nose region from the image to be detected according to the first key point group. The nose region is also an RGB image. It will be appreciated that each key point has coordinates and the first key point group is associated with the nose; the nose region can therefore be determined by processing the first key point group.
Step S104, inputting the nose region into a classification model to obtain a detection result indicating whether the nose region is qualified. The particular type of classification model is not limited here. It can be understood that when the detection result is qualified, the nose region contains nose features and, accordingly, the image to be detected includes a face; when the detection result is unqualified, the nose region contains non-nose features and, accordingly, the image to be detected may not include a face.
It should be noted that when the image to be detected contains more than one face, the image is input into the face key point detection model only once to obtain more than one first key point group; more than one nose region can then be cropped and input into the classification model separately to obtain more than one detection result. Only when all detection results are unqualified can it be judged that the image to be detected includes no face.
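As a non-limiting illustration, steps S101 to S104, including the multi-face case above, might be sketched in Python as follows. Here `keypoint_model` and `classifier` are hypothetical stand-ins for the trained models (neither API is defined by the patent), `select_first_keypoint_group` is the helper sketched above, and `crop_nose_region` is sketched under Example two below.

```python
import numpy as np

def image_contains_face(image: np.ndarray, keypoint_model, classifier) -> bool:
    """Sketch of steps S101-S104 for an image that may hold several faces.

    `keypoint_model` is assumed to return one (98, 2) key point array per
    candidate face; `classifier` is assumed to map a cropped nose patch to
    True (qualified) or False (unqualified).
    """
    for keypoints in keypoint_model(image):      # step S102, run only once
        group = select_first_keypoint_group(keypoints)
        nose = crop_nose_region(image, group)    # step S103
        if classifier(nose):                     # step S104
            return True       # one qualified nose suffices
    return False              # every detection was unqualified: no face
```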
In summary, after an image to be detected is input into a face key point detection model, key points associated with the nose are selected to form a first key point group; a nose region is obtained according to the first key point group; the nose region is checked by a classification model to obtain a qualified/unqualified detection result; and the detection result is used as the basis for face judgment. Since only the nose region needs to be classified rather than all face key points being processed, the process is simple and the processing efficiency is high.
It is worth mentioning that the steps of the method are performed on the basis of the execution device. Specifically, the execution device may be a server, a cloud server, a client, a processor, or the like, but the execution device is not limited to the above type.
It will be appreciated that the steps illustrated in the flowcharts described above or in the flowcharts of the figures may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowcharts, in some cases, the steps illustrated or described may be performed in an order different than here.
As an optional implementation manner, in step S102, if the number of key points in the first key point group is less than a preset number, the first key point group is judged invalid; that is, step S103 and the subsequent steps are not performed for it, so as to avoid wasting resources of the execution device.
As an alternative implementation mode, the face key point detection model can adopt any one of a mobilenet-v1 model, a mobilenet-v2 model and a shuffle-net model. It can be understood that the mobilenet-v1 model, the mobilenet-v2 model and the shuffle-net model are all lightweight network models, which can reduce the amount of calculation and further improve the processing efficiency on the premise of meeting the precision. Of course, the face key point detection model is not limited to the above types, but a mobilenet-v2 model is preferably employed.
As an alternative embodiment, the classification model can adopt any one of a mobilenet-v1 model, a mobilenet-v2 model and a shuffle-net model. It can be understood that the mobilenet-v1 model, the mobilenet-v2 model and the shuffle-net model are all lightweight network models, which can reduce the amount of calculation and further improve the processing efficiency on the premise of meeting the precision. Of course, the classification model is not limited to the above type, but a mobilenet-v2 model is preferably employed.
The generation of the classification model is explained here: a training set that has undergone data cleaning and data preprocessing and comprises nose images and non-nose images is acquired; the images in the training set are used as the input of the classification model, and whether each image is qualified is used as the output, so as to train the classification model; during training, the depth, width and related parameters of the classification model are continuously adjusted, reducing its computation as much as possible while meeting the required precision. After training is completed, a test set is acquired; the test set has likewise undergone data cleaning and data preprocessing and comprises nose images and non-nose images, but the test set and the training set are mutually exclusive. The images of the test set are then input into the classification model, and whether its outputs are correct is judged, so as to test the classification model.
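A compact training-and-testing sketch along these lines, assuming PyTorch/torchvision are used (the patent names the model families but no framework); the directory layout, the 64×64 input size, the epoch count and the learning rate are all illustrative choices, not values from the text:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Hypothetical layout: data/train and data/test each hold the class
# subfolders nose/ and non_nose/; train and test are mutually exclusive.
tf = transforms.Compose([transforms.Resize((64, 64)), transforms.ToTensor()])
train_set = datasets.ImageFolder("data/train", transform=tf)
test_set = datasets.ImageFolder("data/test", transform=tf)

model = models.mobilenet_v2(num_classes=2)   # qualified vs. unqualified
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):
    for x, y in DataLoader(train_set, batch_size=32, shuffle=True):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

# Judge whether the outputs on the held-out test set are correct.
model.eval()
correct = total = 0
with torch.no_grad():
    for x, y in DataLoader(test_set, batch_size=32):
        correct += (model(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
print(f"test accuracy: {correct / total:.3f}")
```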
Example two
The second embodiment provides a face detection method based on nose features and builds on the first embodiment.
It should be noted that in the present embodiment, the face keypoint detection is implemented in the above 98-point labeling manner, fig. 2 is a flowchart of step S103 shown in the second embodiment of the present application, and fig. 3 is a schematic diagram of the face keypoint shown in the second embodiment of the present application.
Referring to fig. 2 and 3, the step S103 may include steps S201 to S203.
Step S201, taking the line connecting the 52nd key point and the 79th key point as the length, and taking the line segment obtained by translating the line connecting the 76th key point and the 82nd key point to the 79th key point as the width, to obtain a rectangular area. It is worth noting here that the first key point group includes the 52nd, 76th, 79th and 82nd key points.
Step S202, shrinking the rectangular area inward from the 52nd key point by [0, 0.1] of the length, from the 79th key point by [0, 0.2] of the length, and from the 76th and 82nd key points by [0, 0.2] of the width, respectively, to obtain a middle area. It should be noted that each of the above shrinking operations is performed on the rectangular area obtained in step S201, and their execution order is not limited, since the operations do not interfere with one another or affect the resulting middle area.
It should be noted that when the value of each shrinking operation is 0, the middle area equals the rectangular area; the rectangular area is therefore the maximum cropping range of the nose region, which ensures that the cropped nose region contains only nose features. The value of each shrinking operation is preferably 0.1, which increases the proportion of nose features as much as possible while keeping the nose features complete, and reduces the computation of the classification model.
Step S203, cropping the corresponding middle area from the image to be detected as the nose region. The cropping method is a conventional operation in the art and is not described in detail here.
Through the above technical solution, a nose region containing only nose features can be obtained, which meets the requirements of the classification model.
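As a rough, non-limiting sketch of steps S201 to S203: the patent constructs an oriented rectangle (length along the 52nd-79th line, width from the translated 76th-82nd segment); the Python sketch below substitutes the axis-aligned bounding box of the four key points, which coincides with that rectangle only for an upright face, and applies the preferred shrink value 0.1 on every side.

```python
import numpy as np

def crop_nose_region(image: np.ndarray, group: np.ndarray,
                     shrink: float = 0.1) -> np.ndarray:
    """Simplified take on steps S201-S203.

    `group` holds the (x, y) coordinates of key points 52, 76, 79 and 82,
    in that order. The axis-aligned bounding box of the four points stands
    in for the patent's oriented rectangle, and every side is shrunk
    inward by the preferred fraction 0.1 of the box dimension.
    """
    xs, ys = group[:, 0], group[:, 1]
    x0, x1 = xs.min(), xs.max()
    y0, y1 = ys.min(), ys.max()
    w, h = x1 - x0, y1 - y0
    x0, x1 = x0 + shrink * w, x1 - shrink * w   # shrink width-wise
    y0, y1 = y0 + shrink * h, y1 - shrink * h   # shrink length-wise
    return image[int(y0):int(y1), int(x0):int(x1)]  # step S203: crop
```

The same helper could be re-parameterised for the alternative key point groups described below.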
As an optional implementation manner, step S103 may also be carried out in the following way: the first key point group comprises the 52nd, 55th, 57th and 59th key points; the line connecting the 52nd and 57th key points is taken as the length, and the line connecting the 55th and 59th key points is taken as the width, to obtain a rectangular area; the rectangular area is shrunk from the 52nd key point by [0, 0.1] of the length, expanded toward the 57th key point by [0, 0.2] of the length, and expanded toward the 55th and 59th key points by [0.1, 0.4] of the width, respectively, to obtain a middle area; and the corresponding middle area is cropped from the image to be detected as the nose region.
Of course, step S103 is not limited to the above manners; it may also be adjusted based on a first key point group consisting of the 52nd, 57th, 76th and 82nd key points, and the like, as long as the nose region contains only nose features and the nose features are complete.
Example three
The third embodiment provides a face detection method based on nose features and builds on the first embodiment and/or the second embodiment.
When the classification model outputs a qualified detection result for the nose region, it is judged that the image to be detected includes a face. To facilitate subsequent face recognition, the method may further comprise a first adjusting step.
The first adjusting step includes:
and (4) scratching the face area according to all face key points output by the face key point detection model. That is, the above-mentioned face key point detection model preferably outputs all face key points, and screens all face key points to obtain a first key point group.
And adjusting the face area to a preset face size and using the face area as an input of face recognition. It can be understood that the preset size of the face is not limited herein, as long as the face region can be unified, so as to reduce the calculation amount of face recognition and improve the processing efficiency of face recognition.
As an optional implementation, the method may further include: adjusting the nose region to a preset nose size. This step is performed after step S103 and before step S104. The preset nose size is not limited, as long as the nose regions are unified, so as to reduce the computation of the classification model and improve the processing efficiency of face detection.
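A minimal sketch of this resizing step, assuming OpenCV is available and picking 64×64 and 112×112 as the preset nose and face sizes purely for illustration (the patent leaves both open):

```python
import cv2

PRESET_NOSE_SIZE = (64, 64)    # (width, height); illustrative only
PRESET_FACE_SIZE = (112, 112)  # illustrative only

def to_preset_size(region, size):
    """Unify a cropped region to a fixed size before feeding it to a model."""
    return cv2.resize(region, size, interpolation=cv2.INTER_LINEAR)
```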
Example four
The fourth embodiment provides a face detection device based on nose features, which is the virtual device structure of the foregoing embodiments. Fig. 4 is a block diagram of the face detection device based on nose features according to the fourth embodiment of the present application; as shown in fig. 4, the device includes an obtaining module 41, a processing module 42, a cropping module 43, and a detection module 44.
The obtaining module 41 is used for obtaining an image to be detected.
The processing module 42 is configured to input the image to be detected into the face key point detection model to obtain a first key point group, where all the key points in the first key point group are associated with the nose.
The cropping module 43 is configured to crop the nose region from the image to be detected according to the first key point group.
The detection module 44 is configured to input the nose region into the classification model to obtain a detection result indicating whether the nose region is qualified, and if so, determine that the image to be detected includes a human face.
The above modules may be functional modules or program modules, and may be implemented by software or hardware. Modules implemented by hardware may all be located in the same processor, or may be distributed across different processors in any combination.
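For illustration only, the four modules might be wired together as follows in Python; the class itself and the `camera` argument are hypothetical, and the helpers come from the sketches in the earlier embodiments:

```python
class NoseBasedFaceDetector:
    """Illustrative wiring of modules 41-44; not an implementation
    prescribed by the patent."""

    def __init__(self, keypoint_model, classifier, camera):
        self.keypoint_model = keypoint_model
        self.classifier = classifier
        self.camera = camera

    def obtain(self):                            # obtaining module 41
        return self.camera.read()

    def process(self, image):                    # processing module 42
        return [select_first_keypoint_group(k)
                for k in self.keypoint_model(image)]

    def crop(self, image, group):                # cropping module 43
        return crop_nose_region(image, group)

    def detect(self, image) -> bool:             # detection module 44
        return any(self.classifier(self.crop(image, g))
                   for g in self.process(image))
```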
Example five
The fifth embodiment provides an electronic device. Fig. 5 is a block diagram of the electronic device shown in the fifth embodiment of the present application; referring to fig. 5, the electronic device includes a memory and a processor, the memory stores a computer program, and the processor is configured to run the computer program so as to perform the face detection method based on nose features of any one of the above embodiments.
Optionally, the electronic device may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
In addition, in combination with the face detection method based on nose features in the foregoing embodiments, the fifth embodiment of the present application may provide a storage medium. The storage medium has a computer program stored thereon; when executed by a processor, the computer program implements the face detection method based on nose features of any of the above embodiments, the method including:
acquiring an image to be detected;
inputting the image to be detected into a face key point detection model to obtain a first key point group, wherein the key points in the first key point group are all associated with the nose;
cropping a nose region from the image to be detected according to the first key point group;
and inputting the nose region into the classification model to obtain a detection result indicating whether the nose region is qualified, and if so, judging that the image to be detected includes a face.
As shown in fig. 5, taking a processor as an example, the processor, the memory, the input device and the output device in the electronic device may be connected by a bus or other means, and fig. 5 takes the connection by a bus as an example.
The memory, which is a computer-readable storage medium, may include high-speed random access memory, non-volatile memory and the like, and may be used to store an operating system, software programs, computer-executable programs and a database, such as the program instructions/modules corresponding to the face detection method based on nose features of the embodiments of the present invention; it may further include internal memory, which may provide an operating environment for the operating system and the computer programs. In some examples, the memory may further include memory located remotely from the processor, and such remote memory may be connected to the electronic device through a network.
The processor, which provides computing and control capabilities, may include a Central Processing Unit (CPU), an Application-Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application. The processor executes the various functional applications and data processing of the electronic device by running the computer-executable programs, software programs, instructions and modules stored in the memory, that is, it implements the face detection method based on nose features of the first embodiment.
The output device of the electronic equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the electronic equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on a shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
The electronic device may further include a network interface/communication interface, the network interface of the electronic device being for communicating with an external terminal through a network connection. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Those skilled in the art will appreciate that the structure shown in fig. 5 is a block diagram of only a portion of the structure relevant to the present disclosure, and does not constitute a limitation on the electronic device to which the present disclosure applies, and that a particular electronic device may include more or less components than those shown in the drawings, or may combine certain components, or have a different arrangement of components.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium, and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include Read-Only Memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
It should be noted that, in the embodiment of the face detection method based on the nasal feature, each included unit and module are only divided according to functional logic, but are not limited to the above division, as long as the corresponding function can be realized; in addition, specific names of the functional units are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present invention.
Unless defined otherwise, technical or scientific terms referred to herein shall have the ordinary meaning as understood by those of ordinary skill in the art to which this application belongs. References to "a," "an," "the," and similar words throughout this application are not to be construed as limiting in number, and may refer to the singular or the plural. The terms "comprises," "comprising," "including," "has," "having," and any variations thereof, as referred to herein, are intended to cover a non-exclusive inclusion. References to "connected," "coupled," and the like in this application are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. The term "plurality" as referred to herein means two or more. "And/or" describes the association relationship of the associated objects, meaning that three relationships may exist; for example, "A and/or B" may mean that A exists alone, that both A and B exist, or that B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. References herein to the terms "first," "second," "third," and the like merely distinguish similar objects and do not denote a particular ordering of the objects.
The above examples only express several embodiments of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these fall within the scope of protection of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A face detection method based on nose features is characterized by comprising the following steps:
acquiring an image to be detected;
inputting the image to be detected into a face key point detection model to obtain a first key point group, wherein key points in the first key point group are all associated with the nose;
cropping a nose region from the image to be detected according to the first key point group;
and inputting the nose region into a classification model to obtain a detection result indicating whether the nose region is qualified, and if so, judging that the image to be detected includes a human face.
2. The method of claim 1, wherein the face key point detection model is any one of a mobilenet-v1 model, a mobilenet-v2 model and a shuffle-net model.
3. The method of claim 1, wherein the classification model is any one of a mobilenet-v1 model, a mobilenet-v2 model, and a shuffle-net model.
4. The method according to claim 1, wherein after the image to be detected is input into the face key point detection model, the face key point detection model outputs 98 key points, and the first key point group comprises a 52nd key point, a 76th key point, a 79th key point, and an 82nd key point.
5. The method of claim 4, wherein said cropping a nose region from the image to be detected according to the first key point group comprises:
taking the line connecting the 52nd key point and the 79th key point as the length, and taking the line segment obtained by translating the line connecting the 76th key point and the 82nd key point to the 79th key point as the width, to obtain a rectangular area;
shrinking the rectangular area inward from the 52nd key point by [0, 0.1] of the length, from the 79th key point by [0, 0.2] of the length, and from the 76th and 82nd key points by [0, 0.2] of the width, respectively, to obtain a middle area;
and cropping the corresponding middle area from the image to be detected as the nose region.
6. The method according to any one of claims 1 to 5, wherein in a case where it is determined that the image to be detected includes a human face, the method further comprises:
cropping the face region according to all face key points output by the face key point detection model;
and adjusting the face region to a preset face size for use as the input of face recognition.
7. The method of any of claims 1-5, wherein prior to inputting the nose region into a classification model, the method further comprises:
and adjusting the nose area to a preset nose size.
8. A face detection device based on nose features, the device comprising:
the acquisition module is used for acquiring an image to be detected;
the processing module is used for inputting the image to be detected into a face key point detection model to obtain a first key point group, and key points in the first key point group are all associated with the nose;
the cropping module is used for cropping the nose region from the image to be detected according to the first key point group;
and the detection module is used for inputting the nose region into a classification model to obtain a detection result of whether the nose region is qualified or not, and if so, judging that the image to be detected comprises a human face.
9. An electronic device comprising a memory and a processor, wherein the memory stores a computer program, and the processor is arranged to carry out the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method of any one of claims 1 to 7.
CN202011474602.2A 2020-12-14 2020-12-14 Face detection method and device based on nose features, electronic equipment and medium Pending CN112418184A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202011474602.2A | 2020-12-14 | 2020-12-14 | Face detection method and device based on nose features, electronic equipment and medium

Publications (1)

Publication Number | Publication Date
CN112418184A | 2021-02-26

Family

ID=74775771

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202011474602.2A (Pending, published as CN112418184A) | Face detection method and device based on nose features, electronic equipment and medium | 2020-12-14 | 2020-12-14

Country Status (1)

Country | Link
CN | CN112418184A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
US20150016687A1 * | 2012-03-26 | 2015-01-15 | Tencent Technology (Shenzhen) Company Limited | Method, system and computer storage medium for face detection
CN108090450A * | 2017-12-20 | 2018-05-29 | 深圳和而泰数据资源与云技术有限公司 | Face identification method and device
CN108038469A * | 2017-12-27 | 2018-05-15 | 百度在线网络技术(北京)有限公司 | Method and apparatus for detecting human body
CN109063542A * | 2018-06-11 | 2018-12-21 | 平安科技(深圳)有限公司 | Image identification method, device, computer equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party

Title
韩松; 潘纲; 王跃明; 吴朝晖: "三维鼻形：一种新的生物特征识别模式" [3D nose shape: a new biometric recognition modality], 计算机辅助设计与图形学学报 (Journal of Computer-Aided Design & Computer Graphics), no. 01, pages 38-42 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination