CN111079643B - Face detection method and device based on neural network and electronic equipment - Google Patents

Face detection method and device based on neural network and electronic equipment

Info

Publication number
CN111079643B
CN111079643B (application CN201911285280.4A)
Authority
CN
China
Prior art keywords
feature map
output
input
input feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911285280.4A
Other languages
Chinese (zh)
Other versions
CN111079643A (en)
Inventor
边旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sany Heavy Industry Co Ltd
Original Assignee
Sany Heavy Industry Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sany Heavy Industry Co Ltd
Priority to CN201911285280.4A
Publication of CN111079643A
Application granted
Publication of CN111079643B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the invention provides a face detection method and apparatus based on a neural network, and an electronic device, relating to the field of face recognition. The method comprises the following steps: determining a face image to be detected, and determining at least one first input feature map based on the face image, wherein m sub-regions are preset in each first input feature map; for each sub-region of each input feature map, performing feature extraction according to a preset small convolution kernel to obtain a first output feature map; for each input feature map, accumulating the first output feature maps corresponding to that input feature map to obtain a second output feature map; and performing face recognition based on the second output feature map of the at least one input feature map. The method achieves better GPU acceleration without additional loss of precision and improves detection efficiency, solving the prior-art problem that growing data volumes increase the amount of computation and reduce detection efficiency.

Description

Face detection method and device based on neural network and electronic equipment
Technical Field
The invention relates to the field of face detection, in particular to a face detection method and device based on a neural network and electronic equipment.
Background
The face detection algorithm is a key component in implementing a face recognition, comparison and verification system. With the development of deep learning, face detection algorithms at the present stage are largely implemented with convolutional neural networks.
As scenes become more diverse and complex and training data sets grow, convolutional neural networks become ever larger and require more computation and memory access. This inevitably reduces detection speed, and real-time detection is difficult to achieve even with Graphics Processing Unit (GPU) acceleration.
Disclosure of Invention
In view of the above, the present invention provides a face detection method and apparatus based on a neural network, and an electronic device.
In order to achieve the above object, the embodiments of the present invention adopt the following technical solutions:
in a first aspect, an embodiment of the present invention provides a face detection method based on a neural network, including:
determining a face image to be detected, and determining at least one first input feature map based on the face image, wherein each first input feature map is preset with m sub-regions;
for each sub-region of each input feature map, performing feature extraction according to a preset small convolution kernel to obtain a first output feature map;
for each input feature map, accumulating the first output feature maps corresponding to that input feature map to obtain a second output feature map;
and performing face recognition based on a second output feature map of at least one input feature map.
In an alternative embodiment, the preset small convolution kernel is a 3×3 kernel and m is 9; the step of accumulating the first output feature maps corresponding to the input feature map to obtain a second output feature map comprises:
accumulating the 9 first output feature maps corresponding to the input feature map pixel by pixel to obtain a second output feature map identical to the output of a 7×7 convolution kernel.
In an alternative embodiment, the number of the small convolution kernels is 9, and each of the small convolution kernels corresponds to the start coordinate and the end coordinate of one of the 9 sub-regions.
In an optional embodiment, the step of accumulating the first output feature maps corresponding to the input feature map to obtain a second output feature map includes:
accumulating, in an Eltwise layer, the 9 first output feature maps corresponding to the input feature map pixel by pixel to obtain a second output feature map identical to the output of a 7×7 convolution kernel.
In an alternative embodiment, the step of performing face recognition based on a second output feature map of at least one of the input feature maps includes:
performing fixed-size nearest neighbor extension upsampling on a second output feature map of at least one input feature map on a GPU acceleration engine to obtain at least one third input feature map;
and performing face recognition based on at least one third input feature map.
In a second aspect, an embodiment of the present invention provides a face detection apparatus based on a neural network, including:
the determining module is used for determining a face image to be detected and determining at least one first input feature map based on the face image, wherein m sub-regions are preset in each first input feature map;
the extraction module is used for performing feature extraction, according to a preset small convolution kernel, on each sub-region of each input feature map to obtain a first output feature map;
the accumulation module is used for accumulating the first output feature maps corresponding to each input feature map to obtain a second output feature map;
and the detection module is used for performing face recognition based on the second output feature map of the at least one input feature map.
In an alternative embodiment, the preset small convolution kernel is a 3×3 kernel and m is 9; the accumulation module is used for accumulating the 9 first output feature maps corresponding to the input feature map pixel by pixel to obtain a second output feature map identical to the output of a 7×7 convolution kernel.
In an alternative embodiment, the number of the small convolution kernels is 9, and each of the small convolution kernels corresponds to the start coordinate and the end coordinate of one of the 9 sub-regions.
In a third aspect, an embodiment of the present invention provides an electronic device, including a processor and a memory, where the memory stores machine executable instructions capable of being executed by the processor, and the processor can execute the machine executable instructions to implement the method described in any one of the foregoing embodiments.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the method according to any one of the foregoing embodiments.
The embodiment of the invention provides a face detection method and apparatus based on a neural network, an electronic device, and a computer-readable storage medium. The method comprises: determining a face image to be detected, and determining at least one first input feature map based on the face image, wherein m sub-regions are preset in each first input feature map; for each sub-region of each input feature map, performing feature extraction according to a preset small convolution kernel to obtain a first output feature map; for each input feature map, accumulating the first output feature maps corresponding to that input feature map to obtain a second output feature map; and performing face recognition based on the second output feature map of at least one input feature map. In this technical scheme, the original detection network is improved: preset small convolution kernels perform feature extraction on sub-regions of the input feature map, the results are accumulated, and face recognition is finally performed on the accumulated output. This yields better GPU acceleration without additional loss of precision, improves detection efficiency, and solves the prior-art problem that growing data volumes increase computation and reduce detection efficiency.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
Fig. 1 is a flowchart illustrating a neural network-based face detection method according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating a scenario for executing step S106 in FIG. 1;
FIG. 3 shows a detailed flowchart of step S108 in FIG. 1;
fig. 4 is a schematic diagram illustrating a face detection apparatus based on a neural network according to an embodiment of the present invention;
fig. 5 shows a schematic diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
It is noted that relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional like elements in a process, method, article, or apparatus that comprises the element.
Example 1
Referring to fig. 1, an embodiment of the present invention provides a face detection method based on a neural network, including:
step S102, determining a face image to be detected, and determining at least one first input feature map based on the face image, wherein each first input feature map is preset with m sub-regions;
step S104, for each sub-region of each input feature map, performing feature extraction according to a preset small convolution kernel to obtain a first output feature map;
step S106, for each input feature map, accumulating the first output feature maps corresponding to that input feature map to obtain a second output feature map;
and step S108, carrying out face recognition based on a second output feature map of the at least one input feature map.
The embodiment of the invention provides a face detection method based on a neural network. A face image to be detected is determined, and at least one first input feature map is determined based on the face image, wherein m sub-regions are preset in each first input feature map. Then, for each sub-region of each input feature map, feature extraction is performed according to a preset small convolution kernel to obtain a first output feature map. Next, for each input feature map, the first output feature maps corresponding to that input feature map are accumulated to obtain a second output feature map. Finally, face recognition is performed based on the second output feature map of at least one input feature map. The method achieves better GPU acceleration without additional loss of precision, improves detection efficiency, and solves the prior-art problem that growing data volumes increase computation and reduce detection efficiency.
In a possible embodiment, the preset small convolution kernel is a 3 × 3 small convolution kernel, and m is 9;
For step S106, accumulating the first output feature maps corresponding to the input feature map to obtain a second output feature map includes:
accumulating the 9 first output feature maps corresponding to the input feature map pixel by pixel to obtain a second output feature map identical to the output of a 7×7 convolution kernel.
In one possible implementation, there are 9 small convolution kernels, and each of the small convolution kernels corresponds to the start coordinate and the end coordinate of one of the 9 sub-regions.
In a possible implementation manner, in step S106 the first output feature maps corresponding to the input feature map are accumulated to obtain a second output feature map through the following step:
(1) accumulating, in the Eltwise layer, the 9 first output feature maps corresponding to the input feature map pixel by pixel to obtain a second output feature map identical to the output of a 7×7 convolution kernel.
For ease of understanding, the following briefly describes an execution scenario of step S106 with reference to fig. 2:
in fig. 2, newConv (new convolution kernel) is a 3 × 3 small convolution kernel, which is a new region-defined convolution operator: the small Convolution Kernel acts on a sub-region of an input feature map, which is equivalent to adding a start coordinate and an end coordinate of Conv (Convolution abbreviation), for example, a 7 × 7 Convolution Kernel (Kernel Convolution) is first expanded (split) into 9 small Convolution kernels of 3 × 3, the acting sub-region of each small Convolution Kernel is different, and then the output feature maps (first output feature maps) of the 9 small Convolution kernels are accumulated pixel by pixel through an Eltwise layer, so as to achieve the same output result as the 7 × 7 Convolution Kernel, that is, the same second output feature map.
In a possible implementation manner, referring to fig. 3, step S108 of performing face recognition based on a second output feature map of at least one of the input feature maps includes the following steps:
step S302, performing fixed-size nearest neighbor extension upsampling on a second output feature map of at least one input feature map on a GPU acceleration engine to obtain at least one third input feature map;
and step S304, carrying out face recognition based on at least one third input feature map.
Traditional bilinear-interpolation upsampling of feature maps to different sizes has to be re-implemented in the GPU acceleration engine and is computationally expensive.
To address this, the embodiment of the invention uses fixed-size, 2x nearest-neighbor expansion upsampling: since the deep feature map that needs to be enlarged is approximately half the size of the target shallow feature map, it only needs to be doubled in size, and each pixel in the new feature map can be mapped back to the nearest pixel value of the original feature map by the following formula:
NewPixel(x,y)=SrcPixel(int(x/2),int(y/2));
where x and y denote coordinates. Because this nearest-neighbor expansion upsampling involves only GPU memory reads and writes and very simple coordinate arithmetic, it avoids the multiply-add operations on original pixel values that traditional bilinear-interpolation upsampling needs to generate each new pixel, so a better acceleration effect is obtained on the GPU acceleration engine while detection accuracy remains high.
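As a minimal single-channel NumPy sketch (an illustration only, not the patent's GPU implementation), the formula can be realized as follows; the explicit loop mirrors the per-pixel mapping, whereas a real engine would perform the same copies in parallel:

import numpy as np

def nearest_upsample_2x(src):
    # Fixed-size 2x nearest-neighbor expansion:
    # NewPixel(x, y) = SrcPixel(int(x / 2), int(y / 2))
    h, w = src.shape
    out = np.empty((2 * h, 2 * w), dtype=src.dtype)
    for y in range(2 * h):
        for x in range(2 * w):
            out[y, x] = src[y // 2, x // 2]   # a copy; no multiply-add on pixel values
    return out

deep_feature_map = np.arange(6, dtype=np.float32).reshape(2, 3)
print(nearest_upsample_2x(deep_feature_map))
# Each source pixel is repeated into a 2x2 block of the output.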
The neural-network-based face detection provided by the embodiment of the invention takes the operating characteristics of the GPU acceleration engine into account and reworks the operations in the network structure that are slow and gain little from acceleration: a region-defined convolution operator is proposed, and the large convolution kernel is split into several small convolution kernels whose outputs are superimposed in an Eltwise layer, so that the computational accuracy of the large convolution kernel is retained while a better GPU acceleration effect is obtained. Meanwhile, given the scale relationship between face detection feature maps, the simpler fixed-size nearest-neighbor expansion upsampling adapts better to GPU acceleration. In addition to the better acceleration effect and better face detection performance in real scenes, the method achieves higher GPU utilization and can support the deployment of more video streams.
Example 2
Based on the same inventive concept, the embodiment of the present application further provides a face detection apparatus based on a neural network, corresponding to the face detection method described above. Since the principle by which the apparatus solves the problem is similar to that of the method, the implementation of the apparatus may refer to the implementation of the method, and repeated details are not described again.
Fig. 4 is a schematic diagram of a face detection apparatus based on a neural network according to an embodiment of the present application.
Referring to fig. 4, the apparatus includes: a determination module 401, an extraction module 402, an accumulation module 403 and a detection module 404;
the determining module 401 is configured to determine a face image to be detected, and determine at least one first input feature map based on the face image, where each first input feature map is preset with m sub-regions;
an extracting module 402, configured to perform feature extraction on each sub-region of each input feature map according to a preset small convolution kernel to obtain a first output feature map;
an accumulation module 403, configured to accumulate, for each input feature map, a first output feature map corresponding to the input feature map to obtain a second output feature map;
a detection module 404, configured to perform face recognition based on a second output feature map of the at least one input feature map.
In an alternative embodiment, the preset small convolution kernel is a 3×3 kernel and m is 9; the accumulation module 403 is configured to accumulate the 9 first output feature maps corresponding to the input feature map pixel by pixel to obtain a second output feature map identical to the output of a 7×7 convolution kernel.
In an alternative embodiment, the number of the small convolution kernels is 9, and each of the small convolution kernels corresponds to the start coordinate and the end coordinate of one of the 9 sub-regions.
In an alternative embodiment, the accumulation module 403 is configured to accumulate, in an Eltwise layer, the 9 first output feature maps corresponding to the input feature map pixel by pixel to obtain a second output feature map that is the same as the output result of the 7 × 7 convolution kernel.
In an alternative embodiment, the detection module 404 is configured to perform fixed-size nearest neighbor extension upsampling on a second output feature map of the at least one input feature map on the GPU acceleration engine to obtain at least one third input feature map; and performing face recognition based on at least one third input feature map.
The face detection device based on the neural network provided by the embodiment of the application has the same technical characteristics as the face detection method based on the neural network provided by the embodiment, so that the same technical problems can be solved, and the same technical effect can be achieved.
Referring to fig. 5, an embodiment of the present invention further provides an electronic device 100, including:
a processor 41, a memory 42, and a bus 43. The memory 42 is used for storing execution instructions and includes an internal memory 421 and an external memory 422. The internal memory 421 temporarily stores operation data of the processor 41 and data exchanged with the external memory 422, such as a hard disk; the processor 41 exchanges data with the external memory 422 through the internal memory 421. When the electronic device 100 operates, the processor 41 communicates with the memory 42 through the bus 43, so that the processor 41 executes the following instructions in user mode:
determining a face image to be detected, and determining at least one first input feature map based on the face image, wherein m sub-regions are preset in each first input feature map; for each sub-region of each input feature map, performing feature extraction according to a preset small convolution kernel to obtain a first output feature map; for each input feature map, accumulating the first output feature maps corresponding to that input feature map to obtain a second output feature map; and performing face recognition based on the second output feature map of at least one input feature map.
Optionally, in the instructions executed by the processor 41, the preset small convolution kernel is a 3×3 kernel and m is 9; the step of accumulating the first output feature maps corresponding to the input feature map to obtain a second output feature map comprises: accumulating the 9 first output feature maps corresponding to the input feature map pixel by pixel to obtain a second output feature map identical to the output of a 7×7 convolution kernel.
Optionally, in the instructions executed by the processor 41, the number of small convolution kernels is 9, and each small convolution kernel corresponds to the start coordinate and the end coordinate of one of the 9 sub-regions.
Optionally, in the instructions executed by the processor 41, the step of accumulating the first output feature maps corresponding to the input feature map to obtain a second output feature map includes: accumulating, in an Eltwise layer, the 9 first output feature maps corresponding to the input feature map pixel by pixel to obtain a second output feature map identical to the output of a 7×7 convolution kernel.
Optionally, the step of performing, in the instructions executed by the processor 41, face recognition based on a second output feature map of the at least one input feature map includes: performing fixed-size nearest neighbor extension upsampling on a second output feature map of at least one input feature map on a GPU acceleration engine to obtain at least one third input feature map; and performing face recognition based on at least one third input feature map.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program executes the steps of the method provided in the above-mentioned embodiment.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative and, for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, each functional module or unit in each embodiment of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions may be stored in a computer-readable storage medium if they are implemented in the form of software functional modules and sold or used as separate products. Based on such understanding, the technical solution of the present invention or a part of the technical solution that contributes to the prior art in essence can be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a smart phone, a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention.

Claims (8)

1. A face detection method based on a neural network is characterized by comprising the following steps:
determining a face image to be detected, and determining at least one input feature map based on the face image, wherein each input feature map is preset with m sub-regions;
for each sub-region of each input feature map, performing feature extraction according to a preset small convolution kernel to obtain a first output feature map; wherein each of the small convolution kernels acts on a different one of the sub-regions of the input feature map, each of the small convolution kernels corresponding to a start coordinate and an end coordinate of the corresponding sub-region;
for each input feature map, performing pixel-by-pixel accumulation on the first output feature maps corresponding to the m sub-regions of the input feature map to obtain a second output feature map;
and performing face recognition based on a second output feature map of at least one input feature map.
2. The method according to claim 1, wherein the preset small convolution kernel is a 3×3 kernel and m is 9; the step of accumulating the first output feature maps corresponding to the input feature map to obtain a second output feature map comprises:
accumulating the 9 first output feature maps corresponding to the input feature map pixel by pixel to obtain a second output feature map identical to the output of a 7×7 convolution kernel.
3. The method of claim 2, wherein the step of accumulating the first output feature maps corresponding to the input feature map to obtain a second output feature map comprises:
accumulating, in an Eltwise layer, the 9 first output feature maps corresponding to the input feature map pixel by pixel to obtain a second output feature map identical to the output of a 7×7 convolution kernel.
4. The method of claim 1, wherein the step of performing face recognition based on a second output feature map of the at least one input feature map comprises:
performing fixed-size nearest neighbor extension upsampling on a second output feature map of at least one input feature map on a GPU acceleration engine to obtain at least one third input feature map;
and performing face recognition based on at least one third input feature map.
5. A face detection device based on a neural network is characterized by comprising:
the determining module is used for determining a face image to be detected and determining at least one input feature map based on the face image, wherein m sub-regions are preset in each input feature map;
the extraction module is used for performing feature extraction, according to a preset small convolution kernel, on each sub-region of each input feature map to obtain a first output feature map; wherein each of the small convolution kernels acts on a different one of the sub-regions of the input feature map, each of the small convolution kernels corresponding to a start coordinate and an end coordinate of the corresponding sub-region;
the accumulation module is used for accumulating, pixel by pixel, the first output feature maps corresponding to the m sub-regions of the input feature map to obtain a second output feature map;
and the detection module is used for carrying out face recognition based on a second output feature map of the at least one input feature map.
6. The apparatus of claim 5, wherein the preset small convolution kernel is a 3×3 kernel and m is 9; and the accumulation module is used for accumulating the 9 first output feature maps corresponding to the input feature map pixel by pixel to obtain a second output feature map identical to the output of a 7×7 convolution kernel.
7. An electronic device comprising a processor and a memory, the memory storing machine executable instructions executable by the processor to implement the method of any one of claims 1-4.
8. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1-4.
CN201911285280.4A 2019-12-13 2019-12-13 Face detection method and device based on neural network and electronic equipment Active CN111079643B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911285280.4A CN111079643B (en) 2019-12-13 2019-12-13 Face detection method and device based on neural network and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911285280.4A CN111079643B (en) 2019-12-13 2019-12-13 Face detection method and device based on neural network and electronic equipment

Publications (2)

Publication Number Publication Date
CN111079643A CN111079643A (en) 2020-04-28
CN111079643B true CN111079643B (en) 2023-04-07

Family

ID=70314505

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911285280.4A Active CN111079643B (en) 2019-12-13 2019-12-13 Face detection method and device based on neural network and electronic equipment

Country Status (1)

Country Link
CN (1) CN111079643B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111291635A (en) * 2020-01-19 2020-06-16 深圳壹账通智能科技有限公司 Artificial intelligence detection method and device, terminal and computer readable storage medium
CN114627341A (en) * 2022-01-27 2022-06-14 北京旷视科技有限公司 Model training method and image processing method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106599883A (en) * 2017-03-08 2017-04-26 王华锋 Face recognition method capable of extracting multi-level image semantics based on CNN (convolutional neural network)
CN108363962A (en) * 2018-01-25 2018-08-03 南京邮电大学 A kind of method for detecting human face and system based on multi-level features deep learning
CN110210457A (en) * 2019-06-18 2019-09-06 广州杰赛科技股份有限公司 Method for detecting human face, device, equipment and computer readable storage medium
CN110321872A (en) * 2019-07-11 2019-10-11 京东方科技集团股份有限公司 Facial expression recognizing method and device, computer equipment, readable storage medium storing program for executing

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109493347B (en) * 2017-09-12 2021-03-23 深圳科亚医疗科技有限公司 Method and system for segmenting sparsely distributed objects in an image
CN107644209A (en) * 2017-09-21 2018-01-30 百度在线网络技术(北京)有限公司 Method for detecting human face and device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106599883A (en) * 2017-03-08 2017-04-26 王华锋 Face recognition method capable of extracting multi-level image semantics based on CNN (convolutional neural network)
CN108363962A (en) * 2018-01-25 2018-08-03 南京邮电大学 A kind of method for detecting human face and system based on multi-level features deep learning
CN110210457A (en) * 2019-06-18 2019-09-06 广州杰赛科技股份有限公司 Method for detecting human face, device, equipment and computer readable storage medium
CN110321872A (en) * 2019-07-11 2019-10-11 京东方科技集团股份有限公司 Facial expression recognizing method and device, computer equipment, readable storage medium storing program for executing

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Facial expression recognition method based on deep convolutional neural network combined with improved LBP features; Fanzhi Kong; Personal and Ubiquitous Computing; 2019-06-08; pp. 1-9 *
A face detection model based on a multi-scale convolutional neural network; Zhou Anzhong et al.; Computer Engineering and Applications; 2018-04-13; Vol. 54, No. 14, pp. 168-196 *
Facial expression recognition based on convolutional neural networks; Xu Xinfei et al.; Foreign Electronic Measurement Technology; 2018-01-15; Vol. 37, No. 01, pp. 106-110 *
A survey of face recognition technology based on deep convolutional neural networks; Jing Chenkai et al.; Computer Applications and Software; 2018-01-31; Vol. 35, No. 1, pp. 223-231 *

Also Published As

Publication number Publication date
CN111079643A (en) 2020-04-28

Similar Documents

Publication Publication Date Title
CN110097086B (en) Image generation model training method, image generation method, device, equipment and storage medium
CN109583509B (en) Data generation method and device and electronic equipment
KR20170131669A (en) Method and apparatus for generating composite image
CN111079643B (en) Face detection method and device based on neural network and electronic equipment
CN107944381B (en) Face tracking method, face tracking device, terminal and storage medium
CN109636730B (en) Method for filtering pseudo pixels in a depth map
CN111709415B (en) Target detection method, device, computer equipment and storage medium
CN110782398B (en) Image processing method, generative countermeasure network system and electronic device
KR102239588B1 (en) Image processing method and apparatus
WO2017070923A1 (en) Human face recognition method and apparatus
KR20200106104A (en) Method and apparatus for high speed object detection using artificial neural network
CN111768353A (en) Hole filling method and device for three-dimensional model
CN114493988A (en) Image blurring method, image blurring device and terminal equipment
CN109919190B (en) Straight line segment matching method, device, storage medium and terminal
CN111967478B (en) Feature map reconstruction method, system, storage medium and terminal based on weight overturn
CN111027670B (en) Feature map processing method and device, electronic equipment and storage medium
CN112598611A (en) Method and device for synthesizing and identifying embossed bank card number image
JP2021144428A (en) Data processing device and data processing method
CN112232361B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN107622498B (en) Image crossing processing method and device based on scene segmentation and computing equipment
CN112348069B (en) Data enhancement method, device, computer readable storage medium and terminal equipment
CN112801045B (en) Text region detection method, electronic equipment and computer storage medium
CN109741264B (en) Image over-representation method and device, electronic equipment and readable storage medium
CN111160265B (en) File conversion method and device, storage medium and electronic equipment
CN114399563A (en) Noise image generation method, device, equipment and medium, neural network training method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant